id stringlengths 3 9 | source stringclasses 1 value | version stringclasses 1 value | text stringlengths 1.54k 298k | added stringdate 1993-11-25 05:05:38 2024-09-20 15:30:25 | created stringdate 1-01-01 00:00:00 2024-07-31 00:00:00 | metadata dict |
|---|---|---|---|---|---|---|
49310785 | pes2o/s2orc | v3-fos-license | 5G New Radio: Unveiling the Essentials of the Next Generation Wireless Access Technology
The 5th generation (5G) wireless access technology, known as new radio (NR), will address a variety of usage scenarios from enhanced mobile broadband to ultra-reliable low-latency communications to massive machine type communications. Key technology features include ultra-lean transmission, support for low latency, advanced antenna technologies, and spectrum flexibility including operation in high frequency bands and inter-working between high and low frequency bands. This article provides an overview of the essentials of the state of the art in 5G wireless technology represented by the 3GPP NR technical specifications, with a focus on the physical layer. We describe the fundamental concepts of 5G NR, explain in detail the design of physical channels and reference signals, and share the various design rationales influencing standardization.
I. THE BIRTH OF 3GPP 5G NEW RADIO
The 5th generation (5G) wireless access technology, known as new radio (NR), will address a variety of usage scenarios from enhanced mobile broadband (eMBB) to ultra-reliable lowlatency communications (URLLC) to massive machine type communications (mMTC). NR can meet the performance requirements set by the international telecommunication union (ITU) for international mobile telecommunications for the year 2020 (IMT-2020) [1].
The third-generation partnership project (3GPP) is a global standard-development organization and has been developing 5G NR over the past few years. After initial studies [2]- [4], 3GPP approved a work item in March 2017 for NR specifications as part of Release 15 [5]. At the same meeting, 3GPP agreed to a proposal to accelerate the 5G schedule to complete non-standalone (NSA) NR by December 2017, while standalone (SA) NR was scheduled to be completed by June 2018. In NSA operation, long-term evolution (LTE) is used for initial access and mobility handling while the SA version can be deployed independently from LTE. A major milestone was reached in December 2017 with the approval of the NSA NR specifications and the SA version was completed in June 2018. The last step for Rel-15 is a late drop that will be completed by December 2018. The late drop will include more architecture options, e.g., the possibility to connect 5G NodeBs (gNB) to the evolved packet core (EPC) and operating NR and LTE in multiconnectivity mode wherein NR is the master node and LTE is the secondary node.
Key NR features include ultra-lean transmission [6], support for low latency, advanced antenna technologies, and spectrum flexibility including operation in high frequency bands, interworking between high and low frequency bands, and dynamic time division multiplexing (TDD). A high-level overview of these technology components and NR design principles is provided in [7]. In contrast, the objective of this article is to delve into the detailed NR technical specifications (TS) to unveil the essentials of NR design, while keeping the overall contents at a level accessible for an audience working in the wireless communications and networking communities.
The radio interface of NR consists of the physical layer (Layer 1) and higher layers such as medium access control and radio resource control (RRC). Physical layer specifications are described in the TS 38.200 series [8]- [13], and higher layer specifications are described in the TS 38.300 series (see, e.g. [14] for RRC specifications). We will mainly focus on the physical layer in this article and keep the treatment of higher layers to a minimum.
The remainder of this article is organized as follows. In Section II, we introduce basic NR design and terminologies. In Section III, we describe the synchronization signals (SS), physical broadcast channel (PBCH), and physical random access channel (PRACH). We present the physical downlink shared channel (PDSCH) and physical uplink shared channel (PUSCH) in Section IV, and the physical downlink control channel (PDCCH) and physical uplink control channel (PUCCH) in Section V. The design of reference signals is described in Section VI, followed by our concluding remarks in Section 0.
II. FUNDAMENTALS OF 5G NEW RADIO
In this section, we present the fundamental concepts of 5G NR design and basic terminologies, with an illustration of the frame structure given in Figure 1.
A. Waveform, Numerology, and Frame Structure
The choice of radio waveform is the core physical layer decision for any wireless access technology. After assessments of all the waveform proposals, 3GPP agreed to adopt orthogonal frequency division multiplexing (OFDM) with a cyclic-prefix (CP) for both DL and UL transmissions. CP-OFDM can enable low implementation complexity and low cost for wide bandwidth operations and multiple-input multiple-output (MIMO) technologies. NR also supports the use of discrete Fourier transform (DFT) spread OFDM (DFT-S-OFDM) in the uplink to improve coverage. GHz. • FR2: 24.25 GHz -52.6 GHz, commonly referred to as millimeter wave. Scalable numerologies are key to support NR deployment in such a wide range of spectrum. NR adopts flexible subcarrier spacing of 2 ⋅ 15 kHz ( = 0, 1, … , 4) scaled from the basic 15 kHz subcarrier spacing in LTE. Accordingly, the CP is scaled down by a factor of 2 − from the LTE CP length of 4.7 μs. This scalable design allows support for a wide range of deployment scenarios and carrier frequencies. At lower frequencies, below 6 GHz, cells can be larger and subcarrier spacings of 15 kHz and 30 kHz are suitable. At higher carrier frequencies, phase noise becomes more problematic and in FR2, NR supports 60 kHz and 120 kHz for data channels and 120 kHz and 240 kHz for the SS/PBCH block (SSB) used for initial access. At higher frequencies, cells and delay spread are typically smaller and the CP lengths provided by the 60 and 120 kHz numerologies are sufficient.
A frame has a duration of 10 ms and consists of 10 subframes. This is the same as in LTE, facilitating NR and LTE coexistence. Each subframe consists of 2 slots of 14 OFDM symbols each. Although a slot is a typical unit for transmission upon which scheduling operates, NR enables transmission to start at any OFDM symbol and last only as many symbols as needed for the communication. This type of "mini-slot" transmission can thus facilitate very low latency for critical data as well as minimize interference to other links per the lean carrier design principle that aims at minimizing transmissions. Latency optimization has been an important consideration in NR. Many other tools besides "mini-slot" transmission have been introduced in NR to reduce latency, as detailed throughout this article.
B. Resource, Carrier, and Bandwidth Part
A resource block (RB) consists of 12 consecutive subcarriers in the frequency domain. A single NR carrier in Release-15 is limited to 3300 active subcarriers and to at most 400 MHz bandwidth. The maximum bandwidth in FR1 is 100 MHz, and the maximum bandwidth in FR2 is 400 MHz. Both are much greater than the maximum LTE bandwidth of 20 MHz. Despite wide bandwidth, the ultra-lean design in NR minimizes alwayson transmissions, leading to higher network energy efficiency and lower interference.
Operation in millimeter wave bands benefits significantly when complemented by a low frequency carrier to ensure good coverage, especially in the uplink. This can be achieved with the carrier aggregation framework in NR, which is similar to the corresponding framework in LTE. Carrier aggregation provides a tool to combine spectrum in multiple bands. NR supports the possibility to have an NR carrier and an LTE carrier overlapping with each other in frequency, thereby enabling dynamic sharing of spectrum between NR and LTE. This facilitates a smooth migration to NR from LTE. Solutions specified to allow this type of operation are the ability for NR Figure 1: Illustration of 5G NR frame structure and basic terminologies PDSCH to map around LTE cell specific reference signals (CRS), and the possibility of flexible placements of DL control channels, initial access related reference signals and data channels to minimize collisions with LTE reference signals. NR also supports a so-called supplementary uplink (SUL) which can be used as a low-band complement to the cell's UL when operating in high frequency bands and a supplementary DL (SDL), that can be used, for example, in DL only spectrum.
To allow good forward compatibility support in NR, it is possible to configure certain sets of resources to be unused in any PDSCH transmission. This will allow 3GPP to develop new physical layer solutions for currently unknown use cases.
For a carrier with a given subcarrier spacing, the available radio resources in a subframe of duration 1 ms can be thought of as a resource grid composed of subcarriers in frequency and OFDM symbols in time. Accordingly, each resource element (RE) in the resource grid occupies one subcarrier in frequency and one OFDM symbol in time.
To reduce the device power consumption, a user equipment (UE) may be active on a wide bandwidth in case of bursty traffic for a short time, while being active on a narrow bandwidth for the remaining. This is commonly referred to as bandwidth adaptation and is addressed in NR by a new concept known as bandwidth part. A bandwidth part is a subset of contiguous RBs on the carrier. Up to four bandwidth parts can be configured in the UE for each of the UL and DL, but at a given time, only one bandwidth part is active per transmission direction. Thus, the UE can receive on a narrow bandwidth part and, when needed, the network can dynamically inform the UE to switch to a wider bandwidth for reception.
C. Modulation, Channel Coding, and Slot Configuration
The modulation schemes in NR are similar to LTE, including binary and quadrature phase shift keying (B/QPSK) and quadrature amplitude modulation (QAM) of orders 16, 64, and 256 with binary reflected Gray mapping. NR control channels use Reed-Muller block codes and cyclic redundancy check (CRC) assisted polar codes (versus tail-biting convolutional codes in LTE). NR data channels use rate compatible quasicyclic low-density parity-check (LDPC) codes (versus turbo codes in LTE).
The duplexing options supported in NR include frequency division duplex (FDD), TDD with semi-statically configured UL/DL configuration, and dynamic TDD. In TDD spectrum, for small/isolated cells it is possible to use dynamic TDD to adapt to traffic variations, while for large over-the-rooftop cells, semi-static TDD may be more suitable for handling interference issues than fully dynamic TDD. If a slot configuration is not configured, all the resources are considered flexible by default. Whether a symbol is used for DL or UL transmission can be dynamically determined according to layer 1/2 signaling of DL control information (DCI). This leads to a dynamic TDD system.
A. Synchronization Signals and Physical Broadcast Channel
Recall that the combination of SS and PBCH is referred to as SSB in NR. The subcarrier spacing of SSB can be 15 or 30 kHz in FR1 and 120 kHz or 240 kHz in FR2. By detecting SS, a UE can obtain the physical cell identity, achieve downlink synchronization in both time and frequency domain, and acquire the timing for PBCH. PBCH carries the very basic system information.
NR SS consists of primary SS (PSS) and secondary SS (SSS). Due to the absence of frequent static reference signals to aid tracking, there could be larger initial frequency errors between the gNB and UEs as compared to LTE, especially for low-cost UEs operating in higher frequencies. To fix the time and frequency offset ambiguity problem of traditional Zadoff-Chu sequence-based LTE PSS, a BPSK modulated m-sequence of length 127 is used for NR PSS. NR SSS is generated by using BPSK modulated Gold sequence of length 127. PSS and SSS together can be used to indicate a total of 1008 different physical cell identities.
An SSB is mapped to 4 OFDM symbols in the time domain and 240 contiguous subcarriers (20 RBs) in the frequency domain, as illustrated in Figure 2. To support beamforming for initial access, a new concept, SS burst set, is introduced in NR to support possible beam sweeping for SSB transmission. To minimize always-on transmissions [6], multiple SSBs are transmitted in a localized burst set in conjunction with a sparse burst set periodicity (default at 20 ms). Within an SS burst set period, up to 64 SSBs can be transmitted in different beams. The transmission of SS blocks within a SS burst set is confined to a 5 ms window. The set of possible SSB time locations within an SS burst set depends on the numerology which in most cases is uniquely identified by the frequency band. The frequency location of SSB is not necessarily in the center of the system bandwidth and is configured by higher layer parameters to support sparser search raster for SSB detection. A sparser raster in frequency is required to compensate for the increased search time due to the sparser SSB periodicity.
B. Physical Random Access Channel
PRACH is used to transmit a random-access preamble from a UE to indicate to the gNB a random-access attempt and to assist the gNB to adjust the uplink timing of the UE, among other parameters. Like in LTE, Zadoff-Chu sequences are used for generating NR random-access preambles due to their favorable properties, including constant amplitude before and after DFT operation, zero cyclic auto-correlation and low crosscorrelation. In contrast to LTE, NR random-access preamble supports two different sequence lengths with different format configurations, as shown in Figure 3, to handle the wide range of deployments for which NR is designed.
For the long sequence of length 839, four preamble formats that originated from the LTE preambles are supported, mainly targeting large cell deployment scenarios. These formats can only be used in FR1 and have a subcarrier spacing of 1.25 or 5 kHz.
For the short sequence of length 139, nine different preamble formats are introduced in NR, mainly targeting the small/normal cell and indoor deployment scenarios. The short preamble formats can be used in both FR1 with subcarrier spacing of 15 or 30 kHz and FR2 with subcarrier spacing of 60 or 120 kHz. In contrast to LTE, for the design of the short preamble formats, the last part of each OFDM symbol acts as a CP for the next OFDM symbol and the length of a preamble OFDM symbol equals the length of data OFDM symbols. There are several benefits of this new design. Firstly, it allows the gNB receiver to use the same fast Fourier transform (FFT) for data and random-access preamble detection. Secondly, due to the composition of multiple shorter OFDM symbols per PRACH preamble, the new short preamble formats are more robust against time varying channels and frequency errors. Thirdly, it supports the possibility of analog beam sweeping during PRACH reception such that the same preamble can be received with different beams at the gNB.
A. Physical Downlink Shared Channel
PDSCH is used for the transmission of DL user data, UEspecific higher layer information, system information, and paging.
For transmission of a DL transport block (payload for physical layer), a transport block CRC is first appended to provide error detection, followed by a LDPC base graph selection. NR supports two LDPC base graphs, one optimized for small transport blocks and one for larger transport blocks. Then segmentation of the transport block into code blocks and code block CRC attachment are performed. Each code block is individually LDPC encoded. The LDPC coded blocks are then individually rate matched. Finally, code block concatenation is performed to create a codeword for transmission on the PDSCH. Up to 2 codewords can be transmitted simultaneously on the PDSCH.
The contents of each codeword are scrambled and modulated to generate a block of complex-valued modulation symbols. The symbols are mapped on up to 4 MIMO layers. A PDSCH can have two codewords to support up to 8-layer transmission. The layers are mapped to antenna ports in a specification transparent manner (non-codebook based), hence how beamforming or MIMO precoding operation is performed is up to network implementation and transparent to the UE. For each of the antenna ports (i.e. layers) used for transmission of the PDSCH, the symbols are mapped to RBs. When receiving unicast PDSCH, UE can be informed that certain resources are not available for PDSCH. These unavailable resources may include configurable rate matching patterns with RB and symbol level granularity or RE level granularity. The latter is used to map around LTE CRS in case NR and LTE share the same carrier. This facilitates both forward and backward capability, since the network can blank radio resources to serve future unknown services while not causing backward compatibility issues.
Physical layer processing for NR PDSCH is summarized in in the left part of Figure 4.
B. Physical Uplink Shared Channel
PUSCH is used for the transmission of UL shared channel (UL-SCH) and layer 1/2 control information. The UL-SCH is the transport channel used for transmitting an UL transport block. The physical layer processing of an UL transport block is similar to the processing of a DL transport block, as summarized in right part of Figure 4.
The contents of the codeword are scrambled and modulated to generate a block of complex-valued modulation symbols. The symbols are then mapped onto one or several layers. PUSCH supports a single codeword that can be mapped up to 4 layers. In case of a single layer transmission only, DFT transform precoding can optionally be applied if enabled. For the layers to antenna ports mapping, both non-codebook-based transmission and codebook-based transmission are supported in the UL. For each of the antenna ports used for transmission of the physical channel, the symbols are mapped to RBs. In contrast to LTE, the mapping is done in frequency before time to enable early decoding at the receiver.
A. Physical Downlink Control Channel
PDCCH is used to carry DCI such as downlink scheduling assignments and uplink scheduling grants. An illustration of NR PDCCH is given in the upper part of Figure 5.
Legacy LTE control channels are always distributed across the entire system bandwidth, making it difficult to control intercell interference [6]. NR PDCCHs are specifically designed to transmit in a configurable control resource set (CORESET). A CORESET is analogous to the control region in LTE but is generalized in the sense that the set of RBs and the set of OFDM symbols in which it is located are configurable with the corresponding PDCCH search spaces. Such configuration flexibilities of control regions including time, frequency, numerologies, and operating points enable NR to address a wide range of use cases.
Frequency allocation in a CORESET configuration can be contiguous or non-contiguous. CORESET configuration in time spans 1-3 consecutive OFDM symbols. The REs in a CORESET are organized in RE groups (REGs). Each REG consists of the 12 REs of one OFDM symbol in one RB. A PDCCH is confined to one CORESET and transmitted with its own demodulation reference signal (DMRS) enabling UEspecific beamforming of the control channel. A PDCCH is carried by 1, 2, 4, 8 or 16 control channel elements (CCEs) to Figure 5: Illustration of 5G NR PDCCH and PUCCH accommodate different DCI payload size or different coding rates. Each CCE consists of 6 REGs. The CCE-to-REG mapping for a CORESET can be interleaved (for frequency diversity) or non-interleaved (for localized beam-forming). A UE is configured to blindly monitor a number of PDCCH candidates of different DCI formats and different aggregation levels. The blind decoding processing has an associated UE complexity cost but is required to provide flexible scheduling and handling of different DCI formats with lower overhead.
B. Physical Uplink Control Channel
PUCCH is used to carry uplink control information (UCI) such as hybrid automatic repeat request (HARQ) feedback, channel state information (CSI), and scheduling request (SR). An illustration of NR PUCCH is given in the bottom part of Figure 5.
Unlike LTE PUCCH that is located at the edges of the carrier bandwidth and is designed with fixed duration and timing, NR PUCCH is flexible in its time and frequency allocation. That allows supporting UEs with smaller bandwidth capabilities in an NR carrier and efficient usage of available resources with respect to coverage and capacity. NR PUCCH design is based on 5 PUCCH formats. PUCCH formats 0 and 2, a.k.a. short PUCCHs, use 1 or 2 OFDM symbols while PUCCH formats 1, 3 and 4, a.k.a. long PUCCHs, can use 4 to 14 OFDM symbols. PUCCH formats 0 and 1 carry UCI payloads of 1 or 2 bits while other formats are used for carrying UCI payloads of more than 2 bits. In PUCCH formats 1, 3 and 4, symbols with DMRS are time division multiplexed with UCI symbols to maintain low low peak-to-average-power-ratio (PAPR) while in format 2, DMRS is frequency-multiplexed with data-carrying subcarriers. Multi-user multiplexing on the same time and frequency resources is supported only for PUCCH format 0, 1, and 4 by means of different cyclic shifts or OCC when applicable. In the following, additional information on NR PUCCH formats are briefly described: A UE can be configured with PUCCH resources for CSI reporting or SR. For UCI transmission including HARQ-ACK bits, a UE may be configured with up to 4 PUCCH resource sets based on the UCI size. The first set can only be used for a maximum of 2 HARQ-ACK bits (with a maximum of 32 PUCCH resources) and other sets are applicable for more than 2 bits of UCI (each with a maximum of 8 PUCCH resources). A UE determines the set based on the UCI size, and further indicates a PUCCH resource in the set based on a 3-bit field in DCI (complemented with an implicit rule for the first set with more than 8 resources).
VI. PHYSICAL REFERENCE SIGNALS
NR reference signal design follows the lean carrier principles [6] reference signals are on-demand when possible, and their time and frequency distributions are configurable so that requirements can be met with minimal overhead. At low load, reference signal transmission can be extremely sparse. This serves to reduce energy consumption and intercell interference. The high flexibility of the design and the on-demand principle also result in a degree of forward compatibly. In LTE, multiple functions are tied to the always-on CRS. In NR, these functions are supported by multiple UE specifically configured reference signals.
A. Downlink and Uplink Demodulation Reference Signals (DMRS)
DMRS is used by the receiver to produce channel estimates for demodulation of the associated physical channel. The design of DMRS is specific for each physical channel -PBCH, PDCCH, PDSCH, PUSCH, and PUCCH. In all cases, DMRS is UE specific, transmitted on demand, and normally does not extend outside of the scheduled physical resource of the channel it supports. In the sequel, we focus on the DMRS for PDSCH and PUSCH when CP-OFDM is used.
The PDSCH/PUSCH DMRS structure supports a wide range of scenarios, UE capabilities, and use cases. The number of DMRS symbols in a PDSCH/PUSCH duration can be configured; this enables support for very high UE mobility, but also low DMRS overhead when the scenario allows so. Similarly, the density of DMRS in the frequency domain is configurable to allow for an optimized overhead. The first DMRS instance comes early in the PDSCH/PUSCH transmission; this enables channel estimation to start early in the receiver, thereby reducing the processing latency. DMRS can be on a regular comb structure, and RB bundling is configurable. This is beneficial for efficient, high performance channel estimation. NR DMRS supports massive multi-user MIMO; it can be beamformed and supports up to 12 orthogonal layers. DMRS sequence for CP-OFDM is QPSK based on Gold sequences. For PUSCH with DFT-S-OFDM there is also a low PAPR Zadoff-Chu mode.
B. Downlink and Uplink Phase-Tracking Reference Signals (PTRS)
PTRS is used for tracking the phase of the local oscillator at the receiver and transmitter. This enables suppression of phase noise and common phase error, particularly important at high carrier frequencies such as millimeter wave. Due to the properties of phase noise, PTRS can have low density in the frequency domain but high density in the time domain. PTRS can be present both in the downlink (associated with PDSCH) and in the uplink (associated with PUSCH).
If transmitted, PTRS is always associated with one DMRS port and is confined to the scheduled bandwidth and duration of PDSCH/PUSCH. Time and frequency densities of PTRS are adapted to signal-to-noise-ratio (SNR) and scheduling bandwidth.
C. Channel-State Information Reference Signals (CSI-RS)
Similar to the LTE counterpart, NR CSI-RS is used for DL CSI acquisition. Beyond this use case, CSI-RS in NR also supports reference signal received power (RSRP) measurements for mobility and beam management (including analog beamforming), time/frequency tracking for demodulation, and UL reciprocity-based precoding. CSI-RS is UE specifically configured, but multiple user can still share the same resource. Zero-power CSI-RS can be used as a masking tool to protect certain REs by making them unavailable for PDSCH mapping. This masking supports transmission of UE specific CSI-RS, but the design is also a tool for allowing introduction of new features to NR with retained backward compatibility.
NR supports high degree of flexibility for CSI-RS configuration. A resource can be configured with up to 32 ports, and the density is configurable. In the time domain, a CSI-RS resource may start at any OFDM symbol of a slot and it spans 1, 2, or 4 OFDM symbols depending on the number of ports configured. CSI-RS can be periodic, semi-persistent or aperiodic (DCI triggered).
When used for time frequency tracking, CSI-RS can be periodic, or aperiodic. In this use case a single port is configured, and the signal is transmitted in bursts of two or four symbols spread over one or two slots.
D. Sounding Reference Signals (SRS)
SRS is used for UL channel sounding. The design supports UL link adaptation and scheduling, but in reciprocity operation also downlink precoder selection, link adaptation and scheduling, e.g., for massive multi-user MIMO.
Contrary to LTE, NR SRS is UE specifically configured. This enables a high degree of flexibility in the system. In the time domain, an SRS resource spans 1, 2 or 4 consecutive symbols mapped within the last 6 symbols of a slot. Multiple SRS symbols allow coverage extension and increased sounding capacity. If multiple resources are configured for a UE, intraslot antenna switching is also supported (when UE has fewer transmit chains than receive chains). Both these features are important, e.g., in the reciprocity use case. The SRS sequence design and frequency hopping mechanism are similar to LTE SRS.
VII. CONCLUSIONS
The next generation wireless access technology -5G NRserving a wide range of use cases is expected to lead to significant socio-economic benefits. A significant step towards this was achieved when 3GPP approved the highly anticipated standalone 5G NR specifications in June 2018. This article has provided an overview of the essentials of the 3GPP NR specifications representing the state of the art in 5G wireless technology, with a focus on the physical layer. NR is a flexible air interface, capable of meeting a wide range of requirements, use cases, deployments and providing a solid foundation for the future evolution of wireless communications services. | 2018-06-18T19:23:39.000Z | 2018-06-18T00:00:00.000 | {
"year": 2018,
"sha1": "b8ca81e6fe77bf19b40dc016d1aaaad6cf7d020f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1806.06898",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b8ca81e6fe77bf19b40dc016d1aaaad6cf7d020f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
195190673 | pes2o/s2orc | v3-fos-license | Characterization of C-nucleoside Antimicrobials from Streptomyces albus DSM 40763: Strepturidin is Pseudouridimycin
Pseudouridimycin (PUM), a selective inhibitor of bacterial RNA polymerase has been previously detected in microbial-extracts of two strains of Streptomyces species (strain ID38640 and ID38673). Here, we isolated PUM and its deoxygenated analogue desoxy-pseudouridimycin (dPUM) from Streptomyces albus DSM 40763, previously reported to produce the metabolite strepturidin (STU). The isolated compounds were characterized by HRMS and spectroscopic techniques and they selectively inhibited transcription by bacterial RNA polymerase as previously reported for PUM. In contrast, STU could not be detected in the cultures of S. albus DSM 40763. As the reported characteristics reported for STU are almost identical with that of PUM, the existence of STU was questioned. We further sequenced the genome of S. albus DSM 40763 and identified a gene cluster that contains orthologs of all PUM biosynthesis enzymes but lacks the enzymes that would conceivably allow biosynthesis of STU as an additional product.
www.nature.com/scientificreports www.nature.com/scientificreports/ revealed a biosynthetic gene cluster similar to the known PUM pathway. RNAP inhibition assays provided comparable activities to those reported for PUM. According to this data, the existence of STU may be questioned and the previously reported STU may, in fact, be PUM.
Results and Discussion
Isolation of the secondary metabolites. In order to obtain C-nucleosidic secondary metabolites, S. albus DSM 40763 was cultivated under conditions similar to those reported for STU production and the medium extracts were screened by LC-MS. Once compounds with m/z values corresponding to PUM or STU and dPUM were detected, the strain was grown in a larger scale in a 3 l bioreactor to obtain sufficient material for structure elucidation of the metabolites. Two compounds (products A and B in Fig. 2) were observed and isolated from the culture broth using activated charcoal extraction, followed by chromatographic purifications that gave homogeneous products. For − , 469.1801) were observed for the products. The former value corresponds to the masses calculated for STU or PUM (product B), both molecules having the same exact mass, and the latter m/z value refers to dPUM (product A). The NMR characterization (matching well to previously reported 1 H 13 ,C and 2D data) verified readily the authenticity of dPUM, but the discrimination whether the other isolated compound was STU or PUM proved to be more complex. The reported 1D NMR chemical shifts for STU and PUM 1,3 resemble each other and direct comparison of the measured 1 H and 13 C NMR data could not reliably distinguish the identity of the isolated metabolite (see Tables S3 and S4 in SI for side by side comparison of the reported chemical shifts and the ones measured in the present study).
Characterization of the isolated compounds.
In D 2 O, two spin-coupled systems of protons (i.e. protons of the sugar and glutamine moieties) and two spin-isolated systems (single and two protons) were detected. The spin-isolated single proton on the low field could be assigned to the base moiety (H6, pseudouracil). HMBC measurements revealed one carbon (C1 at 110.3 ppm) that coupled to both this proton and the spin-coupled system belonging to the six protons of the sugar moiety (H1′, H2′, H3′, H4′, H5′ and H5″). The H5′ and H5″ were coupled to a carbonyl carbon (Gln C=O, 171.1 ppm) that was also coupled to the five protons of the glutamine moiety (Gln-Hα, -Hβ and -Hγ). These signals are www.nature.com/scientificreports www.nature.com/scientificreports/ uniform in both PUM and STU and do not yet reveal the location of the spin-isolated methylene protons. The key difference between the two compounds is whether the methylene group belongs to a glycine moiety as in the structure of PUM or to a cyclic 2-aminoimidazol-5-one as in STU. In the case of STU, HMBC correlations have previously been reported to reveal the location of the 2-aminoimidazol-5-one moiety, bound to the acyclic sugar moiety 3 . However, the signals of the 2′-proton and the methylene protons were overlapping both in D 2 O and in a mixtures of D 2 O and DMSO-d 6 , which complicated their spectroscopic assignment (Supplemental Material). As the 2D data proved insufficient for the characterization, the compound was subjected to chemical derivatization and exposed to a titanium trichloride treatment 1,4 , which may be used for the reduction of the N-hydroxy group. The reaction product was proven to be identical to the isolated dPUM by NMR spectroscopy and HPLC-coelution test ( Fig. S16 in SI). In this case, the HMBC data showed readily detectable HMBC correlations in D 2 O between the carbonyl carbon (Gly C=O) and alpha protons of both the Gln and Gly moieties (Fig. 3). Thus the isolated compound (B) could be identified as PUM.
Tandem mass spectroscopy with PUM and dPUM ( Fig. S15 in SI) showed fragmentation patterns similar to those reported for STU and Gln-STU 3 which supports the assumption that PUM is reassigned STU.
Isolated product B inhibits transcription by multisubunit RNAPs by competing with UTP.
We first tested the effect of the isolated product (B) in a single nucleotide addition assay performed at 5 μM NTP substrates for 2 minutes. High concentration of B (100 μM) did not measurably inhibit incorporation of AMP, GMP and CMP, but strongly delayed incorporation of UMP into the nascent RNA by E. coli (Eco) RNAP (Fig. 4A). Incorporation of UMP into the nascent RNA by S. cerevisiae RNA polymerase II (Sce Pol II) was also strongly inhibited, whereas no inhibition was observed in the case of human mitochondrial RNAP (Hsa mt-RNAP). Time courses of UMP incorporation at several concentrations of B revealed that Sce Pol II required approximately 100-fold higher concentration of B than Eco RNAP to achieve a similar magnitude of the inhibition (Fig. 4B). We then investigated the effect of 100 μM B on the processive transcription through 49 base pairs of the template DNA at 5 μM NTP substrates (Fig. 4C). Transcription by Eco RNAP was nearly completely halted at +11, one base pair upstream of the incorporation of the first UMP into the nascent RNA. Sce Pol II also strongly paused at +11, but cleared the pause within the timeframe of the experiment (15 min) and paused again at +20, just before incorporation of the second UMP. At the same time, processive transcript elongation by Hsa mt-RNAP was not affected by 100 μM of B. Overall, the above experiments demonstrate that the isolated product (B) possesses characteristic biological activities of PUM: it inhibits transcription by multisubunit RNAPs by competing with UTP and displays high selectivity towards the bacterial RNAP 1 .
Identification and analysis of the sap (Streptomyces albus pseudouridimycin) gene cluster. The draft genome of S. albus DSM 40763 was acquired by MiSeq sequencing, which resulted in 4,517,426 reads (error corrected and trimmed down to 4,335,036) that were assembled de novo into 162 contigs (Supplementary Table S1). Bioinformatic analysis revealed a gene cluster containing open reading frames www.nature.com/scientificreports www.nature.com/scientificreports/ homologous to pseudouridine synthases and glycine amidinotransferases, which have recently been implicated in the biosynthesis of C-nucleosides 2 . The sap gene cluster displayed strong synteny to the pum pathway with the arrangement of genes from sapJ to sapU (pumE to pumO) conserved ( Fig. 5b) and all core genes implicated in the formation PUM could be identified. The sequence identity between the corresponding sap and pum gene products varied from 62% to 88% (Table 1). Similar gene clusters to the sap pathway appeared to be widely spread in Streptomyces and a Blast search revealed eight strains harboring over 99% identity at nucleotide level (List S1 in Supplement). However, to date only two strains, Streptomyces sp. ID38640 and ID38673 1 , have been reported to produce PUM in addition to S. albus DSM 40763.
The key enzyme in the biosynthesis of PUM is likely to be SapC, which according to sequence analysis belongs to the TruD family of tRNA pseudouridine synthases. These enzymes are responsible of modification of pseudouridine 13 in tRNA Glu 5 . Pseudouridine synthases typically use RNA as substrate, but in the biosynthesis of PUM a more likely substrate for SapC is a derivative of uridine obtained from the cellular pool. SapA, a hypothetical adenylate kinase, is possibly involved in providing a phosphorylated substrate for SapC, but the identity of this compound remains to be solved. Modification of pseudouridin most likely continues by the action of SapB, which resembles dehydrogenases belonging to the glucose-methanol-choline oxidoreductase family, and SapH, a pyridoxal phosphate-dependent aminotransferase, that are responsible for the generation of 5′-oxo-pseudouiridine and 5′-aminopseudouridine, respectively. SapD and SapF are similar to ATP-grasp domain containing carboxylate-amine ligases. The established functions of the orthologues in pum cluster, PumK and PumM, are attachment of glutamine to aminopseudouridine and guanidinoacetic (GAA) to glutaminyl-5′-N-pseudouridine, respectively. SapG resembles proteins of amidinotransferase superfamily that contains glycine and inosamine amidinotransferases. Glycine amidinotransferases generate GAA from arginine and glycine and thereby SapG would be responsible of providing GAA substrate for SapF. SapJ is a FAD dependent oxidoreductase and is 39% identical to KijD3, an N-oxygenase in the biosynthesis of antibiotic kijanimicin 6 and therefore deduced to carry out the hydroxylation of dPUM.
In addition, two hypothetical genes of unknown function, coding for SapI and SapU, were found located in analogous positions to their counterparts PumF and PumO, respectively. Altogether three transporters were encoded in the gene cluster: SapE is a Major Facilitator Superfamily protein and two ABC-transporters, SapV and SapX, reside at the right edge of the cluster. Two genes, sapK and sapL, are located on the left edge of the analyzed area that are homologous to hydro-lyases and peptidases, respectively. No orthologues to SapK and SapL are found in the pum cluster suggesting that they do not have a role in the biosynthesis of PUM.
Conclusions
Here we have described the isolation and characterization of two C-nucleoside analogues produced by Streptomyces albus DSM 40763, which were identified unambiguously as pseudouridimycin (PUM) and desoxy-pseurouridimycin (dPUM). This is in contrast to previous studies, which have suggested that the strain produces the structurally related strepturidin (STU) 3 . The characterization of PUM was conclusive after chemical reduction of the N-hydroxy group yielding a product that was identical to the isolated dPUM. For dPUM, the 2D HMBC NMR experiment was of sufficient quality and revealed the locations of each spin system. The RNAP inhibition assays done with the isolated compound also provided similar activities to those measured for PUM previously 1 . Thus these observations (NMR, chemical derivatization and RNAPs inhibition data) altogether confirm the identity of the isolated compound as PUM.
We were not able to detect STU from culture extracts of Streptomyces albus DSM 40763 during the course of our study, but this does not formally exclude the possibility that the strain is able to synthesize both PUM and STU. The NMR data of PUM revealed similarities hardly discriminated from that previously reported for www.nature.com/scientificreports www.nature.com/scientificreports/ STU. Moreover, the MS/MS-data of PUM and dPUM showed the same daughter ions and similar fragmentation patterns previously reported for STU and dSTU. Genome sequencing revealed a cluster containing genes highly similar to those with established functions found in the pum cluster 2 . A key differences in the biosynthesis of these two metabolites would be in the formation of the cyclic 2-aminoimidazol-5-one moiety from guanidine acetic acid in STU, but the sap gene cluster does not encode any proteins that could feasibly catalyze the cyclization reaction. Another critical difference is in the regiochemistry of the attachment of these two units, but the high sequence identity of 72% between the two GAA transferases (sapF and pumM) suggests that the genes are orthologous and responsible for PUM formation. The characteristics of PUM and dPUM proved to be the same previously reported for STU and dSTU. Hence, the existence of STU may be questioned and STU, in fact, is PUM. www.nature.com/scientificreports www.nature.com/scientificreports/
Reagents and oligonucleotides. HPLC purified DNA oligonucleotides were purchased from Eurofins
Genomics GmbH (Ebersberg, Germany). PAGE purified ATTO-680 labeled RNA primer was purchased from IBA Biotech (Göttingen, Germany). NTPs were from Jena Bioscience (Jena, Germany). DNA oligonucleotides and RNA primers are presented in Supplementary Fig. S1. All other reagents used were molecular biology grade.
General remarks. The NMR spectra were recorded with a Bruker Avance 500 MHz spectrometer. The high resolution mass spectra were recorded with a Bruker Daltonics microTOF-Q instrument. The LC-MS Production and isolation of the secondary metabolites. Cultivations of S. albus DSM 40763 were performed using media described for production of STU 3 with minor modifications. Precultures were cultivated in 50 ml of media containing 1% mannitol, 0.5% Bacto TM peptone, 0.5% yeast extract and 0.092% CaCl 2 * 2 H 2 O in 250-ml Erlenmeyer flasks for 2 days. Production of secondary metabolites was performed in three-liter volume in Fermentec FMT ST Series bioreactors using 5% inoculum and medium containing 1% mannitol, 1% soy meal, 0.6% Nutrient Broth (Biokar Diagnostics), 0.1% CaCO 3 and 0.1% polypropylene glycol P 2,000 as antifoam. Temperature was set to 30 °C, stirring to 300 r.p.m., and aeration to 1 v.v.m. (volume per volume per minute). After 2-3 days of fermentation the broth was collected and cells were removed by centrifugation. To the remaining broth activated charcoal, 2 g l −1 was added and the mixture was stirred overnight at 4 °C, after which the supernatant was removed by centrifugation and subsequent filtration. The collected charcoal was placed in a Büchner funnel and eluted with copious amounts of a 1:1 mixture of H 2 O and acetone to extract the compound. The extracts were concentrated to a small volume and then fractioned on a reverse phase silica column eluting with a mixture of H 2 Reduction of the N-hydroxy group. Solution of PUM (2.5 mg, 5.1 μmol) in 1 M sodium acetate buffer (pH 6.75) (40 μl) was treated with TiCl 3 (1.6 mg, 10.4 μmol) and the reaction was allowed to proceed for 2 hours at room temperature (the reaction was completed according to LC-MS analysis). The reaction mixture was diluted with water (0.5 ml), filtered, and the product was isolated without further work up by RP-HPLC (Phenomemex 250 × 10 Synergi TM 4 μm Fusion-RP 80 Å, flow rate 3 ml min −1 ), eluting with a gradient of H 2 O and MeCN. The collected fractions were freeze-dried to yield dPUM as a white solid (1.0 mg, 2.1 μmol, 41%). The spectral data was identical to that measured for isolated dPUM.
Proteins. E. coli RNAP was expressed in E. coli Xjb(DE3) (Zymo Research, Irvine, CA, USA) bearing pVS10 plasmid and purified by Ni-, heparin and Q-sepharose chromatography as described previously 7 , dialyzed against the Storage Buffer (50% glycerol, 20 mM Tris-HCl pH 7.9, 150 mM NaCl, 0.1 mM EDTA, 0.1 mM DTT) and stored at −20 °C. S. cerevisiae RNA polymerase II was purified from S. cerevisiae strain SHy808 (kindly provided by the laboratory of Mikhail Kashlev, NIH, National Cancer Institute, Frederick, MD, USA) largely as described previously 8,9 . Human mitochondrial RNA polymerase lacking 213 N-terminal amino acids (mitochondrial localization signal and an unstructured regulatory domain) was expressed in E. coli as follows. Plasmid pGB163 (T7 promoter-His 6 -Δ213mtRNAP) was transformed into E. coli T7 Express lysY/I q cells from New England Biolabs (Ipswich, MA, USA). Cells were grown in 1 L LB medium supplemented with 50 μg/ml kanamycin at 37 °C until OD 0.6, the culture was transferred to 25 °C, and protein expression was induced for 5 h by the addition of 0.8 mM IPTG. Cells were harvested by centrifugation at 6,000 × g, 4 °C for 10 min, resuspended in Lysis Buffer (50 mM Tris-HCl pH 6.9, 500 mM NaCl, 5% glycerol) supplemented with 1 mM β-ME, a tablet of EDTA-free protease inhibitors (Roche Applied Science, Penzberg, Germany), 1 mg/ml lysozyme, incubate on ice for 30 min and disrupted by sonication. The lysate was cleared by centrifugation at 18,000 × g, 4 °C for 30 min. The supernatant was supplemented with 10 mM imidazole and loaded onto Ni-sepharose (GE Healthcare, Chicago, IL, USA) column pre-equilibrated with Lysis Buffer. Protein was eluted using a strep gradient (20, 50, 250 mM) of imidazole in lysis buffer. The 250 mM imidazole fraction containing RNAP was further purified using Heparin and Resource-S column in Buffer A (50 mM Tris-HCl pH 6.9, 5% glycerol, 1 mM β-mercaptoethanol, 0.1 mM EDTA) and Buffer B (Buffer A supplemented with 1.5 M NaCl). Δ213mtRNAP eluted at ≥ 50 and ≥ 30% Buffer B from Heparin and Resource-S columns, respectively. The fractions containing the purified protein were concentrated using Amicon Ultra-4 centrifugal filters (Merck Milipore, Burlington, MA, USA), dialyzed overnight in Storage Buffer (10 mM www.nature.com/scientificreports www.nature.com/scientificreports/ Tris-HCl pH 7.5, 50% glycerol, 100 mM NaCl, 0.1 mM EDTA, 0.1 mM DTT) and stored at −80 °C. Plasmids used for protein expression are listed in Supplementary Table S2. TEC assembly. TECs (1 μM) were assembled by a procedure developed by Komissarova et al. 10 . The assembly was carried out in TB10 buffer (40 mM HEPES-KOH pH 7.5, 80 mM KCl, 10 mM MgCl 2 , 5% glycerol, 0.1 mM EDTA, and 0.1 mM DTT). An RNA primer (final 1 μM) was annealed to the template DNA (final 1.4 μM), incubated with RNAP (1.5 μM) for 10 min, and then with the non-template DNA (2 μM) for 20 min at 25 °C. The nucleic acid scaffolds used for assembling the TECs are presented in Supplementary Fig. 1.
In vitro transcription reactions, single nucleotide addition assay. The transcription reactions were initiated by the addition of 5 μM NTP to 0.1 μM TEC in TB10 buffer (total final volume 20 μl) pre-incubated with the indicated concentrations of compound B for 2 minutes, samples were incubated for the indicated time intervals at 25 °C, and the reactions were stopped by the addition of 30 μl of Gel Loading Buffer (94% formamide, 20 mM Li 4 -EDTA and 0.2% Orange G). RNAs were separated on 16% urea-PAGE gel and visualized with Odyssey Infrared Imager (Li-Cor Biosciences, Lincoln, NE, USA); band intensities were quantified using ImageJ software 11 .
In vitro transcription reactions, processive transcript elongation. The transcription reactions were initiated by the addition of 5 μM NTP with or without B (100 μM final concentration) to 0.5 μM TEC in TB10 buffer at 25 °C. 10 μl aliquots were withdrawn at the indicated time points and quenched with 30 μl of Gel Loading Buffer. RNAs were separated on 16% urea-PAGE gel and visualized with Odyssey Infrared Imager (Li-Cor Biosciences).
Isolation of genomic DNA and genome sequencing. Streptomyces albus DSM40763 was cultured in 30 ml of GYM media with 0.5% glycine at 30 °C for 2 days shaking at 300 rpm. The cells were pelleted and frozen at −20 °C for 4 days. Genomic DNA was extracted using the protocol developed by Nikodinovic et al. 12 with slight modifications. Quality control and the PCR-free shotgun library (Illumina) was prepared at Eurofins Scientific (Ebersberg, Germany). A single lane of an Illumina MiSeq v3 sequencer was used to produce 2 × 300 bp reads.
The quality of the reads were manually checked before and after error correction using FASTQC (v0.11.2) 13 . | 2019-06-21T23:05:38.252Z | 2019-06-20T00:00:00.000 | {
"year": 2019,
"sha1": "154b95e4d5e1dbdf7c398e96d2d32ad57b945142",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-45375-w.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "db0b358b6402248c99e8e66b50f9cd1377eb80eb",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
14416558 | pes2o/s2orc | v3-fos-license | Recruitment Variability of Coral Reef Sessile Communities of the Far North Great Barrier Reef
One of the key components in assessing marine sessile organism demography is determining recruitment patterns to benthic habitats. An analysis of serially deployed recruitment tiles across depth (6 and 12 m), seasons (summer and winter) and space (meters to kilometres) was used to quantify recruitment assemblage structure (abundance and percent cover) of corals, sponges, ascidians, algae and other sessile organisms from the northern sector of the Great Barrier Reef (GBR). Polychaetes were most abundant on recruitment titles, reaching almost 50% of total recruitment, yet covered <5% of each tile. In contrast, mean abundances of sponges, ascidians, algae, and bryozoans combined was generally less than 20% of total recruitment, with percentage cover ranging between 15–30% per tile. Coral recruitment was very low, with <1 recruit per tile identified. A hierarchal analysis of variation over a range of spatial and temporal scales showed significant spatio-temporal variation in recruitment patterns, but the highest variability occurred at the lowest spatial scale examined (1 m—among tiles). Temporal variability in recruitment of both numbers of taxa and percentage cover was also evident across both summer and winter. Recruitment across depth varied for some taxonomic groups like algae, sponges and ascidians, with greatest differences in summer. This study presents some of the first data on benthic recruitment within the northern GBR and provides a greater understanding of population ecology for coral reefs.
Introduction
Coral reefs exhibit remarkable biodiversity [1]. Although the conspicuous scleractinian corals form key structural components of coral reefs numerous other groups play important functional roles. Notably, reef-consolidating algae [2] and sponges play vital roles in nutrient cycling and aid in benthic-pelagic energy coupling [3]. The underlying resilience of coral reefs, in part, relies on the maintenance and persistence of these coral reef communities through space and time [4], particularly for sessile benthic taxa with dispersive larval or propagule phases.
Knowledge of recruitment, larval dispersal and population connectivity of benthic sessile invertebrates is critical to the management and conservation of coral reefs [5][6][7][8]. Population connectivity of marine sessile invertebrates has largely been determined from population genetics [5], which often depicts complicated patterns of larval dispersal. Larval dispersal of coral reef invertebrates is often characterised by endogenous recruitment, but with enough long-distance dispersal to provide variable levels of population subdivision for scleractinian corals [9,10], octocorals [11] and sponges [12,13]. While assessments of larval dispersal are important to establish levels of population maintenance, collecting data on spatio-temporal variability in larval recruitment is also important [8].
Determining the spatial scales of community recruitment contributes to our understanding of resilience, maintenance and persistence of coral reefs, however, there is a large focus on documenting recruitment dynamics of scleractinian corals (e.g. [14][15][16][17]). The dedicated effort to understanding population demographics of scleractinian corals has resulted in valuable knowledge aiding how we manage these ecosystems, particularly when data show patterns of reef degradation [18][19][20].
Scleractinian coral recruitment studies have relied on a combination of the use of recruitment tiles (e.g. [21]) and field surveys (e.g. [15]); while broad scale interpretations of coral recruit variability are difficult, the resilience of reefs is strongly linked to recruitment potential [22,23]. Moreover, the potential for shifting taxonomic states in coral reefs following disturbance, such as coral-algae phase shifts [24], highlights the importance of understanding the dynamics and scales of recruitment variability [22]. Coral recruitment can vary greatly across many spatial scales, including between coral reefs [15,25,26], among reefs patches with reef systems [27], within reef patches [28] and between experimental recruitment tiles [21,29,30]. Recruitment variability also occurs among depths [16] and over time [15,27,31]. Interpreting drivers that contribute to coral recruitment variability is complex, but can include both abiotic (e.g. light intensity and water flow) and biotic (e.g. competition and predation) influences [15,31], as well as spatio-temporal environmental stochasticity [32]. In contrast to the many published studies examining coral recruitment, there are few studies that have investigated recruitment patterns of other sessile organisms, such as sponges, bivalves and ascidians, on coral reefs [33]. Often this recruitment data for non-scleractinian organisms is incidental to more focused coral recruitment studies (e.g. [27,29]). As such, our overall knowledge of nonscleractinian coral reef invertebrates is poorly developed, thereby hindering a broader understanding of community coral reef recruitment.
The broad objective of this study was to begin to meet some of those knowledge gaps of recruitment patterns of benthic coral reef communities (i.e. scleractinian and non-scleractinian coral reef taxa) within a region of Torres Strait, northern Australia. Torres Strait forms the northern most region of the Great Barrier Reef (GBR). While there is limited peer reviewed data on distribution and abundance of sessile coral reef taxa in Torres Strait (e.g. [34,35]), information of non-scleractinian coral reef recruitment studies are, to our knowledge, non-existent in central Torres Strait. Therefore, the specific aim of this study was to examine and quantify recruitment assemblage structure of sessile organisms across a range of spatial and temporal scales, to establish spatio-temporal variability between and within coral reefs in central Torres Strait.
Study site and plate deployment
The study was conducted at Masig and Marsden Islands in central Torres Strait, Australia (S1A Fig). Both islands consist of sand cays with fringing coral reefs, with a reef profile typically comprising a slope descending at an angle ranging from 20-60°from 6 m, terminating at a sand bottom at 15 m. To examine differences in recruitment patterns of sessile invertebrates at a range of spatial scales relevant to the two islands, the design of the study allowed us to examine variation in recruitment patterns at spatial scales from 5 km (between islands), 200 m (between locations), 20 m (between sites), depth (6 m vs. 12 m) and 1 m between tiles (S1B Fig). Settlement plates were deployed at each of the three locations on the northern side of each island. Each location was further divided into three sites, with each site having two depth categories: shallow (6 m) and deep (12 m). Five settlement plates, placed 1 m apart, were deployed at each site x depth combination, using the direct attachment method of [21]. Briefly, 11x11 cm terracotta tiles with pitted surfaces were anchored 1 cm above the reef to provide settlement surfaces on both sides of each plate.
Assessment of temporal patterns was made possible by deploying seasonal sets of plates at the start of the Australasian summer (November) and winter (May). Plates were deployed for six months to allow comparisons over summer and winter over a two-year period (November 2006 to May 2008). At the end of each season, the top and underside of each plate were photographed in situ and a new plate was deployed. Representative sponge specimens were removed from tiles during the winter 2007 sampling and preserved in 70% ethanol to facilitate higher taxonomic identifications.
The study area lies within Australian jurisdiction of the Torres Strait Protected Zone, where marine resource management is undertaken by the Australian Fisheries Management Authority (AFMA) under the Torres Strait Fisheries Act 1984. AFMA officers were consulted prior to the commencement of this study, and confirmed that the deployment of settlement of tiles required for this study was not a matter for their regulation, and did not require a permit under their act. The study area also lies within the traditional lands and seas of Torres Strait traditional owners. Their consent to the study was obtained via a consultative process coordinated by the Reef and Rainforest Research Centre, which administered all Torres Strait research conducted through the Marine and Tropical Science Research Facility (MTSRF), which funded the study. This study did not involve endangered or protected species.
Photographic analysis
An underwater close-up frame, adapted to accommodate either an Olympus C-7070 or Canon IXUS 850IS camera in underwater housings, was constructed to photograph settlement tiles at a fixed distance and to record site and tile information. Both cameras have identical lenses and sensor-resolution; hence images produced are comparable in quality and view. The recruitment of sessile invertebrates was determined for both abundance and percent cover. To determine the abundance of each taxon, an overhead transparency marked with a square was overlaid on a PC-screen. All images of tiles were displayed by Microsoft Windows XP "Picture and Fax View-erTM" and enlarged by clicking the zoom-in button sufficient times to identify each organism. To measure the surface area occupied by each taxon, a 40-point grid was overlaid on the PCscreen image. When analysing images for both abundance and percent cover, the square grid was reduced by a 1 cm margin to eliminate any potential edge effects. Identification to species or genus level could not be established for many of the recruits due to their small size, which is not uncommon in recruitment studies [36,37]. Therefore, recruit assemblages were categorized into broad taxonomic groups (e.g. sponges, ascidians, bryozoans, corals, polychaetes, bivalves, algae and diatoms). In addition, sponges were identified to species level if possible. analysis (PERMANOVA) was used to examine differences in invertebrate recruitment patterns over various spatial scales using a balanced 5-factor nested design. Factors in the model were Season (fixed), Island (fixed), Location (random, nested within Island), Site (random, nested within Location) and Depth (random, nested within Site) and permutations were based on the Bray-Curtis resemblance matrix generated from log (x+1) transformed data. The unconstrained principal coordinate analysis (PCoA) was used to visually compare recruitment patterns of sessile invertebrates from both islands. Individual PERMANOVA tests (9999 permutations) based on the Euclidean distance matrix were performed to examine variability in recruit abundance, assemblage structure, and the individual taxa between all spatial scales at each season separately. For the individual tests, differences were considered significant at a lower p-value of <0.01 to reduce the risk of a Type 1 error. The nested design used in this study allowed for (pseudo) variance components to be compared between spatial scales and seasons [38,39]. All analyses were performed using PRIMER 6/PERMANOVA+ v1.0.2 (Plymouth, UK). Initial analyses revealed no significant difference in invertebrate recruitment patters (e.g. recruit abundance and percent cover) between the two years; therefore, only data from the second year (e.g. May 2007 to May 2008) is presented.
Results
While both the top and underside of the settlement plates were photographed, >90% of the top side was bare space with very low recruitment of unidentified algae; no other organisms recruited to the top of the plate. Due to the very low recruitment on the top side of tiles, only data from the undersides are presented. In total, eight broad taxonomic groups recruited to the tiles over the course of the two-year study, including sponges, ascidians, scleractinian corals, bryozoans, polychaetes, bivalves, algae and diatoms (Fig 1; S2 Fig). Polychaetes were the most numerically dominant taxa observed, with average numeric recruitment abundances being four times higher than any other taxa ( Fig 1A). Scleractinian corals displayed the lowest recruitment with an average of 1 recruit per tile observed (Fig 1A). Although polychaetes were the most numerically abundant taxa, they occupied a very low percentage cover (means ± 1 S.E., 3.0 ± 0.6%) of the settlement tiles ( Fig 1B). On the other hand, groups with more encrusting prostrate morphologies such as algae (22.0 ± 1.1%), sponges (16.7 ± 1.4%) and ascidians (16.4 ± 1.4%) comprised a greater percentage of the tile surface ( Fig 1B).
Recruitment abundances were similar at the highest spatial scale (e.g. between Islands) and across seasons; however, there was a significant effect of location and depth on recruit abundances (Table 1). Variation in recruitment abundance between locations was more pronounced for certain taxa including polychaetes, algae and diatoms, particularly during summer (Fig 2). This finding was also apparent in the PCoA, with the same groups contributing most to the discrimination (Fig 3A). The PCoA showed 64.6% of the variation explained in the first two axes, with no clear patterns separating recruitment between Islands or Seasons ( Fig 3A). When examining assemblages using percent cover data, recruitment was remarkably similar across multiple spatial scales and between seasons, with PERMANOVA revealing depth to be the only significant source of variation (Table 2). This was further demonstrated with PCoA, with the ordination displaying no distinct separation between assemblages at the highest spatial scale (Fig 3B). Sixty percent of the total variation was explained in the first two factors, with algae, sponges, ascidians and bivalves contributing the most to the discrimination (Fig 3B).
When recruitment cover was examined separately for each season, PERMANOVA revealed no significant differences at the higher spatial scales (e.g. islands, locations or sites) for summer or winter (Table 3). In fact, depth was the only significant factor, but only during the summer sampling period (Table 3). Similarly, recruitment abundances were not significantly different at the highest spatial scale for either season; there was a significant difference between locations during the summer only ( Table 3). Examination of the pseudo-variance components from the PERMANOVA model revealed that the largest source of variation could consistently be attributed to the smallest spatial scale (i.e. between tiles, 1-m apart) for both recruitment abundance and percent cover (Fig 4A and 4B). The patterns of variation were inconsistent for both measures between the two seasons. For instance, Site contributed to the variation in abundances during winter, but not during summer (Fig 4A), and location contributed to the variation in percent cover during the summer, yet had no contribution during winter (Fig 4B). The only consistent source of variation between both measures and seasons (excluding tiles) was depth; however, it was higher for both during the summer (Fig 4A).
Individual PERMANOVA tests for each taxa revealed that depth was a significant source of variation during the summer for only three out of the seven taxa: algae, sponges and ascidians (Table 3). For instance, sponges covered a larger percentage of deep tiles, whereas ascidians covered a larger percentage of shallow tiles (Fig 1B). During the summer, there was also a significant difference in the cover of bivalves between locations (Table 3). Interestingly, the only significant source of variation in the winter was depth, but only for algae (Table 3).
In total, eight different sponge species were positively identified over the course of the twoyear study (S3 Fig). Species spanned six families and included: Chalinula nematifera, Coscinoderma matthewsi, Dysidea avara, Dysidea sp. 1 grey, Haliclona turquiosia, Hyrtios erecta, Iotrochota purpurea and Iotrochota sp. 1 green. Average recruitment abundances of all species were very low, with H. turquiosia recruitment being the highest (Fig 5A). Interestingly, H. turquiosia also occupied the highest percentage (overall mean = 2.12%) of the tiles, along with Dysidea sp. 1 grey (0.92%) (Fig 5B). H. erecta was the least abundant and occupied the smallest percentage (0.03%) of the tile out of all the sponges species identified (Fig 5A and 5B). Notably, higher recruitment of sponges like Coscinoderma matthewsi and Dysidea species at 12 m compared with 6 m agrees with the abundance patterns of adult sponges across depth [40] (Fig 1B).
Discussion
A notable finding of this study was that the highest levels of recruitment variation occurred at the lowest spatial scale examined, with recruitment varying more between experimental tiles 1 m apart than between sites, locations and islands. Recruitment variability of sessile benthic taxa, at small, within-habitat scales is a consistent finding in recruitment studies [38,39,41,42], despite the use of uniformly sized settlement tiles that provide a standardised habitat that limits recruitment variability associated with complex heterogeneous natural reef habitats [33,36]. Interpretations of small scale (i.e. highly localised) recruitment variability can be linked to a range of environmental processes, including competition for space or predation [41,43,44]. Physical processes including, boundary flow hydrodynamics and habitat surface topography can also play roles in recruitment variability at small spatial scales [45][46][47]. Tiles in this study had similar habitat topography; however, factors such as flow rates and light intensity were possibly different [45], which likely affected recruitment patterns.
The finding of recruitment heterogeneity among experimental tiles provides an important insight into the dynamics of recruitment. Heterogeneity at local scales, covering metres, suggests that local drivers (e.g. predation and competition) play a role in contributing to community assemblages at these spatial scales. Here, grazing (direct and incidental) from herbivorous fish can contribute to coral recruit mortality [14], which arguably translates to recruitment variability. The spatial scale of recruitment variability through predation/grazing may depend on the home range of herbivores. The two conspicuous groups of herbivores on coral reefs, fishes and urchins, show foraging patterns over a range of spatial scales [48,49] with both groups playing likely roles in contributing to recruitment variability over smaller within habitat scales [42], particularly to upper tile surfaces. In the present study there was little evidence of recruitment to the upper surfaces of tiles. It is likely that the very low recruitment of organisms (e.g. algae) to the upper surfaces of tiles resulted from high grazing pressure and other post-settlement mortality [14]. Although not quantified, the upper surfaces of many tiles in this study possessed noticeable feeding scars. Similarly, nearly all recruits (98.8%) settled to the bottom of the tiles in a study done in the southern Persian Gulf [50]. In comparison, the underside tile surfaces, with clear signs of invertebrate and algal assemblages, potentially provided protection from larger grazers, with this pressure potentially being less important than other processes, including competition. In addition, light no doubt played an important role for coral and algal recruitment. Recruitment to under sides of tiles may also reflect recruitment of cryptic taxa to shaded habitats [42]. The small numbers of scleractinian coral recruits (<1 recruit per tile), to either the upper or bottom surfaces of tiles, was surprising, and is in contrast to other recruitment studies which demonstrate coral recruitment on both upper and under sides of settlement tiles and in higher numbers than observed in this study [16,17,27,29,31]. While patterns of scleractinian coral recruitment to individual experimental tiles can reflect nil, or very low numbers of recruits [17], average numbers of recruits in these comparative studies are conspicuously higher than scleractinian coral recruitment found in the present study, despite the study sites being located within a thriving coral reef community. For instance, coral recruitment on the GBR can range from 36 to 7000 recruits per m 2 per year depending on the study and method employed (See Table 4, [51]). Incidental grazing from herbivores may be indicative of the low number of recruits, including corals, to the upper surfaces of tiles in this study, and the lower number of coral recruits underneath tiles may be a reflection of competition with other sessile invertebrate taxa [27]. Although not quantified as a part of this study, a relatively low abundance of crustose coralline algae (CCA) was observed on the underside of the tiles, potentially contributing to the low coral recruitment observed. Polychaetes were by far the most abundant taxa observed, with abundances four times higher than any other taxa. However, this group occupied less than 5% of the tile surface. Polychaetes are known to recruit to dead coral in northern Australia [52] and are often found dominating recruitment of artificial structures [36,53]. This is an interesting finding given polychaetes are not a conspicuous group occurring on substrata in the immediate vicinity of experimental recruitment tiles. The immediate sessile reef community is predominantly comprised of cnidarians and sponges [34]. Nevertheless, polychaete diversity, and the apparent common occurrence of this group within coral reef micro habitats, is noted at Lizard Island, northern GBR [54]. That polychaetes, and to a lesser extent diatoms, dominate recruitment tiles may be a reflection of these groups excelling as colonisers rather than competitors. The dynamic between colonisers and competitors is routinely reported, particularly when recruitment surfaces represent bare space [36]. Therefore, tiles deployed for the six months in this study are likely to confer advantages to important colonising taxa such as polychaetes. However, the fact that polychaetes occupy a small area of space on tiles suggests less capacity for polychaetes as competitors, particularly when compared to taxa with noted encrusting habits or allopathic capacities such as ascidians, sponges and bryozoans resulting in overgrowth of poor spatial competitors [55][56][57].
Spatial recruitment variability was not evident beyond the smallest scales examined in this study (experimental tiles), suggesting that recruitment between islands and among locations within islands is less heterogeneous. While processes such as predation contribute to coral reef recruitment patterns at both small and large spatial scales, it is more likely that processes governing larval supply, and larval dispersal, are more uniform between islands or among locations within islands thereby contributing to less differentiation in overall recruitment. Despite the distance of several kilometres between islands, or among locations within islands, it is likely that there is enough localised dispersal and recruitment, limiting spatial heterogeneity among sessile groups. Larval dispersal for many coral reef sessile invertebrates can be highly endogenous, but with enough long-distance dispersal and recruitment to maintain population connectivity over regional scales [9,58].
This study also examined recruitment variability between shallow (6 m) and deep (12 m) sites. In this case, depth was found to be an important factor influencing recruitment during Table 3. Results of PERMANOVA tests to examine differences at both time points (i.e. summer & winter) separately. Permutations for abundance and percent cover were based on a Bray-Curtis similarity matrix generated from log(x+1) transformed data, while permutations for the individual taxa (using percent cover data) were based on a Euclidian distance matrix using untransformed data. Significant p-values (<0.01 to account for multiple tests) in bold. Is = Island, Lo = Location and Si = Site. Due to their very low abundances, corals were excluded from the individual PERMANOVA tests. physical factors such as flow rates and light intensity would likely be different [59], which might affect recruitment patterns. Moreover, larval dispersal for some sponges can be driven by clear larval settlement behaviours that can cue larvae to settle in accordance to light and reef associated environmental/habitat cues [61][62][63][64][65][66]. Key environmental settlement cues suitable to sponges may be more commonly encountered at deeper sites and therefore may play important contributing roles to successful recruitment there.
Conclusions
The finding that recruitment variability was highest at the smaller spatial scales examined in this study highlights the heterogeneity that occurs within habitats (i.e. at spatial scales of metres). While a range of both biotic and abiotic processes may contribute to patterns of recruitment in marine benthic community assemblages over small and regional scales, the higher variability within habitats suggests localised processes associated with competition and predation may play important roles in the heterogeneity of community assemblages on very fine scales. Torres Strait is situated at the northern boundary of the GBR with shallow reef habitats dominated by scleractinian corals. The very low numbers of coral recruits found in this study, and that it differs from other recruitment studies undertaken on the GBR, identifies a need for further work to bridge the complex temporal and spatial patterns of recruitment on coral reefs [51]. The low presence of organisms on the surface of tiles and encrusting organisms on the underside of tiles further highlights the potential role of both predation (through incidental grazing) and competition in defining the community assemblages. The documentation of non-coral sessile invertebrate recruitment patterns provides much-needed information on these groups within the northern GBR and more broadly coral reef systems. In addition, this study provides knowledge of key performance indicators related to coral community recruitment patterns that depict variability over time and space, which are valuable to how coral reefs are managed and conserved. | 2016-05-12T22:15:10.714Z | 2016-04-06T00:00:00.000 | {
"year": 2016,
"sha1": "08579d11a42a48082947051b11efc11c2c08d3fe",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0153184&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "08579d11a42a48082947051b11efc11c2c08d3fe",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
53377627 | pes2o/s2orc | v3-fos-license | Superconducting metamaterials and qubits
Superconducting thin-film metamaterial resonators can provide a dense microwave mode spectrum with potential applications in quantum information science. We report on the fabrication and low-temperature measurement of metamaterial transmission-line resonators patterned from Al thin films. We also describe multiple approaches for numerical simulations of the microwave properties of these structures, along with comparisons with the measured transmission spectra. The ability to predict the mode spectrum based on the chip layout provides a path towards future designs integrating metamaterial resonators with superconducting qubits.
INTRODUCTION
Metamaterials having both a negative permeability and permittivity, and thus a negative index of refraction with left-handed transmission properties, were first described several decades ago. 1 More recently there have been numerous investigations of a variety of counterintuitive optical properties in these systems, including cloaking 2 and superlensing. 3 Research in this direction is closely connected to the field of photonic band-gap engineering. 4 In the microwave regime, left-handed transmission lines have been proposed and studied with one-and twodimensional arrays of room-temperature lumped-element components. 5,6 There have also been implementations of one-dimensional metamaterial microwave transmission lines with high-temperature superconducting films. 7 Since 1999, there has been substantial progress in the field of quantum coherent superconducting circuits, or qubits. 8 Many superconducting qubit implementations involve couplings between the qubits and superconducting microwave resonator circuits. In this architecture, referred to as circuit quantum electrodynamics, or cQED, [9][10][11] the qubit behaves as an artificial atom that can couple to photons in the microwave resonant cavity, analogous to atomic QED with natural atoms in microwave or optical cavities. 12 In cQED, the microwave resonators are typically either distributed coplanar waveguide cavities patterned from superconducting thin films 13 or threedimensional waveguide cavities. 14,15 A recent theoretical work suggested the possibility of implementing one-dimensional superconducting metamaterial resonators in the microwave regime with high quality factors for coupling to superconducting qubits. 16 Such a system could allow for the generation of large-scale entanglement between multiple photon modes in the metamaterial and could have potential applications in the emerging field of quantum simulation.
Here we report on the fabrication of thin-film superconducting metamaterial resonators and test oscillators along with microwave measurements at millikelvin temperatures. We also present numerical simulations of the metamaterial circuits and compare the simulated and measured spectra. The metamaterial architectures that we present here are compatible with future circuit designs for exploring the coupling to superconducting qubits.
Metamaterials with microwave circuit components
Although typical planar cQED implementations involve distributed coplanar waveguide (CPW) resonators fabricated from superconducting thin films on silicon or sapphire substrates, 13 a resonator could also be formed from a lumped-element transmission line, consisting of a one-dimensional chain of unit cells each with a series inductor L l and a capacitor C l to ground [ Fig. 1(a)]. 17 Such a resonant circuit would exhibit a fundamental resonance where N is the number of unit cells between the input/output coupling capacitors. Beyond the fundamental resonance, there are evenly spaced harmonics at integer multiples of f 0 , as shown in a circuit simulation performed in AWR Microwave Office [ Fig. 1(c)]. The dispersion relation for such a circuit is an increasing function of the wavenumber, characteristic of right-handed transmission.
As described in Refs [5,16], 5, 16 by interchanging the positions of the capacitors and inductors in the lumpedelement right-handed transmission line resonator described above [ Fig. 1(b)], one obtains a circuit with dramatically different transmission properties. There is now a low-frequency infrared cut-off at f IR = 1/4π √ L l C l , below which there is no transmission through the structure. For frequencies just above f IR , there is a dense forest of resonances that get further apart for higher frequencies. The density of resonances for f > f IR increases as more unit cells are added to the transmission line. As with a conventional right-handed lumped-element transmission line, the impedance is still given by Z 0 = L l /C l .
Metamaterial design and fabrication
We design our metamaterial resonators with a target range of f IR ∼ 5 GHz in order to place the densest portion of the metamaterial spectrum in the range of tunability for a typical superconducting transmon qubit. 18 The use of an asymmetric transmon with two different sizes of junctions 19 would allow the IR cut-off for the metamaterial resonator to be located between the upper and lower high-coherence flux-insensitive sweetspots. Our target impedance for the metamaterial lines is Z 0 = 50 Ω, which, when combined with f IR = 5 GHz, results in C l = 300 fF and L l = 0.8 nH.
We fabricate our metamaterial structures on a Si wafer with conventional photolithographic patterning using a DUV stepper followed by a lift-off process with electron-beam deposition of Al thin films, a common material in superconducting qubit circuits. The Al films are 90 nm thick. For our initial metamaterial structures we have used interdigitated capacitors, which allow for a straightforward single-layer fabrication process, although the capacitors occupy a relatively large area and their physical size limits the number of unit cells that we are able to fit on a chip. Our present capacitor design has fingers that are 4 µm wide, with a separation of 1 µm between adjacent fingers and an overlap length of 50 µm [ Fig. 2(a, c)]. For the inductors, we use a meander-line design In order to explore the impact of the lumped-element parameter values on the resonance spectrum, we have fabricated two different versions of metamaterial resonators: type A and type B. In both cases, we have attempted to maintain the ratio of L l and C l for Z 0 ∼ 50 Ω while varying their product to change f IR . On the type A (B) structures, each capacitor has 32 (26) pairs of fingers and each inductor has 12 (7) turns of the meander line.
We have chosen the lumped-element parameters based on a variety of techniques for estimating the resulting capacitance and inductance values. For the capacitors, numerical simulations based on ANSYS Q3D yield a capacitance per finger pair of 8.35 fF, corresponding to C A l = 267 fF and C B l = 217 fF. For the inductors, from an analysis with Sonnet, we obtain inductance estimates L A l = 0.9 nH and L B l = 0.6 nH. We note that these estimates do not account for kinetic inductance. However, for our relatively wide traces on the inductors and the short penetration depth of Al, we estimate the kinetic inductance contribution to be less than 5% of the total inductance for each L l .
Both metamaterial structures consist of 42 unit cells extending across the width of a 6 mm-wide chip [ Fig. 2(d)] with gap coupling capacitors at either end of the metamaterial [ Fig. 2(b)]. For the type A(B) metamaterial resonators, the gap capacitor is 1 (5) µm wide, in order to target a coupling capacitance of 50 (28) fF. Each chip has an Al ground plane surrounding the metamaterial structure and CPW leads for probing the microwave transmission. As is typically done in many cQED implementations, the ground plane contains a lattice of holes to avoid the trapping of Abrikosov vortices in regions of large microwave currents that could contribute excess loss. 20
Measurements of test oscillators
Before measuring the metamaterial resonators, we have designed and tested a series of lumped-element test oscillators fabricated with the same process and lumped-element parameters as in the metamaterials. The test oscillator chips each consist of four lumped-element LC oscillators, each one capacitively coupled to a CPW feedline. Each test oscillator has the same inductor as in a unit cell of metamaterial A or B and the capacitor has the same interdigitated finger parameters as in the corresponding metamaterial line, although the capacitor is split in two halves that are arranged in parallel on either side of the inductor. The total number of finger pairs in the capacitor for oscillator number 1-4 is 29, 33, 37, or 41, respectively, thus allowing us to study the variation in capacitance for different numbers of fingers [ Fig. 3(a, b)].
We perform our measurements of the test oscillators on an adiabatic demagnetization refrigerator (ADR) at a temperature of ∼ 50 mK. We probe the microwave transmission S 21 through the feedline with a vector network analyzer, sending microwave signals to the chip through a coaxial driveline with 53 dB of cold attenuation for thermalization. The transmission signal is amplified by a HEMT mounted on the 3 K plate of the ADR and again with a room-temperature amplifier. A cryogenic mu-metal can mounted on the 3 K plate surrounds the sample for magnetic shielding.
The resonance of each test oscillator results in a dip in S 21 and we fit the resonance trajectory in the complex plane with a standard form, 21 then extract the resonance frequency f 0 and quality factor Q for each test oscillator. The internal loss values extracted from these fits are in the range of 10 −5 , consistent with loss due to two-level systems at interfaces and surfaces in thin-film superconducting circuits at low temperatures. 21,22 Upon obtaining f 0 for each test oscillator, we are able to plot 1/f 2 0 vs. the number of finger pairs in the capacitor of each test oscillator [ Fig. 3(c)]. The resulting linear variation should have a slope related to the product of the specific capacitance per finger pair and the common inductance value for each test oscillator. The slope in Fig. 3(c) combined with the specific capacitance per finger pair of 8.35 fF from the Q3D numerics, corresponds to an inductance of L A l = 1.15 nH. A similar analysis for the type B metamaterial parameters results in L B l = 0.7 nH. These inductance values are slightly larger than our Sonnet estimates from the previous section, which could be due in part to the lack of kinetic inductance in the Sonnet treatment. Also, since our analysis of the test oscillator data only gives us the product of the specific capacitance per finger pair and the inductance, it is possible that our capacitance values from Q3D could be off, which would result in a shift of the inductance values we extract.
Measurements of metamaterials
We cool our metamaterial resonators on the ADR to ∼ 50 mK with the same measurement configuration as described for the test oscillators in the previous section. Using the vector network analyzer, we probe S 21 (f ) from 2 − 15 GHz through each metamaterial resonator, type A and B (Fig. 4). In order to normalize the spectra, we measure the baseline transmission on our ADR below 100 mK on a separate cooldown with a superconducting feedline in place of the metamaterial resonator. Subtracting this baseline for each chip accounts for any sources of loss off of the chip, such as cables, connectors, and attenuators, as well as the gain from the amplifiers.
From the measured S 21 (f ) spectra, there is a clear stop band at low frequencies, with low transmission below ∼ 4 GHz for type A and ∼ 5 GHz for type B, characteristic of an IR cut-off. Beyond this point, there are many resonance peaks that initially get closer together for increasing frequency, reaching a minimum peak spacing of 120 (160) MHz around 5 (6) GHz for type A (B) before spreading out to larger separations at higher frequencies.
Simulations of metamaterials: numerical solutions to lumped-element circuit
For our initial analysis of the measured S 21 (f ) for the two metamaterial resonators, we compare these with numerical solutions to the lumped-element circuit model using AWR Microwave Office (Fig. 5). In this approach, we take the values for C l , L l and C c from the Q3D numerics and test oscillator analysis described in Sec. 2.2 and 3.1. The AWR simulations clearly indicate an IR cut-off, although this is roughly 1 GHz above the first peak in the measured spectra for both type A and type B. Also, the peak density in the AWR curves is highest just beyond the IR cut-off, whereas the measured S 21 spectra exhibit an increasing peak density that reaches a maximum ∼ 1 − 2 GHz beyond the frequency at which the first peak occurs. This difference between the measurements and the AWR simulations is likely due to deviations from the ideal lumped-element model with the standing-wave patterns being influenced by the distributed nature of the transmission in the physical structures. In order to explore this, we need another approach to the simulations that accounts for the actual circuit layouts.
Simulations of metamaterials: finite-element approach
We simulate our physical metamaterial circuit layouts using the Sonnet software tool, which takes the CAD file for our circuit design as an input and solves Maxwell's equations to obtain the field distributions that can then be used to compute simulated S 21 (f ) curves. From the comparison between the measured S 21 (f ) and the Sonnet simulations in Fig. 6, we see that the Sonnet spectra exhibit a first peak that is within ∼ 0.5 GHz of the measured spectra. Also, the Sonnet spectra correctly capture the location of the maxima in the peak density being somewhat beyond the frequency of the first peak, in contrast to the lumped-element simulations from the previous section. The remaining deviations between the measured S 21 (f ) and the Sonnet spectra could be due to various effects, including kinetic inductance in the Al traces, disorder in the values of C l and L l between the different unit cells, and coupling to chip modes in the Si substrate.
Conclusions
We have demonstrated that superconducting metamaterial resonators fabricated from Al thin films with interdigitated capacitors and meander-line inductors can exhibit a transmission spectrum with an IR cut-off and a dense mode spectrum of high-Q peaks. By varying the capacitor and inductor values in the metamaterial, we are able to modify the spectrum in a predictable way. Numerical simulations of the metamaterial structures agree with the measured transmission spectra reasonably well. The ability to predict the mode spectrum for a metamaterial transmission-line resonator from the circuit design parameters is an important stepping stone towards the integration of similar structures with superconducting qubits for applications in quantum information science, for example, following the strategy described by Egger & Wilhelm. 16
Acknowledgments
We acknowledge useful discussions with F.K. Wilhelm, D. Egger, and B. Taketani. This work was supported by the Army Research Office under Contract No. W911NF-14-1-0080. M.D.L. and F.R. also acknowledge support provided by the National Science Foundation under Grant No. DMR-1056423. Device fabrication was performed at the Cornell NanoScale Facility, a member of the National Nanotechnology Infrastructure Network, which is supported by the National Science Foundation (Grant ECS-0335765). | 2015-05-05T14:18:49.000Z | 2015-05-05T00:00:00.000 | {
"year": 2015,
"sha1": "44a955ff37eaced3ac11d8f8de574cc04632da87",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1505.01015",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "80665771f76479b36abcf57c070d4a2755d5b9a3",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
247212184 | pes2o/s2orc | v3-fos-license | Existence and Uniqueness Results for Fractional ( p , q ) -Difference Equations with Separated Boundary Conditions
: In this paper, we study the existence of solutions to a fractional ( p , q ) -difference equation equipped with separate local boundary value conditions. The uniqueness of solutions is established by means of Banach’s contraction mapping principle, while the existence results of solutions are obtained by applying Krasnoselskii’s fixed-point theorem and the Leary–Schauder alternative. Some examples illustrating the main results are also presented.
Introduction
Fractional calculus, dealing with the integrals and derivatives of arbitrary order, constitutes an important area of investigation in view of its extensive theoretical development and applications during the last few decades.For some interesting results on fractional differential equations ranging from the existence and uniqueness of solutions to the analytic and numerical methods for finding solutions, we refer the reader to the following articles: [1][2][3][4][5].Concerning the applications of fractional differential equations in engineering, clinical disciplines, biology, physics, chemistry, economics, signal and image processing, and control theory, for example, see [6][7][8][9][10] for more details.
The study of q-calculus was introduced by Jackson in 1910, see [11,12] for more details.As one of the major driving forces behind the modern mathematical analysis, q-calculus has played important roles in both mathematical and physical problems.For instance, Fock [13] has studied the symmetry of hydrogen atoms using the q-difference equation.The concepts of q-calculus found numerous applications in a variety of fields, such as combinatorics, orthogonal polynomials, basic hypergeometric functions, number theory, quantum theory, quantum mechanics, and theory of relativity for details, see [14][15][16], and the references cited therein.One can find the basic concepts of q-calculus in the text by Kac and Cheung [17], while some details about fractional q-difference calculus can be found in [14,[18][19][20][21].
In 2021, Neang et al. [31] considered the nonlocal boundary value problem of nonlinear fractional (p, q)-difference equations with taking care of solutions of existence and uniqueness results obtained by c D α p,q u(t) = f (t, u(p α t)), t ∈ [0, T/p α ], 1 < α ≤ 2, (1) where ) are constants, c D α p,q denoted by Caputo fractional (p, q) type, while D p,q denoted by first-order (p, q)-derivative.
Qin and Sun [32] studied on a nonlinear fractional (p, q)-difference Schrödinger equation in 2021, given by the following: where 0 < q < p ≤ 1, 2 < α ≤ 3, D α p,q is a Riemann-Liouville-type fractional (p, q)difference operator, and Moreover, Qin and Sun [33] studied positive solutions for fractional (p, q)-difference boundary value problems given by the following: where 0 However, even though Neang et al. [31] investigated and proved the nonlocal boundary value problems by considering on existence results of a class of fractional (p, q)difference equations, it still was a bit complicated with the domain of a function when the authors applied the fractional (p, q)-integral operators.In this paper, to make this paper more smooth and convenient, we have investigated the existence and uniqueness of solutions for the local boundary value problem of fractional (p, q)-difference equation with a new function obtained g ∈ C ([0, b] × R, R), given by the following: where α i , β i , γ i (i = 1, 2) are constants, c D α p,q denoted by Caputo fractional (p, q) type, while D p,q denoted by the first-order (p, q)-derivative.
Preliminaries
In this part, some fundamental results and definitions of the (p, q)-calculus, which can be found in [14,23,25] are given.
The definition of (p, q)-beta function for s, t > 0 is defined by and (13) can also be written as see [34,35] for more details.
Definition 2 ([23]
).Let 0 < q < p ≤ 1, g be an arbitrary function, and t be a real number.The (p, q)-integral of g is defined as t 0 g(s)d p,q s = (p − q)t ∞ ∑ n=0 q n p n+1 g q n p n+1 t (16) provided that the series of the right-hand side in (16) converges.
Definition 5 ([34]
).Let g be a continuous function defined on [0, b].If α > 0, then the Caputo fractional (p, q)-derivative is stated by where α is the smallest integer greater than or equal to α.Notice that if α = 0, then c D 0 p,q g (t) = g(t).
To obtain the sufficient condition of existence and uniqueness of solutions of ( 7)-( 8), employing the following Lemmas of fractional (p, q)-calculus play an important role in those main results.
Lemma 4. In order to prove ( 7) and ( 8), we first give a useful Lemma, as follows: where and it is supposed that Proof.Applying fractional (p, q)-integral on (20), we obtain the following: where c 0 , c 1 are constants and t ∈ [0, b].Utilizing (20) again, we obtain Solving the above system of equations to find the constants c 0 , c 1 , we have Substituting the values of c 0 , c 1 in ( 22), we derive (21).By direction computation, we obtain for the converse.Therefore, this completed the proof.
Main Results
Let C := C ([0, b], R) denote the Banach space of all continuous functions from [0, b] to R, endowed with norm, defined by In view of Lemma 4, we define an operator F : C → C as g(ps, x(ps)) d p,q s g(ps, x(ps)) d p,q s where Observe that x is a solution to (7) and ( 8) if-and only if-x is a fixed-point of F .For convenience, we denote where If k < 1, then ( 7) and ( 8) has a unique solution.
Proof.We transform the problem ( 7) and ( 8) into a fixed-point problem F x = x, where the operator F is given by (24).Applying Banach's contraction mapping principle, we will show that F has a unique fixed point.Define a ball, B r = {x ∈ C : x ≤ r}, with the radius, r, satisfying and Now, we shall show that F ⊂ B r .For any x ∈ B r , consider p ( α 2 ) Γ p,q (α) |g(ps, x(ps))| d p,q s 25) and ( 26), we obtain This shows that F B r ⊂ B r .Now, for x, y ∈ C, we obtain which, in view of (25), we obtain This is because k ∈ (0, 1), F is a contraction.Therefore, ( 7) and ( 8) has a unique solution.The proof is completed.7) and ( 8) has a unique solution, if k < 1.
Lemma 5 (Kranoselskii's fixed-point theorem [36]).Let M be a closed, bounded, convex, and non-empty subset of a Banach space X.Let A, B be two operators, such that: Ax + By ∈ M, whenever x, y ∈ M; (ii) A is compact and continuous; (iii) B is a contraction mapping.
Then, there exists z ∈ M, such that z = Az + Bz.
Observe that P x + Qx = F x.For x, y ∈ B r , we have |g(ps, y(ps))| d p,q s Thus, P x + Qy ∈ B r .By (A 1 ) and ( 27), Q is a contraction mapping.By continuity of f , we obtain that P is continuous.It is easy to see that Thus, the set P (B r ) is uniformly bounded.P is compact.First, Let Then, we obtain which is independent of x, and tends to zero as t 1 → t 2 .So, the set P (B r ) is equicontinuous.By the Arzelá-Ascoli theorem, P is compact on B r .Thus, ( 7) and ( 8 Lemma 6 (Nonlinear alternative for single value maps [37]).Let E be a Banach space, C a closed, convex subset of E and U an open subset of C with u ∈ U .Suppose that F : U → C is a continuous, compact function; that is, F U is a relatively compact subset of C map.Then, either (i) F has a fixed point in U , or (ii) there is a u ∈ ∂U (the boundary of U in C) and λ ∈ (0, 1) with u = λF u.
Proof.Notice that F : C → C defined by (24).F is continuous.Let {x n } be a sequence of the function, such that Therefore, we obtain g p 2 s, x n (p 2 s) − g(p 2 s, x(p 2 s)) d p,q s, which implies that F x n − F x → 0 as n → ∞.
Thus, the operator F is continuous.Next, we show that F maps a bounded set into a bounded set in C ([0, b], R).For a positive number r > 0, let B r = {x ∈ C ([0, b]) : x ≤ r}.Then, for any x ∈ B r , we have Next, F maps bounded sets into equicontinuous sets of C ([0, b], R).Let t 1 , t 2 ∈ [0, b] with t 1 < t 2 be two points and B r be a bounded ball in F .For x ∈ B r , we obtain p,q p ( α−1 2 ) Γ p,q (α − 1) u 1 p 2 s Ψ(r) + u 2 p 2 s d p,q s.
Conclusions
In this paper, we investigated the local separated boundary value problem of a class of fractional (p, q)-difference equations involving the Caputo fractional derivative.By applying some well-known tools in fixed-point theory, such as Banach's contraction mapping principle, Krasnoselskii's fixed-point theorem, and the Leary-Schauder nonlinear alternative, we derive the existence and uniqueness of solutions for the problem.Moreover, some illustrating examples were also presented.
Remark 2 .
If g is a continuous function on [0, b] × R, and there exists a constant L > 0 with |g(t, x) − g(t, y)| ≤ L|x − y|, then ( | 2022-03-03T16:12:21.827Z | 2022-02-28T00:00:00.000 | {
"year": 2022,
"sha1": "c1e3d907a53d2a87e3f21f7bdb0d001d79d32ed7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-7390/10/5/767/pdf?version=1646038309",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bb00edc629cfb0f8d71f1754723dd83be328fd5f",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
260527753 | pes2o/s2orc | v3-fos-license | Figurative Language and Its Meaning Found in The Novel “It Starts with Us”
Figurative language is one of the literary devices used to enrich the meaning of literature work. The research problem of this research is to find the figurative language and its meaning in the novel “It Starts with Us”. This is qualitative research and used literary study. The data collection is taken from the novel “It Starts with Us” by Collen Roover, published in 2022. The data shows that there were five kinds of figurative language simile, metaphor, personification, hyperbole, and repetition. Out of 86 data; 76 data (86%) were simile that consisted of using “like” 65 data (75,5%), using “as” 6 data (6,9%), and using “as…as” 3 data (3,4%). Data figurative language on metaphor and hyperbole were 4 each (4,65%). The last data were personification and repetition which with 2 data each (2,32%). It can be concluded that the novelist emphasized the expressions of metaphor to compare the situation and emotions in the story of the novel.
INTRODUCTION
This research is aimed to find out the figurative language used in the novel "It Starts with Us" and its' meaning. Besides, it identifies the percentage of kinds of figurative used in the novel. This romance novel was written by Colleen Hoover. This is the sequel to her bestseller in 2016 "It Ends with Us" (Hoover, 2022). This novel brings the readers to the intricacies of life after divorce and domestic abuse. The main characters of this novel are Lily, Ryle (her abused husband), and Atlas (Lily's first love). The synopsis of the story in the novel begins with the line between respecting her ex-husband, Ryle, and his position in her life. She wants to move on in her love life and eventually sees that she deserves a romantic relationship and takes the steps in order to have a better life. She is navigating single motherhood, running her flower business, and the hope that she could have a healthy relationship after all. Atlas and Lily attempt to rekindle the love they felt for each other as teenagers.
However, it must deal with the repercussions of the love they have now as adults. Atlas and Lily love each other so much and explore the reality of their relationship. Falling in love is easy but loving is hard work. In addition, in some way, all relationships are flawed and we simply must choose which imperfections we are willing to accept. It shows that Lily deserves second-chance love.
According to (Abrams, M. H. Harpham, 2009), the novel is a great variety of writing that have in common only the attribute of being extended works of fiction written in prose. As an extended narrative, the novel is distinguished from the short story and from the work of middle length its magnitude permits a greater variety of characters, a greater complication of the plot (or plots), ampler development of the milieu, and more sustained exploration of character and motivates that do the shorter, more concentrated modes. While according to (Peter, 2006), a novel is one of the three main kinds of literature (poetry, drama, novel), the novel is the last to evolve and the hardest to define, for reasons suggested in the name. 'A fiction in prose of a certain extent'. The fascination of the novel is that, because of its representational dimension, it raises the problem of the nature of fiction at a point very near to familiar, unfictionalized versions of reality. In addition, (Cuddon, 2013) stated that novel is a wide variety of writings whose only common attribute is that they are extended pieces of prose fiction. In other words, novel is an invented prose narrative of considerable length and a certain complexity that deals imaginatively with human experience, usually through a connected sequence of events involving a group of persons in a specific setting.
Humans characteristically use language, and a characteristic feature of the use of language is that it is meaningful. This research uses semantic theory in order to explore the meaning of figurative language found in the novel. According to (Maria, Aloni. Paul, 2016), semantics is the study of meaning, of the structural ways in which it is realized in natural language, and of the formal logical properties of these structures. The area of formal semantics finds its roots in logic and the philosophy of language and mind, but it has also become deeply entrenched in linguistics and the cognitive sciences. While according to (Jacobson, 2014), each expression that is proven well-formed in the syntax is assigned a meaning by the semantics, and the syntactic rules or principles which prove an expression as well-formed are paired with the semantics which assigned the expression a meaning. An interesting consequence of this view is that every well-formed syntactic expression does have a meaning. While according to (Riemer, 2010), semantics is the study of meaning, besides, it is one of the richest and most fascinating parts of linguistics. In addition (Dixon, 2005), stated that underlying both words and grammar there is semantics, the organization of meaning. A word can have two sorts of meanings. First, it may have 'reference' to the world: red describes the color of blood; chair refers to a piece of furniture, with legs and a back, on which a human being may comfortably sit. Secondly, a word has 'sense', which determines its semantic relation to other words, for example: narrow is the opposite (more specially: the antonym) of wide, and crimson refers to a color that is a special sort of red (we say that crimson is a hyponym of red).
According to (Colton, 2015), figurative language provides a lot of bang for its buck (idiom).
611
Figurative language expresses meaning beyond its correct figurative interpretationcorrectly understanding "I couldn't be better" as a negative when spoken by someone feeling miserable (verbal irony). This extra meaning includes all kinds of thing (hyperbole), such as speaker attitudes and emotions, contextual enhancements and elaboration, social revelations and influences and new meaning arising from interactions between or among these things. Furthermore, (Colton, 2015) stated that there are ten kinds of figurative which are metaphor, simile, verbal irony, oxymoron, hyperbole, contextual expression, understatement, idiom, indirect request and repetition. In addition, (Colton, 2015) explains the meaning and gives examples of kinds of figurative language that stated as follows: 1) Metaphor is a comparison between one thing to another or two different things. Metaphors for people or other things commonly use animals as source domains, on occasion as terms of affection (e.g., "He's my little koala bear"), but also frequently as a means of impolite derision; or whether by leveraging undesirable characteristics of particular animals in reference to people or things (for example, "He's a skunk" or "My car is a turtle") or by the general view that animals are somehow lesser in epistemic quality than people. 2) Simile is a figure of speech that compares two things. The difference is that similes use comparative words "likelike, as". A simple example is "He eats like a horse". Simile is much less investigated the metaphor, although it occurs in frequently in discourse.
Like metaphor, it is a semantic figure, a mental process playing a central role in the way we think and talk about the world, which often associates different spheres. It can may have an affirmative or a negative form: the affirmative form asserts likeness between the entities compared, as -"The sun is like an orange" and the negative one denies likeness, as "The sun is not like an orange". 3) Verbal irony is the use of words occurs when a speaker's intention is the opposite of what he or she is saying.
It comes in several forms and is used to bring humor to a situation and it is used by a speaker intentionally. For example, a character stepping out into a hurricane and saying, "What nice weather we're having!" even though there was a storm outside but he said the weather was good. 4) Oxymorons are typically less negative than verbal irony, though, because their contradictory propositions do not as readily correspond, respectively, to expectations and reality as verbal irony, for example: "Take your time, but hurry it up" or "She's killing me with kindness.". 5) Hyperbole is a figure of speech in which an author or speaker purposely and obviously exaggerates to an extreme. It is used for emphasis or as a way of making a description more creative and humorous. It is important to note that hyperbole is not meant to be taken literally; the audience knows it's an exaggeration. For example, "I'm so hungry I could eat a horse." This example of hyperbole exaggerates the condition of hunger to emphasize that the subject of this sentence is, in fact, very hungry. 6) Contextual expressions comprise is a class of utterances with a variety of structures (e.g., noun-noun combinations and denominal verbs) whose meanings depend completely on discourse contexts, for example, "Their senses depend entirely on the time, place, and circumstances in which they are uttered". As such, contextual expressions are among the most dependent on common ground of all the types of figurative language. 7) Understatement, was worse than comprehension of verbal irony. It is unclear whether the measured imperfection in hyperbole comprehension at very early ages is an artifact of item authenticity and laboratory tasks with compromised contextual supporthyperbole comprehension may occur earlier with more subtle and realistic measures. But the issue is probably moot for present purposesthat hyperbole production develops relatively earlier than other figurative production is revealing for a consideration of pragmatic effect causes. For example: "Seems to be a bit chilly". 8) Idiom is a figure of speech that is used to help express a situation with ease, but by using expressions that are usually completely unrelated to the situation in question. For example, 'Don't worry, driving out to your house is a piece of cake.' The expression of the piece of cake would be understood that it is easy. Normally, the expression obviously wouldn't associate the word 'cake' when it is on its own as anything other than dessert. But in this context, it's a well-known idiom. Another example "We should let sleeping dogs lie", which means to avoid restarting a conflict. 9) Indirect Request is a figure of speech that is used to express of some desire or inclination. Indirect request happens when a person asks another person to tell, order or ask something to a third person. For example, "Could you tell me how much you earn?" meaning here is someone who wants to know the person's income but by using pleasantries. 10) Repetition is something that should take a relatively short time. The repetition also iconizes the redundancy. The repetition is an interesting and fairly novel means of inflating a discrepancy between expectations/preferences and reality -the man's first echoed statement explicitly draws attention to the fact that the woman is currently wondering about something -a state of affairs that is normally finite, for example "Run, run, run!" Previous studies about figurative language were done in eight studies with different object of study, those were poetry, news article, short story, online newspaper and songs' lyrics. First study was done by (Hutahean, Minar. Manik, 2015) entitle "Figurative Meaning Found in Sport News Article".
The aim of the study is to analyze the use of figurative language meaning in sports articles. The data showed that there were seven types of figurative language, those were metaphor, simile, synecdoche, metonymy, hyperbole, personification, and irony. While the percentages of the data from the most to the least were metonymy (51.90%), hyperbole (18,95%), simile (19,25%), personification (9,6%), while irony, metaphor, and synecdoche (2,35%). The sports news is reported emotionally to give impression of the team and the quality of the competition to the readers. Second study was done by (Nurhaida, Marlina, 2017) The method used in data collection is the method of observation. After analyzing the data, the writer found that there are six types of figurative language used in Jamie Miller's song lyrics which consist of 2 metaphors (16,6%), 2 similes (16,6%), 2 personifications (16,6%), 1 paradox (8,3%), 2 apostrophes (16,6%), and 3 hyperboles (24,9%). All of the figurative languages have a connotation that implicitly conveys hidden messages and values of life.
METHOD
The novel "It Starts with Us" is the object of this research. It contains 250 pages with thirtyseven chapters and was published by Simon and Schuster in 2022. It Starts with Us" is Colleen Hoover's sequel to her best-selling novel and the book took a sensation, "It Ends with Us.". The writer uses qualitative research in conducting this research. Literature is different from natural sciences or social sciences. It is the product of the creative writer. According to (Sinha, 2018), literary research, therefore, cannot confine itself to either the literary text or the writer; it has to study both. When its material (the object of study) is the creative writer, it applies the tools of social sciences and when the object of study is the text, it applies the tools which are specific to it.
According to (Yin, 2016) qualitative research is driven by a desire to explain social behavior and thinking, through existing or emerging concepts, which means exploring and understanding the meaning of an individual or group described as a social or human problem. It gives closer attention to the interpretive nature of inquiry and situates the study within the participants. In addition (Taylor et al., 2016), stated that qualitative research methods refer to research that produces descriptive data, written and spoken works, and behavior that can be observed. Data collecting steps are as follows: 1) reading the novel, 2) highlighted sentences in the novel that contain figurative language, 3) distinguish it based on its type, 4) find its meaning, and 5) put them into percentage based on the findings.
RESULTS AND DISCUSSION
The finding of this research based on the novel were 86 data that consists of five figurative language those were metaphor, simile, hyperbole, personification, and repetition. The number and percentage of the data were 1) simile with 74 data (86%), 2) metaphor and hyperbole with 4 data each (4,65%), and personification and repetition with 2 data each (2,32%). The percentage can be seen in Furthermore, the data showed that simile has highest number of data which is 74 data (86%) that consists of simile using "like" 65 data (75,5%) as the highest percentage of simile, second is using "as" 6 data (6,97%), and third is using "as…as" 3 data (3,4%).
Figure 2. Simile Number of Data
The findings about 3 kinds of simile are discussed as follows: first discussion starts with simile using "like". Examples of simile using "like" is "Two minutes ago, you acted like eight hours without a text was too long. Now you're telling me to calm down?" (ch 3, p. 13). In this sentence, the meaning is the comparison is "the situation before is described with eight hours". Second example is "My blood feels like it freezes in my veins". (ch. 6, p. 27). In this sentence, "my blood vessel is compared with it freezes in my veins" which means the person is shocked. Third sentence is "We're using it like a conch shell as if we need it for permission to speak". (ch. 6 p. 31). In this sentence, the way the person using the thing is compared with a conch shell when they want to talk. Fourth data is "He's like a bully to the bullies if that makes sense". (Ch.19,p. 127). In this sentence, the subject "He" is compared with "a bully the bullies. Fifth data is "I'm worried for Lily. I'm worried Ryle is a little bit like my mother, and that he's going to retaliate by fighting for the sake of fighting, and for no other Simile number of data using "like" using "as" using "as..as" reason". (ch. 25, p. 162). In this sentence, Ryle is compared to my mother that will have a fight with no reason.
Second data about simile is using "as" with 6 data. The example of the data is as follows: first sentence is "That's what it feels like-as if these wonderful things happen, but as they start to sink in, they eventually reach a part of me that is still making decisions based on Ryle and his potential reactions." (ch 2, p. 11). In this sentence, the feeling or emotion is compared with wonderful things happen. While second "That one word was meant as a putdown, as if she was saying, Wow, Atlas.
You're not smart enough for something like this. (ch. 13, p. 83). The subject, the word one is meant to be the same as a putdown or don't understand better enough. Furthermore, in the third sentence, "And then, as if it's the most natural thing in the world". (ch. 22 p.142). In this sentence, the situation is compared with the most natural thing in the world as something pure and sincere.
Third data is of simile is the using of "as…as" that consist of 3 data is discussed as follows: first data is "Her snore is as endearing as she is". In this sentence, the snore is compared as endearing in other words inspiring of love or affection" . Second data is "I could see his chest moving just as hard as mine was. (Ch. 11,p. 58). The sentence is described that the hard moving of the chest of two of them as the same. While the third data, the sentence "As happy as I know I can make Lily, she will never be fully until she has your acceptance and cooperation". In this sentence, the happiness of Lily is compared with the acceptance and cooperation.
Second data is metaphor with 4 data (4,65%). The data of metaphor found in this research are discussed as follows. First data is "Apology flowers are my least-favorite kind of bouquets to assemble". (ch. 2, p.6). In this sentence, the comparison is made about two things, those are apology flowers and the feeling of it is as the least-favorite kind of bouquets to resemble. Second sentence, "Life with them was a nightmare". (ch. 12 p. 64). In this sentence though it has negative meaning where the comparison is made between a life with them and a nightmare. In third sentence, "Life is a funny thing". (ch. 15 p. 93). In this sentence which has meaning because is compared life as a funny thing. While, fourth sentence, "This kiss is hope". (ch. 20 p. 133), stated that the writer compares kiss and hope as the same quality of life.
Third data is hyperbole with 4 data (4,65%), those are: first "I just smile and wait for him to reach us, but a walk from the door to the front corner seems like it is expanded by a mile". (ch. 6 p. 16). In this sentence, the writer exaggerates things by saying that "a walk from the door to the front corner is a mile. In the second sentence "Never in a million years did I imagine it will feel like this" (ch. 11 p. 59). In this sentence the writer exaggerates the expression of million years that impossible for a person to live. Third sentence, "I'd run five miles just to give you hug". (ch. 13 p. 87). In this sentence, the writer exaggerates the expression that he had run for five miles. The last sentence, "I am sure her brain is running a mile a minute, searching for an insult or a threat of her own but she has got nothing" (ch. 33 p. 213). In this sentence the expression of a mile a minute is a form of exaggeration of a situation.
617
Fourth and fifth data are personification and repetition with 2 data each (2,32%). The data about 2 examples of personification in this research is stated as follows, first in the sentence, "The coffee never kicked in, I guess". (ch 11 p. 57). In this sentence, human characteristic 'kicked in" is attached to "coffee". (ch 12 p 76). The second sentence, "I feel the guilt swallowing me". In this sentence, human ability "swallowing" is attached to the emotion "guilt". Furthermore, 2 data about repetition are: first in the sentence, "Just keep swimming, swimming, swimming". (ch. 12 p. 73). In this sentence, the verb "swimming" is repeated three times which emphasized that it has to be done. While in the second sentence, "I am sorry, I am sorry, I am sorry". (ch. 24 p. 151). The writer emphasized that the expression of "I am sorry" is repeated three times shown that the person really sorry about the things that he/she has done.
The research found five figurative languages in the data out of ten proposed by Colton. The number of data can be seen on table 1 below. The findings of metaphor in this research are based on the comparison of two nouns using "like, as or as…as". In this novel, the writer compares two nouns in order to give more explanation about the situation of the main character Lily that has to face the reality of her life that must to get divorce from her abused husband, Ryle. Besides, she is in love again with her former boyfriend of her high school, Atlas. While the use of metaphor is more on the expressing of positive and negative emotion like "Life with the was a nightmare" and "This kiss is hope". These positive and negative emotions are based on the experience and circumstance when Lily has to encounter. In addition, the use of hyperbole is stated as the expression of how the main characters, Lily and Atlas, have to fight for their love and be certain that Lily's daughter, Emmy, is safe. Furthermore, the expression personification is used to make things more alive in this story, like in the sentences: "The coffee never kicked in, I guess" and "I feel the guilt swallowing me". The last one is repetition which is used to give more emphasize on the instruction, warning, asking for attention or showing intention.
CONCLUSION
The research problem of this research is to find the figurative language and its meaning in the novel "It Starts with Us". This is qualitative research and used literary study. The data shows that | 2023-08-05T15:12:25.326Z | 2023-05-24T00:00:00.000 | {
"year": 2023,
"sha1": "5aecbd2f674fb2b98d989fd665b471c0fc376f27",
"oa_license": "CCBYSA",
"oa_url": "https://jonedu.org/index.php/joe/article/download/2972/2523",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d3ca0689e555eab3b34422208c7145959a1f12b7",
"s2fieldsofstudy": [
"Art",
"Education"
],
"extfieldsofstudy": []
} |
236212943 | pes2o/s2orc | v3-fos-license | Antibacterial Properties of a Honeycomb-like Pattern with Cellulose Acetate and Silver Nanoparticles
This study involved the preparation and characterization of structures with a honeycomb-like pattern (HCP) formed using the phase separation method using a solution mixture of chloroform and methanol together with cellulose acetate. Fluorinated ethylene propylene modified by plasma treatment was used as a suitable substrate for the formation of the HCP structures. Further, we modified the HCP structures using silver sputtering (discontinuous Ag nanoparticles) or by adding Ag nanoparticles in PEG into the cellulose acetate solution. The material morphology was then determined using atomic force microscopy (AFM) and scanning electron microscopy (SEM), while the material surface chemistry was studied using energy dispersive spectroscopy (EDS) and wettability was analyzed with goniometry. The AFM and SEM results revealed that the surface morphology of pristine HCP with hexagonal pores changed after additional sample modification with Ag, both via the addition of nanoparticles and sputtering, accompanied with an increase in the roughness of the PEG-doped samples, which was caused by the high molecular weight of PEG and its gel-like structure. The highest amount (approx. 25 at %) of fluorine was detected using the EDS method on the sample with an HCP-like structure, while the lowest amount (0.08%) was measured on the PEG + Ag sample, which revealed the covering of the substrate with biopolymer (the greater fluorine extent means more of the fluorinated substrate is exposed). As expected, the thickness of the Ag layer on the HCP surface depended on the length of sputtering (either 150 s or 500 s). The sputtering times for Ag (150 s and 500 s) corresponded to layers with heights of about 8 nm (3.9 at % of Ag) and 22 nm (10.8 at % of Ag), respectively. In addition, we evaluated the antibacterial potential of the prepared substrate using two bacterial strains, one Gram-positive of S. epidermidis and one Gram-negative of E. coli. The most effective method for the construction of antibacterial surfaces was determined to be sputtering (150 s) of a silver nanolayer onto a HCP-like cellulose structure, which proved to have excellent antibacterial properties against both G+ and G− bacterial strains.
Introduction
Natural patterns and structures provide inspiration for scientists of diverse technological backgrounds to create artificial products (from different materials) with similar properties as naturally occurring products [1,2]. One such pattern is the naturally occurring honeycomb-like pattern (HCP) [2,3]. The surfaces of products with this pattern consists of thousands of interconnected hexagonally formed cells that create an efficient structure with a large surface area. The HCP, due to its excellent properties, such as structural and mechanical strength, low density, and porosity, has found applications in and tissues [22,23]. Additionally, it is resistant to protein absorption, making it suitable for in vivo and in vitro studies [24]. PEG is mostly used in the form of hydrogels. Its properties imitate a three-dimensional environment similar to soft tissues and enable the diffusion of nutrients and cell waste [25,26]. PEG is a biodegradable polymer only when copolymerized with other biodegradable polymers, such as polyglycolic acid (PGA) and poly-L-lactide acid (PLA) [27]. Many scientists have reported that PEG-based surfaces offer protection from external contamination; however, the protection level is not very high and a certain number of bacteria can get onto the polymer [28][29][30][31]. This is why the main goal is to find an antibacterial agent that can efficiently eliminate bacterial contamination, while at the same time being biocompatible with the human body [32]. One such effective option is silver nanoparticles (AgNPs), since they have good antibacterial, antiviral, and antifungal activity [33][34][35][36]. Ag acts as an antibacterial agent in its ionic form at low concentrations, although no significant antibacterial effect was found in the Ag 0 form. The deposition of an Ag layer on a substrate's surface is mostly achieved by sputtering in a vacuum environment [37].
In this study, we focused on the effects of Ag nanoparticles sputtered on the surfaces of HCP-like structures or incorporated into their morphology. The changes of the surface morphology of the modified structures, their chemical compositions, and their antibacterial effects were investigated and compared with an unmodified sample with HCP-like structures. The effects of the combination of both aspects increased the effective surface area of the HCP-like pattern, while the nanocluster surface formation effectively inhibited the growth of the selected bacteria. To the best of our knowledge, the antibacterial properties of the silver-sputtered HCP cellulose acetate pattern have not been reported to date.
Surface Morphology and Roughness
The goal of this study was to create an HCP-like pattern on FEP polymers and to modify these structures with Ag. Modification by plasma discharge can substantially affect the chemical composition and structure of a material; therefore, the criteria for substrate modification are significant for the resulting polymer surface [38]. The most suitable parameters for plasma etching were selected according to the study reported by Neznalová et al. [39], i.e., material modification at 8 W for 240 s, in order to create a suitable surface for HCP-like structures. The modified honeycomb pattern was prepared by sputtering Ag nanoparticles onto the HCP-like pattern ( Figure 1C,D) or by incorporating them into PEG. This mix was added to the solution for preparation of the HCP-like structures ( Figure 1B).
The surface morphology, roughness, and effective surface area for HCP and modified HCP structures with Ag were determined using the AFM method (see Figure 1). From Figure 1, we can see that the "pristine" HCP structure (A) contains hexagonal pores formed on its surface using the IPS method, with slight variations from the optimal hexagonal pattern; however, in comparison to other samples, it exhibited the most representative HCP-like structure without any further modification. After additional sample modification with Ag ( Figure 1B-D), one can see the change from the optimal HCP-like pattern, which was mostly caused by the presence of Ag deposited in PEG added into the source polymer solution ( Figure 1B). PEG has a high molecular weight and a gel-like structure, which may have resulted in the increase in the roughness of sample B to 358.0 nm when compared to unmodified sample A, with a roughness of 271.0 nm. The second variation of the honeycomb pattern was achieved through direct sputtering on the surface with the HCPlike structure ( Figure 1C,D). The AFM scans in Figure 1 show Ag nanostructures and their reduced roughness, which were created during the before mentioned process (Ag thin layer). In sample C, the HCP-like structure was destroyed through the disintegration of the walls, while the "hexagonal" pores formed had smaller diameters and heights than unmodified sample A. Comparing the sputtered sample ( Figure 1D) with the unmodified sample ( Figure 1A), we observed a significant decrease in pattern homogeneity. The difference was confirmed repeatably, which may have been caused by either the impacts of silver atoms with high energy filling the cavities with silver nanoclusters; however, this difference in morphology was still surprising. The surface morphology, roughness, and effective surface area for HCP and modified HCP structures with Ag were determined using the AFM method (see Figure 1). From Figure 1, we can see that the "pristine" HCP structure (A) contains hexagonal pores formed on its surface using the IPS method, with slight variations from the optimal hexagonal pattern; however, in comparison to other samples, it exhibited the most representative HCP-like structure without any further modification. After additional sample modification with Ag ( Figure 1B-D), one can see the change from the optimal HCP-like pattern, which was mostly caused by the presence of Ag deposited in PEG added into the source polymer solution ( Figure 1B). PEG has a high molecular weight and a gel-like structure, which may have resulted in the increase in the roughness of sample B to 358.0 nm when compared to unmodified sample A, with a roughness of 271.0 nm. The second variation of the honeycomb pattern was achieved through direct sputtering on the surface with the HCP-like structure ( Figure 1C,D). The AFM scans in Figure 1 show Ag nanostructures and their reduced roughness, which were created during the before mentioned process (Ag thin layer). In sample C, the HCP-like structure was destroyed through the disintegration of the walls, while the "hexagonal" pores formed had smaller diameters and heights than unmodified sample A. Comparing the sputtered sample ( Figure 1D) with the unmodified sample ( Figure 1A), we observed a significant decrease in pattern homogeneity. The difference was confirmed repeatably, which may have been caused by either the impacts of silver atoms with high energy filling the cavities with silver nanoclusters; however, this difference in morphology was still surprising.
Surface Morphology Analysis Using SEM and Surface Chemistry Analysis
The SEM analysis of the prepared samples is depicted in Figure 2, showing different views of the morphologies of the prepared samples. Compared to the AFM scans in Figure 1, in the SEM scans, the unmodified sample A indicates a structure with an expressive hexagonal shape. New arrays were formed between the "hexagonal" structure, which can be observed on the modified sample B. After sputtering of AgNPs, small spherical pores
Surface Morphology Analysis Using SEM and Surface Chemistry Analysis
The SEM analysis of the prepared samples is depicted in Figure 2, showing different views of the morphologies of the prepared samples. Compared to the AFM scans in Figure 1, in the SEM scans, the unmodified sample A indicates a structure with an expressive hexagonal shape. New arrays were formed between the "hexagonal" structure, which can be observed on the modified sample B. After sputtering of AgNPs, small spherical pores were created. On the other hand, similar HCP structure destruction can be seen, especially in their walls, in comparison to the AFM scans. The destruction and new pores may have been formed by the high rate of sputtered AgNPs from the target onto the structure.
The surface chemistry of the prepared samples with the HCP-like structure was determined by EDS analysis and the results are shown in For the EDS method, the spatial resolution ranges from 50 nm to 1 µm [40]. The main characteristic of the FEP polymer is the large amount of fluorine introduced in its structure. This meant that all samples (except the sample with PEG + Ag) containing the fluorine element were measured through the structure up to the substrate polymer. The highest amount (approximately 25 at %) of fluorine was detected on the sample with an HCP-like structure, while the lowest amount (0.08%) was measured on the PEG + Ag sample. Samples with Ag layers (150 and 500 s) had similar fluorine concentrations of approximately 14%. As expected, the thickness of an Ag layer on the HCP surface depended on the length of sputtering (either 150 s or 500 s) of Ag. The sputtering time for Ag (150 s and 500 s) corresponded to layers with heights of about 8 nm (3.9 at % of Ag) and 22 nm (10.8 at % of Ag), respectively. The sample with PEG incorporated in the structure did not reveal the presence of AgNPs, which may have been caused by AgNPs penetration deeper into the bulk of the material. On the other hand, the sample with PEG + Ag had the highest oxygen concentration (21.4%). This was probably caused by the presence of PEG in the biopolymer foil and the content of oxygen atoms in its natural structure. The fluorine was also not detected, since even in the "lower" area (with lower thickness) of the honeycomb-like unit, the thickness of it was still higher than approximately 100 nm; therefore, it was able to "shield" the fluorine signal from the substrate below. We can deduce from the EDS analysis that AgNPs bound with oxygen were located in the atmosphere and according to the study by Thijssen et al., who proved that oxygen atoms can be incorporated into atomic chains with noble atoms [41]. This effect may be the reason for the higher oxygen concentrations (around 12%) for samples with a thin Ag layer. The lowest oxygen concentration (4.5%) was detected on the surface of the sample containing HCP. were created. On the other hand, similar HCP structure destruction can be seen, especially in their walls, in comparison to the AFM scans. The destruction and new pores may have been formed by the high rate of sputtered AgNPs from the target onto the structure. The surface chemistry of the prepared samples with the HCP-like structure was determined by EDS analysis and the results are shown in . For the EDS method, the spatial resolution ranges from 50 nm to 1 µm [40]. The main characteristic of the FEP polymer is the large amount of fluorine introduced in its structure. This meant that all samples (except the sample with PEG + Ag) containing the fluorine element were measured through the structure up to the substrate polymer. The highest amount (approximately 25 at %) of fluorine was detected on the sample with an HCP-like structure, while the lowest amount (0.08%) was measured on the PEG + Ag sample. Samples with Ag layers (150 and 500 s) had similar fluorine concentrations of approximately 14%. As expected, the thickness of an Ag layer on the HCP surface depended on the length of sputtering (either 150 s or 500 s) of Ag. The sputtering time for Ag (150 s and 500 s) corresponded to layers with heights of about 8 nm (3.9 at % of Ag) and 22 nm (10.8 at % of Ag), respectively. The sample with PEG incorporated in the structure did not reveal the presence of AgNPs, which may have been caused by AgNPs penetration deeper into the bulk of the material. On the other hand, the sample with PEG + Ag had the highest oxygen concentration (21.4%). This was probably caused by the presence of PEG in the biopolymer foil and the content of oxygen atoms in its natural structure. The fluorine was also not detected, since even in the "lower" area (with lower thickness) of the honeycomb-like unit, the thickness of it was still higher than approximately 100 nm; therefore, it was able to "shield" the fluorine signal from the substrate below. We can deduce from the EDS analysis that AgNPs bound with oxygen were located in the atmosphere and according to the study by Thijssen et al., who proved that oxygen atoms can be incorporated into atomic chains with noble atoms [41]. This effect may be the reason for the higher oxygen concentrations (around 12%) for samples with a thin Ag layer. The lowest oxygen concentration (4.5%) was detected on the surface of the sample containing HCP. We selected for presentation the typical HCP pattern sputtered with 500 s Ag ( Figure 6). The FTIR spectrum of the CA honeycomb material covered with Ag showed the characteristic bands attributed to the vibrations of the acetate group: the carbonyl stretching at 1750 cm −1 (νC=O), methyl bending at 1370 cm −1 (δC-CH3), and peak at 1211 cm −1 , attributed to C-O stretching of the acetyl group. A strong band at 1155 cm −1 (due to C-O antisymmetric bridge stretching and C-O-C pyranose ring skeletal vibration) was also detected, as well as a band at 1045 cm −1 (C-O-C stretching of pyranose ring), while We selected for presentation the typical HCP pattern sputtered with 500 s Ag ( Figure 6). The FTIR spectrum of the CA honeycomb material covered with Ag showed the characteristic bands attributed to the vibrations of the acetate group: the carbonyl stretching at 1750 cm −1 (νC=O), methyl bending at 1370 cm −1 (δC-CH3), and peak at 1211 cm −1 , attributed to C-O stretching of the acetyl group. A strong band at 1155 cm −1 (due to C-O antisymmetric bridge stretching and C-O-C pyranose ring skeletal vibration) was also detected, as well as a band at 1045 cm −1 (C-O-C stretching of pyranose ring), while the broad hydroxyl group absorption appeared at approximately 3470 cm −1 . The results indicated that even after sputtering of the acetate cellulose honeycomb pattern, the spectrum was in good accordance with typical acetate cellulose spectra presented in [42,43].
the energy-dispersive X-ray spectroscopy method for the sample with a honeycomb-like (HCP) structure, samples with Ag thin layers with the different time lengths of metal deposition (150 s and 500 s), and the sample containing polyethylene glycol (PEG) with sputtered Ag.
We selected for presentation the typical HCP pattern sputtered with 500 s Ag ( Figure 6). The FTIR spectrum of the CA honeycomb material covered with Ag showed the characteristic bands attributed to the vibrations of the acetate group: the carbonyl stretching at 1750 cm −1 (νC=O), methyl bending at 1370 cm −1 (δC-CH3), and peak at 1211 cm −1 , attributed to C-O stretching of the acetyl group. A strong band at 1155 cm −1 (due to C-O antisymmetric bridge stretching and C-O-C pyranose ring skeletal vibration) was also detected, as well as a band at 1045 cm −1 (C-O-C stretching of pyranose ring), while the broad hydroxyl group absorption appeared at approximately 3470 cm −1 . The results indicated that even after sputtering of the acetate cellulose honeycomb pattern, the spectrum was in good accordance with typical acetate cellulose spectra presented in [42,43].
Wettability
Contact angle measurement is one of the effective methods used to understand surface properties such as wettability, adhesion, and surface energy. A hydrophobic surface (high contact angle) indicates poor sample wettability, while a hydrophilic surface (low contact angle) indicates better physical properties [44]. Figure 7 shows the contact angles of the prepared FEP samples as well as pristine FEP over the 45 days that the aging study was performed. The contact angle on pristine FEP was determined to be 104 • based on further studies. Additionally, other samples underwent changes in their surface wettability during the aging study. The plasma treatment used for the sample preparation decreased the contact angle by creating radicals on the surfaces and reactions leading to oxygencontaining groups. PEG also contains hydroxyl groups in its structure, which give it a more hydrophilic surface compared to pristine biopolymer while having the lowest contact angles (ranging from 30 • to 50 • ) during the aging process. After 45 days, structures with thin Ag layers (150 s and 500 s) on the surface had very similar contact angles (approx. 90 • ). Their surfaces were more hydrophobic than other prepared samples however, they still had more oxygen in their structures (see Figure 6, HCP + Ag 150 s/+ Ag 500 s). This observation indicated certain differences between the contact angle measurements (sample surface, approx. top ten atomic layers) and the EDS treatment (in-depth analysis). The possible reason may be that AgNPs with bound oxygen rotate into the structure or migrate deeper into polymer chains. The HCP sample maintained a mean value of about 70 • . It can be concluded that the contact angle depends on the physicochemical properties rather than on the surface structure. The lowest contact angles were observed in samples with silver nanoparticles incorporated in combination with oxygen-containing PEG chains, which were maintained during the aging process over 45 days. depth analysis). The possible reason may be that AgNPs with bound oxygen rotate into the structure or migrate deeper into polymer chains. The HCP sample maintained a mean value of about 70°. It can be concluded that the contact angle depends on the physicochemical properties rather than on the surface structure. The lowest contact angles were observed in samples with silver nanoparticles incorporated in combination with oxygen-containing PEG chains, which were maintained during the aging process over 45 days.
Antibacterial Properties
Two commonly occurring bacterial strains were chosen as model microorganisms in order to evaluate the antibacterial activity of the prepared nano-and microstructures; specifically, these were Gram-negative (G−) E. coli and Gram-positive (G+) S. epidermidis.
Antibacterial Properties
Two commonly occurring bacterial strains were chosen as model microorganisms in order to evaluate the antibacterial activity of the prepared nano-and microstructures; specifically, these were Gram-negative (G−) E. coli and Gram-positive (G+) S. epidermidis. In this study, we aimed to identify the antibacterial properties of HCP-sputtered AgNP structures and samples with Ag incorporated into the polymer bulk against selected G+ and G− bacteria. Figure 8 represents the average numbers of colony-forming units for both bacterial strains incubated in the selected samples.
Ag + in a monoatomic-ionic state created through the oxidative dissolution of the Ag 0 NPs on the HCP surface was the appropriate antibacterial agent (see results in Figure 8) [45]. Differences occurred in the sample with AgNPs sputtered into PEG. The results of the EDS analysis showed no Ag elements in the sample structure and induced a low level of bacterial inhibition. PEG has antibacterial properties by itself, but is not as effective as ionic Ag + . Vasudevan et al. designed patterned structures in different sizes and observed that all prepared patterned surfaces were covered with a considerably low number of bacterial colony-forming units compared to flat surfaces [46]. In the same way, we prepared an excellent antibacterial patterned structure and were able to detect antibacterial activity on the modified HCP-like structure. Regarding the antibacterial activity of nanostructured Ag, the interaction is usually based on two synergistic processes. The first one is connected with the direct contact of the bacterium with noble metal Ag, while the second effect, which influences the bacteria, is the release of Ag + ions into the medium and their subsequent interaction with the bacteria, depending on the specific conditions [45][46][47][48]. Ag ions may interact with four major components of cell bacteria, which are the plasma membranes, cell walls, bacterial DNA, and proteins. For our study, we assumed that the antibacterial activity of E. coli is based on the direct interactions of Ag atoms with bacteria and partially on the release of Ag + into the environment, as both processes take place. The Ag + also causes degradation of the peptidoglycan cell wall leading to cell lysis (cell death), preventing further proliferation of bacteria. The ions may also penetrate into the inner part of the bacteria and are bound on the basis of their DNA. In this study, we aimed to identify the antibacterial properties of HCP-sputtered AgNP structures and samples with Ag incorporated into the polymer bulk against selected G+ and G-bacteria. Figure 8 represents the average numbers of colony-forming units for both bacterial strains incubated in the selected samples. Ag + in a monoatomic-ionic state created through the oxidative dissolution of the Ag 0 NPs on the HCP surface was the appropriate antibacterial agent (see results in Figure 8) [45]. Differences occurred in the sample with AgNPs sputtered into PEG. The results of the EDS analysis showed no Ag elements in the sample structure and induced a low level of bacterial inhibition. PEG has antibacterial properties by itself, but is not as effective as ionic Ag + . Vasudevan et al. designed patterned structures in different sizes and observed that all prepared patterned surfaces were covered with a considerably low number of bacterial colony-forming units compared to flat surfaces [46]. In the same way, we prepared an excellent antibacterial patterned structure and were able to detect antibacterial activity on the modified HCP-like structure. Regarding the antibacterial activity of nanostructured Ag, the interaction is usually based on two synergistic processes. The first one is connected with the direct contact of the bacterium with noble metal Ag, while the second effect, which influences the bacteria, is the release of Ag + ions into the medium and their subsequent interaction with the bacteria, depending on the specific conditions [45][46][47][48]. Ag ions may interact with four major components of cell bacteria, which are the plasma membranes, cell walls, bacterial DNA, and proteins. For our study, we assumed that the antibacterial activity of E. coli is based on the direct interactions of Ag atoms with bacteria and partially on the release of Ag + into the environment, as both processes take place. The Ag + also causes degradation of the peptidoglycan cell wall leading to cell lysis (cell death), preventing further proliferation
Pattern Preparation and Modification
The polymer film was treated by Ar + plasma using the SCD 050 sputtering device from BAL-TEC. The purity of gas in the chamber was 99.997% and the pressure was 8 Pa. The samples were placed on a circular holder (anode) with a diameter of 10 cm and a distance of 5 cm from the cathode. They were modified at 8 W for 240 s.
In the next step, HCP-like structures were formed using the improved phase separation method via immersion of plasma-modified polymeric substrates in the prepared 100 mL solution for 10 s. The solution was a mixture of two solvents, namely chloroform and methanol, at a volume ratio of 85:15. Then, 2 g of cellulose acetate was added to the mixture while stirring, which gave a homogeneous solution. Then, the HCP-like structures were removed and left to air dry in Petri dishes. After complete evaporation of the solvents, the samples were prepared for further modification and were also subjected to examination using various analytical methods.
Silver Nanostructure Preparation
Sputtering of thin Ag layers on the surfaces of the prepared HCP-like structures was performed using the Quorum Q300T ES with cathodic sputtering. Different sputtering times were used (150 s and 500 s), with a constant sputtering current of 20 mA. A set of samples was prepared to determine the thicknesses of the sputtered thin Ag layers using the scratch test method. Ag layers were deposited on glass slides, while the experimental conditions were the same as in the previous case for the deposition of Ag onto HCP-like structures.
Sputtering of Ag nanoparticles into 2 mL of PEG solution was performed using a Quorum Q 150RS instrument. The deposition time was 300 s and the current was 30 mA. The concentration of Ag nanoparticles in 2 mL of PEG was 1.05 mg·mL −1 . Subsequently, the solution was added to the polymer solution (CHCl 3 , MeOH, CA). We determined the size of the gold nanoparticles to be 8 nm [49].
We investigated the differences between the surface morphologies of the samples including an unmodified substrate (pristine), a substrate with HCP-like structures (HCP), a substrate with sputtered Ag layers on the surfaces of HCP-like structures (HCP Ag; layers sputtered for 150 s and 500 s), and a substrate with Ag nanoparticles sputtered into the PEG and subsequent incorporation into HCP structures (HCP (PEG + Ag)).
Analytical Methods
The wettability of all samples was studied via goniometric measurements of the contact angles, with a drop of distilled water applied to the surface of each sample (6 positions). The contact angles of the samples were determined using a See System goniometer (Advex Instruments) at room temperature. A drop of water (8 µL) was applied onto each sample with a Transferpette ® automatic pipette (Brand, Wertheim, Germany).
Atomic force microscopy (AFM) was used to study the surface morphology of the FEP substrate with HCP microstructures. The movement of the tip as it passed over the sample was recorded and a point-by-point image of the surface was compiled. A Dimension ICON atomic force microscope with a SCANASYST-AIR Si tip from Bruker Corp. (Billerica, MA, USA) was used for the measurement. At the same time, the mean surface roughness (R a ) was determined, which represents the arithmetic mean of the absolute values of the height deviations measured from the central plane. Samples were measured in tapping mode (RTESPA probe with constant elasticity 40 N m −1 ) or QNM (Scan Asyst air probe, elasticity constant of 0.4 N m −1 ). The thicknesses of the sputtered Ag layers on glass samples were also measured using the scratch test and subsequent AFM analysis.
Scanning electron microscopy (SEM) (Tescan, Brno, Czech Republic) and energydispersive X-ray spectroscopy (EDS) were used for detailed analysis of the morphology and chemical characterization of the FEP substrate with HCP microstructures. We used a scanning electron microscope LYRA3 GMU (Tescan, Brno, Czech Republic) with accelerating voltage of 10 kV for the electrons that bombarded the samples and an F-MaxN analyzer and SDD detector (Oxford Instruments, Abingdon, UK) with an applied accelerating voltage of 10 kV for EDS. Platinum (target, purity 99.999%, Safina, Vestec, Czech Republic) was sputtered onto the samples before analysis using a Quorum Q300T sputtering device at a current of 30 mA for 400 s.
The FTIR system we used was a Nicolet iS5 (Fisher Scientific, Waltham, MA, USA) with a diamond crystal iD7 ATR accessory. The spectra were obtained as averages from 128 measurement cycles in the 4000-600 cm −1 spectral range with 0.964 cm −1 data intervals. An atmospheric suppression feature was employed to eliminate ambient CO 2 and H 2 O concentration changes.
Antibacterial Tests
Gram-negative and Gram-positive bacterial strains of E. coli (DBM 3138) and S. epidermidis (DBM 2124), respectively, were used for the evaluation of antibacterial tests of the prepared HCP-like structures. The bacterial strains were transferred from stock agar plates into LB liquid medium and incubated at 37 • C over 24 h while gently shaking. Bacterial growth was confirmed by measuring the optical density (OD) at 600 nm. Subsequently, the bacterial suspensions were diluted in prewarmed (37 • C) PBS. Then, 125 µL of the bacterial suspension with PBS was applied on the surface of each sample. Five 25 µL drops of bacterial suspension from each sample were then loaded onto LB and PCA agar plates used for E. coli and S. epidermidis, respectively. The samples were then cultured overnight at 37 • C. The tests were performed on three samples from each preparation step, which means that there were 15 drops for one preparation phase. At the same time, a control was performed by applying 15 drops of the bacterial suspension incubated only in PBS (with no sample added) onto agar plates, which were then treated in the same way as the samples.
Conclusions
We prepared an HCP-like patterned structure with sputtered AgNPs on the plasmaactivated surface of an FEP polymer. The Ag nanostructure was prepared in two forms-as a thin layer on the HCP-like surface or sputtered into PEG, which was used for HCP preparation itself. Through combinations of the proposed modification methods (plasma exposure, addition of AgNPs into the source solution, direct Ag deposition, and isolated cluster formation), we managed to prepare HCP-like structures with differences in morphology, surface chemistry, wettability, and antibacterial properties. The plasma deposition process created an optimal surface for the formation of an HCP-like cellulose acetate structure. The HCP samples also had good surface wettability, and surprisingly the HCP-like pattern from cellulose acetate significantly suppressed the colonization of both S. epidermidis and E. coli. Sputtering of thin Ag layers increased the contact angle of the pattern, causing particular disruption but combined with remarkable effects against both evaluated bacterial strains. The greatest decreases of CFU for both bacterial strains were determined for HCP-like units sputtered with Ag for only 150 s. The incorporation of AgNPs into the polymer solution with PEG also decreased the uniformity of the HCP pattern. The selected samples are good candidates for testing in vitro for scaffold applications in tissue engineering. | 2021-07-25T06:17:03.984Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "840ec8d17df04d1433ca65e5fd4e1085c5059913",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/14/4051/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e80d53703adb4707fdb2b7f2b8bb39d333a01d35",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
239072436 | pes2o/s2orc | v3-fos-license | Exploring the Link between Novel Task Proceduralization and Motor Simulation
Our ability to generate efficient behavior from novel instructions is critical for our adaptation to changing environments. Despite the absence of previous experience, novel instructed content is quickly encoded into an action-based or procedural format, facilitating automatic task processing. In the current work, we investigated the link between proceduralization and motor simulation, specifically, whether the covert activation of the task-relevant responses is used during the assembly of action-based instructions representations. Across three online experiments, we used a concurrent finger-tapping task to block motor simulation during the encoding of novel stimulus-response (S-R) associations. The overlap between the mappings and the motor task at the response level was manipulated. We predicted a greater impairment at mapping implementation in the overlapping condition, where the mappings’ relevant response representations were already loaded by the motor demands, and thus, could not be used in the upcoming task simulation. This hypothesis was robustly supported by the three datasets. Nonetheless, the overlapping effect was not modulated by further manipulations of proceduralization-related variables (preparation demands in Exp.2, mapping novelty in Exp.3). Importantly, a fourth control experiment ruled out that our results were driven by alternative accounts as fatigue or negative priming. Overall, we provided strong evidence towards the involvement of motor simulation during anticipatory task reconfiguration. However, this involvement was rather general, and not restricted to novelty scenarios. Finally, these findings can be also integrated into broader models of anticipatory task control, stressing the role of the motor system during preparation.
INTRODUCTION
Following instructions is key for our flexible adaptation to changing environments. Generating actions from instructions allows the success at the very first try with a task, in sharp contrast with more time-consuming trial-and-error learning, which mostly drives non-human apes' behavior (Cole et al., 2013). The behavioral relevance of this skill has motivated a growing body of literature aiming to understand the cognitive and neural mechanisms allowing instructed performance (Brass et al., 2017). In the present study, we aimed to extend these efforts and address whether motor simulation underpins our ability to achieve new tasks using instructions.
Novel instructed performance relies on control mechanisms that exploit the instruction information to prepare us for the upcoming task (Cole et al., 2017(Cole et al., , 2018. While both novel (Cole et al., 2017(Cole et al., , 2018Meiran et al., 2015a) and already-known demands (Meiran, 1996;Monsell, 2003) benefit from anticipatory task control, previous research has stressed the role of a particular preparatory mechanism engaged during first encounters with a task: the proceduralization. This process consists of the generation of action-based (or procedural) task representations from novel instructions. In novel task contexts, where no experience has been accumulated yet, these representations are assembled from scratch, by quickly transforming the instruction content from a declarative format into an action-oriented one (Brass et al., 2017). Once the procedural representation is built, it induces a preparedness state in which stimuli reflexively trigger the relevant responses (Hommel, 2000). As a consequence, instruction proceduralization leads to novel actions that are not only fast and efficient but also automatic. Robust evidence supports the presence of the proceduralization process, identifying signatures of instructions-induced automaticity . Specifically, the mere encoding of novel stimulus-response (S-R) mappings interferes with the performance in secondary tasks sharing the same stimuli, generating task compatibility effects Liefooghe et al., 2012Liefooghe et al., , 2013Meiran et al., 2015a;Meiran & Cohen-Kdoshay, 2012). Nonetheless, despite these results successfully capture the behavioral consequences of the proceduralization, the mechanisms mediating this transformation are uncertain. Thus, an open question in the field is how novel action-based task representations emerge in the absence of any physical experience.
An intriguing possibility is that instruction proceduralization relies on anticipatory motor simulation (Moran & O'Shea, 2020). It has been proposed that the neural system devoted to action-control is not only in charge of overt execution, but also replays (or simulates) actions covertly (Jeannerod, 1994(Jeannerod, , 2001. Both theoretical (Grush, 2004;Jeannerod, 2001) and empirical work (Guillot & Collet, 2005;Hardwick et al., 2018;Hétu et al., 2013) support that equivalent movements and kinesthetic representations are shared by action execution and simulation. Accordingly, instructions could induce the covert activation of the relevant responses, which could be bound with the stimulus' one, enabling action-based task coding. This possibility resonates with neuroimaging results showing activity across the motor cortices during novel task preparation (Hartstra et al., 2011(Hartstra et al., , 2012Ruge & Wolfensteller, 2010). Moreover, it has been recently shown that motor simulation, engaged by imagery, automatizes S-R association processing and benefits novel task implementation (Theeuwes et al., 2018). However, in these studies, the mappings are covertly practiced on multiple occasions, whereas instructions-induced automaticity is reported before the first implementation (Meiran et al., 2015a). Furthermore, participants were externally asked to imagine their responses. In consequence, it remains unaddressed whether we engage in motor simulation as a by-default strategy during instruction preparation.
In the current work, we explored the role of motor simulation for novel task proceduralization. Our strategy was to prevent motor simulation while participants encoded novel mappings, by loading the motor system with a finger-tapping task. This dual-task approach follows previous studies in which motor imagery tasks are combined with overt demands to investigate the cognitive processes underpinning motor simulation (Gabbard et al., 2009;Kunz et al., 2009;Stevens, 2005). For instance, Stevens (2005) showed that actions performed in imagery are sensitive to the effector involved in a dual, overt motor task. Imaging running is disrupted by a leg-related motor task, and imaging clapping, by an arm-related motor task (Stevens, 2005). These results stress the overlap between the representations engaged by covert and overt performance, suggesting that the engagement of a particular action representation by motor
EXPERIMENT 1
In a first experiment, we assessed the impact of concurrent motor demands on novel mapping proceduralization. We used a paradigm where novel S-R mappings were encoded while participants performed a finger-tapping task, and assessed its impact on the first time the mappings were implemented. Our first prediction was to find a general impairment at mapping implementation due to the dual motor demands. To do so, we included a control block, where the S-R mapping task was kept identical, but no finger-tapping was carried out. We expected a better performance in the control block than in the remaining ones, where the finger-tapping was performed. Our second, and more critical hypothesis was to find a magnified impact of the finger-tapping task when it overlapped at the response level with the S-R mappings. This was assessed by comparing two conditions: one in which the same effectors were required by the mappings and the motor task (overlapping response sets), and another one in which an independent set of effectors were required by each task (non-overlapping response sets). We hypothesized an impoverished performance in the overlapping response set condition in comparison with the non-overlapping one.
METHODS Participants
The online study was completed by 100 participants (33 females, 66 males, 1 non-binary individual) on the Prolific Academic Website (https://www.prolific.co/). The mean age was 25.36 years old (SD = 7.27 years). Participation was compensated with £6 (£5 as a fixed rate and a £1 bonus offered for high performance, but that all participants received). The sample size was set to detect a small effect size (Cohen's d = 0.3) with a 90% power in a paired-sample t-test (see Data Analysis section).
Material
We generated 224 pairs of novel S-R mappings (168 for the experimental procedure, and 56 for the practice sessions) per participant. Each pair consisted of two pictures (two animals, two inanimate objects, or an animal and an inanimate object) located at both sides of the word "index" or "middle", indicating the relevant fingers to respond. The picture located at the left was linked to a left-hand response with the indicated finger, and the picture at the right, to a right-hand response. Critically, we employed different pictures for every trial, ensuring that the individual S-R associations were always new. In this sense, even when the more general task remained invariant across the experiment (i.e.: to associate the left-side picture with a left-hand response, and the right-side picture, with a right-hand response), the specific S-R associations changed on a trial-by-trial basis, requiring that novel procedural mapping representations were always created.
Pictures were drawn from a database of 1550 images of animate (non-human animals) and inanimate (vehicles and music instruments) objects used in previous studies González-García et al., 2020. All images were in grayscale, with a white background, and centered in a 150*150 pixels square. The response word was typed in Open Sans font, 26 pixels size. The experiment was programmed in JsPsych v.6.1.0 (de Leeuw, 2015).
Procedure
In each trial, the participants needed to encode and implement novel S-R mappings, while concurrently performing a finger-tapping task ( Figure 1A). Each trial started with a blank interval (1300 ms) and afterward, a black dot (from now onward, pacing signal) appeared rhythmically on the screen at 1.54 Hz. The pacing signal indicated the finger-tapping pace, and participants were instructed to tap every time it flashed on the screen. For each tap, the pacing signal was presented during 100 ms and followed by a blank screen lasting 550 ms (i.e. one tap was required every 650 ms). To entrain the rhythm, participants first tapped three times following the pacing signal before the mappings were shown. Then, the pair of S-R mappings also appeared on the screen, and participants had to memorize them while they kept performing the finger-tapping. The mappings were displayed for 5200 ms, and during this time, the pacing signal flashed eight times. When the encoding time was finishing (3250 ms after the mappings onset), the pacing signal was shown progressively bigger and reddish to warn the participants. Then, the mappings disappeared and a red dot (from now on, reset signal) flashed three times on the screen (1.54 Hz, 100 ms of signal followed by 550 ms of blank). This reset signal indicated the participant to tap with both index and middle fingers simultaneously. Reset taps were included to ensure that all fingers were used immediately before responding and to avoid potential response priming effects. Finally, a probe image was presented, and participants had 3000 ms to respond. In ~86% of the trials (regular trials), probes were either the left or the right picture from the mapping. In the remaining ~14% of trials (catch trials), a novel picture was shown, and participants should not respond. Catch trials ensured that both stimuli from the encoding screen were encoded. After the probe, a 500 ms ITI preceded the next trial. The different motor responses required by each trial event are depicted in Figure We manipulated the overlap between the finger-tapping task and the S-R mappings at the response level (Figure 2). To do so, we randomized within blocks the response required by the mappings, either with the index or the middle fingers. Then, we used three modalities of the finger-tapping task in separate blocks: finger-tapping with the index fingers, fingertapping with the middle fingers, and a control condition without finger-tapping. In the control block, participants were still required to perform the reset taps and to respond to probes (see Figure 2C). This way, we ensured that the only difference with the other conditions was the absence of finger-tapping during mapping encoding. By manipulating the responses required by the mappings and the finger-tapping task, we generated three response set overlap conditions: overlapping response sets (when the same effectors were involved in the mappings and the finger-tapping; Figure 2A), non-overlapping response sets (when different effectors were involved in the mappings and the finger-tapping; Figure 2B), and a control condition (when Palenciano et al. Journal of Cognition DOI: 10.5334/joc.190 no finger-tapping was performed during mapping encoding, Figure 2C). Mapping and probe category (animate, inanimate) and response laterality (left, right) were counterbalanced across these three experimental conditions. Participants completed three blocks, one per finger-tapping modality, of 56 trials each (48 regular trials, 8 catch ones). Information about the finger-tapping modality was provided at the beginning of each block and every 16 trials. At the end of each block, participants saw their mean accuracy rate and could take a pause. Overall, we collected 48 trials per response set overlap condition. All possible block orders were used, ensuring that a balanced number of participants were assigned to each order.
Before the main task, participants completed an extensive three-session practice protocol. Participants were first trained in the S-R mapping task alone, then in the finger-tapping alone, and in a final session, they practiced the two tasks combined. A minimum of 80% correct responses was required to continue with the experimental session. All the mappings used during the practice procedure were never employed during the experiment.
Data analysis
We excluded participants with missing data, or whose mean accuracy in the S-R mapping or the motor task (finger-tapping during the encoding period and/or the reset signal) fell below two standard deviations from the sample average. Thirteen participants were excluded from our sample and not further replaced. Within participants, we discarded trials with a reaction To illustrate the overlapping response set condition, we display a trial from a block in which the index fingers are used for the finger-tapping task. In this trial, an index finger response is also required by the novel S-R mappings, and hence, the response sets overlap between the two tasks. The bottom row shows the responses required by each trial event (finger-tapping task, reset taps, and probe response). B. To illustrate the non-overlapping response set condition, we display a trial from a block in which the middle fingers are used for the finger-tapping task. In this trial, an index finger response is required by the novel S-R mappings, and hence, the response sets do not overlap between the two tasks. C. In control blocks, no finger-tapping is required during mapping encoding. However, as it is depicted in the bottom row, participants also performed the reset taps and responded to probes. time (RT) below or above two standard deviations from their average, and those in which the motor task was not completed (less than seven taps during the encoding period, or less than one reset tap). Catch trials were not included. An average of 9% of trials was excluded per participant. We carried out a one-way repeated-measures ANOVA, with response set overlap (non-overlapping response sets, overlapping response sets, and control) as factor, to explore differences in trial exclusion between conditions. A marginally significant main effect of response set overlap was found, F(1.22, 105.10) = 3.473, p = .06, η p 2 = .04, driven by more excluded trials in the overlapping (M = 10%, SD = 9%) than in the control condition (M = 7%, SD = 12%), t(86) = 2.53, p = .04, Cohen's d = .27. No differences were found between the non-overlapping condition (M = 9%, SD = 10%) and the rest (non-overlapping vs. control: t(86) = 1.91, p = .12, Cohen's d = .21; non-overlapping vs. overlapping: t(86) = -0.62, p = .54, Cohen's d = .04).
Error rates and RT data were analyzed with separates repeated-measures ANOVAs using response set overlap (non-overlapping response sets, overlapping response sets, and control) as a within-subject factor. A Greenhouse-Geisser correction was used whenever sphericity was violated. Planned paired-sample t-tests were conducted to address the three pair-wise comparisons between conditions. Despite these comparisons were planned and preregistered, we followed a Bonferroni-Holm correction (Holm, 1979) to control for the multiple comparisons carried out.
Data from the finger-tapping task were analyzed to control for differences in motor performance between the overlapping and non-overlapping response set conditions. We focused on three variables: tapping accuracy (mean percentage of correct taps, computed within trials), tapping delay (averaged taps' reaction time, taking into account each tap latency regarding its corresponding pacing signal's onset, and computed within trials), and tapping variability (standard deviation of taps' reaction time, computed within trials). The three variables were extracted after filtering the data following the approach stated above, and thus, only trials in which the finger-tapping task was substantially performed (above 7 taps) were included. We compared these variables between non-overlapping and overlapping trials with paired-sample t-tests. All the analyses were performed using the software JASP (JASP Team, 2020).
For data visualization, 95% confidence intervals were computed after normalizing participants' data to exclude between-subjects variability (Cousineau, 2005).
RESULTS
On average, participants responded correctly to probes on 88% of the trials (SD = 9%), and the mean RT was 767 ms (SD = 272 ms). Participants correctly identified and did not respond to catch probes on an average of 94% of trials (SD = 7%). In the finger-tapping task, mean accuracy was 93% (SD = 7%), and 77% (SD = 11%) of reset taps were performed. Overall, participants understood and fulfilled both the S-R mapping and the finger-tapping demands.
DISCUSSION
In this experiment, we expected an impairment at mapping implementation due to the dual finger-tapping task, and that this effect would be sensitive to the mappings' relevant response sets. Confirming our first hypothesis, performance was generally better in the control condition. This effect was robust in error rates, while we could not confirm faster responses in the control than in the non-overlapping condition. This could reflect that in this particular dualtask scenario, where participants' main task was to assemble procedural representations from scratch, the general dual costs affected processes better captured by error rates. Nonetheless, it is worth noticing that in previous literature, dual-task costs have been typically reported in response speed (Pashler, 1994). Alternatively, this result could also relate to less reliable RTs in control trials (see Figure 3, error bars) due to the absence of finger-tapping, which could have entrained participants' responses in the other conditions. While no robust conclusions can be drawn in this regard, we can nonetheless confirm that the dual motor demands had a general impact on mapping implementation.
More importantly, we also confirmed our second hypothesis. Performance was slower and more prone to errors in the overlapping than in the non-overlapping condition. In this regard, we ruled out that the finger-tapping facilitated or primed the overlapping fingers' responses. The inclusion of reset taps before the probe, together with the exclusion of trials in which such reset taps were not performed, ensured that overlapping and non-overlapping fingers were equally primed before responding. Moreover, if response priming had obscured our results, we should have found opposite results. Hence, a priming-based account seems unlikely. Alternatively, a differential engagement in the finger-tapping task could have also contaminated our findings, if participants had performed the tapping to a lesser extent in the non-overlapping condition. To avoid that, we excluded trials in which the finger-tapping was substantially not performed. To control for more subtle differences, we also analyzed tapping accuracy and rhythmicity, not finding differences between overlapping conditions. Thus, an explanation based on fingertapping performance was ruled out.
Overall, this dataset showed that novel mapping implementation was affected by concurrent motor demands, especially when the response sets engaged by the mapping and the fingertapping overlapped. This pattern suggested that the proceduralization may entail anticipatory motor simulation, hindered in the overlapping condition. Nonetheless, other cognitive processes also engaged during instructions implementation, as the declarative encoding of the mappings (Brass et al., 2017), could be also the source of our findings. Hence, we carried out a second experiment to directly test that the overlap manipulation disrupted the proceduralization.
EXPERIMENT 2
To clarify the link between the response set overlap manipulation and the proceduralization, we next manipulated the necessity to prepare in advance the novel mappings. We assumed 8 Palenciano et al. Journal of Cognition DOI: 10.5334/joc.190 that the proceduralization would be stronger for mappings better prepared (Liefooghe et al., 2013;Meiran et al., 2015a). The preparation demands were varied by using different response deadlines. Under a more restrictive, early response deadline, participants needed to prepare the mappings to a higher degree to be able to respond to probes in a shorter time window. These preparation demands were lower in a more relaxed, late deadline condition. Similar approaches have been used in the past to manipulate task preparation during novel (Liefooghe et al., 2013) and practiced (Lien et al., 2005) mapping implementation.
We adapted our paradigm to include the two response deadlines. Since the previous experiment already provided evidence about the general motor demands cost, we excluded the control condition, and finger-tapping was performed in all blocks. First, we expected to replicate the response set overlap effect, i.e., more errors and slower responses in overlapping than nonoverlapping response set trials. Second, we predicted that the response set overlap would interact with the response deadline, with an increased impairment of overlapping response sets under the early response deadline -where higher preparation demands were imposed.
METHODS Participants
The online study was completed by 92 participants (36 females, 55 males, 1 non-binary individual). The mean age was 25.56 years old (SD = 9.00 years old). All participants received an economic compensation of £6 (£5 as a fixed rate and a £1 bonus offered for high performance, but that all participants received). The sample size was set to detect a small interaction effect (Cohen's d = 0.2) with 90% power in a repeated-measures ANOVA (see Data analysis section).
Material 260 pairs of S-R mappings were created per participant, using the same stimuli and procedure as in Experiment 1. We assigned 54 of the mappings to the practice sessions, and the remaining 208, to the experimental task.
Procedure
We used the paradigm from Experiment 1, with two modifications. First, the probe's maximum duration was set to 2000 ms in late response deadline blocks and adapted to each participant's performance in early response deadline blocks, using data from the initial practice procedure. Second, we provided feedback after each trial. The words "Correct!", "Wrong!" or "Too slow!" appeared 500 ms after the participants' response. Slow response feedback was used only during the early deadline blocks.
To compute the early response deadline, we focused on the third practice session, in which participants were trained with the S-R mapping task in combination with the finger-tapping (see Experiment 1 -Procedure). Participants completed 20-trial blocks until they achieved an 80% accuracy. The early deadline was adjusted to the mean RT from correct trials during the last practice block. The mean early deadline used was 867 ms (SD = 257 ms), ranging from 326 ms to 1418 ms.
The experimental task consisted of four blocks of 52 trials each (48 regular trials, 4 catch ones). Participants completed two blocks per response deadline, one using the index fingers for the motor task, and another using the middle fingers. The response required by the mappings (index, middle fingers), and as a consequence, the response set overlap (overlapping, nonoverlapping response sets) were randomized within blocks. We arranged blocks according to the response deadline, with participants completing first the two early deadline blocks and then the two late deadline ones, or vice-versa. We pseudorandomized block order regarding the finger-tapping modality. At the beginning of the block and every 13 trials, the motor task and deadline condition were presented on the screen. Overall, we collected 48 trials per experimental condition.
Data analysis
We used the same criterion as in Experiment 1 to exclude participants, discarding data from nine participants. Due to unequal RT distributions, trial trimming was performed independently for each response deadline condition, excluding trials with an RT two standard deviations above 9 Palenciano et al. Journal of Cognition DOI: 10.5334/joc.190 or below the condition's average. Trials in which the finger-tapping was not performed (less than seven taps during the encoding period, or less than one reset tap) were also discarded. An average of 8% (SD = 6%) trials were excluded per participant. We carried out a repeatedmeasures ANOVA, with response set overlap (non-overlapping, overlapping) and response deadline (early, late) as factors, to explore differences in trial exclusion across conditions. The main effect of response set overlap was significant, F(1,82) = 6.34, p = .014, η p 2 = .07, reflecting that more trials were rejected in the overlapping (M = 8%, SD = 6%) than in the non-overlapping condition (M = 7%, SD = 6%). Neither the main effect of response deadline, F(1,82) = 6.34, p = .014, η p 2 = .07 (early: M = 8%, SD = 8%; late: M = 8%, SD = 7%), nor the interaction were significant.
To address our main hypothesis, we ran repeated-measures ANOVAs on error rates and RT data using response set overlap (non-overlapping, overlapping) and response deadline (early, late) as within-subjects factors. Planed comparison included paired-sample t-tests contrasting nonoverlapping and overlapping trials separately for early and late response deadline blocks. In all further exploratory analyses, a Bonferroni-Holm correction for multiple comparisons (Holm, 1979) was used. Figure 4A and in Supplementary Table 2. The repeated-measures ANOVA on error data showed a significant main effect of response set overlap, F(1,82) = 16.57, p < .001, η p 2 = 0.17, and response deadline, F(1,82) = 25.31, p < .001, η p 2 = 0.24. Participants committed more errors in the overlapping than in the nonoverlapping condition. The error rates were also higher in the early response deadline than in the late one. However, the interaction term was not significant, F(1,82) = 0.03, p < .875, In RT data, we also found a significant main effect of response set overlap, F(1,82) = 4.68, p = .033, η p 2 = 0.05, with faster responses in the non-overlapping than in the overlapping condition. As expected, response deadline was also significant, F(1,82) = 75.69, p < .001, η p 2 = 0.48, with faster responses with the early response deadline than with the late one. Finally, we found a tendency toward a significant interaction, F(1,82) = 3.39, p = .069, η p 2 = 0.04. Following our preregistered analyses, we assessed the effect of response set overlap within each response deadline condition (Figure 4A, right panel). In late response deadline blocks, responses were faster in non-overlapping than in overlapping trials, t(82) = 2.33, p = .022, Cohen's d = 0.26. We did not find evidence supporting this effect in early response deadline blocks, t(82) = 0.38, p = .708, Cohen's d = 0.04, Further exploratory analyses addressed whether the order in which the response deadlines were experienced entailed a carry-over of the preparation strategy across blocks. To do so, we ran repeated-measures ANOVAs on errors and RT data with response set overlap and response deadline as within-subjects factors, and response deadlines' order (early-late, late-early) as between-subject factor. In error rates, neither the main effect of the response deadlines' order, Figure 4 Results from Experiment 2. A. Mean error rate (left) and RT (right) for non-overlapping and overlapping trials in the early and late deadline conditions. B. Averaged RTs from the first two blocks and including the response deadline condition as a between-subject factor. Asterisks indicate significant differences in the corresponding pairedsample t-test (p < .05). Error bars display 95% confidence intervals. F(1,81) = 2.45, p = .121, η p 2 = .03, nor its interaction with other terms were significant (response deadlines' order * response set overlap: F(1,81) = 3.17, p = .08, η p 2 = 0.04; response deadlines' order * response deadline:, F(1,81) = 0.71, p = .401, η p 2 = 0.01; three-way-interaction: F(1,81) = 0.64, p = .426, η p 2 = 0.01). The main effects of response set overlap, F(1,81) = 16.07, p < .001, η p 2 = 0.17, and response deadline, F(1,81) = 25.64, p < .001, η p 2 = 0.24, remained significant, indicating a stable pattern irrespectively of block order. In RT data, however, we found a significant interaction between the response deadline and the response deadlines' order, F(1,81) = 24.94, p < .001, η p 2 = 0.24, and also a significant three-way-interaction, F(1,81) = 5.35, p = .023, η p 2 = 0.06. Posthoc comparisons showed that for participants performing first the early response deadline, a trend towards faster responses in non-overlapping than overlapping trials was found in early deadline blocks, t(38) = 2.54, p = 0.015, p corrected = 0.06, but not in the late deadline ones, t(38) = 0.90, p = .377, p corrected = 0.396. Conversely, for participants performing first the late response deadline condition, a trend toward the effect of response overlap was found in the late deadline blocks, t(43) = 2.26, p = .029, p corrected = 0.087, but not in early deadline ones, t(43) = -1.31, p = .198, p corrected = 0.396).
Mean error rates and RTs across conditions are displayed in
Next, to avoid the potential carry over-effect, we focused on the first half of the experiment, in which only one of the response deadlines was used. The first two blocks' RT data were analyzed in an ANOVA with response set overlap as a within-subjects factor, and response deadline as a between-subjects factor. Mean RTs across conditions from the two first blocks are shown in Figure 4B. We found significant main effects of response set overlap, F(1,82) = 9.02, p = .004, η p 2 = 0.10, and response deadline, F(1,82) = 21.03, p < .001, η p 2 = 0.21. Nonetheless, the interaction term was not close to significance in this analysis, F(1,82) = 0.85, p = .360, η p 2 = 0.01.
Finally, we analyzed finger-tapping performance with separate repeated-measures ANOVAs for tapping accuracy, variability, and delay, using response set overlap and response deadline as within-subject factors. The three variables' mean and standard deviation across conditions are shown in Supplementary
DISCUSSION
Experiment 2 replicated the response set overlap effect found in the first dataset. When overlapping response sets were used, participants committed more errors and responded slower. However, the data did not support our prediction that the overlap effect was heightened under high-preparation demands. Error rates were equally affected by the response set overlap under the two response deadlines. RT data showed a tendency toward a significant interaction, which nonetheless went in the opposite direction, with a greater overlap effect in the late response deadline blocks.
Further exploratory analyses showed that RT data could reflect a carry-over of the preparatory strategy between response deadline conditions. First, when the response deadlines' order was included in the analysis, it modulated the effect of the response set overlap. Second, when only data from the first half of the experiment (when one response deadline was applied) was analyzed, the response set overlap affected equally both response deadline conditions. Thus, these results call for extra caution when interpreting the tendency toward an interactive pattern found in RT.
More critically, the response deadlines induced a speed-accuracy trade-off, with the early response deadline increasing the response speed, but at the cost of more errors. This result contrasts with the benefit in both accuracy and RT previously reported (Liefooghe et al., 2013), and could indicate that our manipulation was not optimal in inducing differential preparatory strategies. As a consequence, it is difficult to infer how the early response deadline affected the participants' performance. The error rate results and the exploratory RT analyses supported a response set overlap effect in this condition, suggesting that mappings were prepared following a similar simulation strategy as with the late response deadline. Nonetheless, the observed speed-accuracy trade-off could also reflect that the response time constraints induced a nonoptimal preparatory strategy or an impoverished probe processing. This ambiguous pattern could have been caused by the early response deadline used here, computed from a reduced amount of trials. More sophisticated, but also more time-consuming calibration procedures may have generated a response deadline better fitted to individual performance. Here, we aimed to find a compromise between the overall experiment duration and adapting the response deadline to our participants. However, this may have hindered our capacity to manipulate task preparation.
Overall, Experiment 2 successfully replicated the response set overlap effect. However, we did not confirm the hypothesized enhancement of this effect under stringent preparation demands. This may reflect that the finger-tapping task did not interfere with task preparation in general, nor with the proceduralization in particular. Nonetheless, taking into account the speed-accuracy trade-off induced by the deadline procedure, we believe that is more cautious to conclude that we did not succeed in inducing different preparatory strategies. In consequence, this dataset was inconclusive regarding the relationship between the overlap effect and proceduralization.
EXPERIMENT 3
To overcome the previous limitations, we carried out a third experiment using a more direct manipulation of task proceduralization: the novelty of the S-R mappings. Comparing novel and practiced instructions is a common manipulation in the instructed-behavior literature to isolate the proceduralization process (Brass et al., 2017;Cole et al., 2013). For novel S-R associations, the procedural task set must be quickly assembled from scratch. Practiced task sets, on the contrary, can be directly retrieved from long-term memory, bypassing the proceduralization (Meiran et al., 2012). Thus, while both novel and practiced tasks are prepared in advance, the processing chain differs, with new mappings additionally requiring their proceduralization. Based on this view, we included in our paradigm new and multiple-times-applied mappings. We predicted that the interference due to overlapping response sets would be greater for the novel mappings, supporting the link between motor simulation and novel task proceduralization.
METHODS Participants
Ninety-two participants (38 females, 53 males, 1 non-binary gender participant) completed the online experiment. The mean age was 28.58 years old (SD = 9.59 years old). Participants received an economic compensation of £6 (a £5 fixed rate, and a £1 bonus offered for high performance, but that all participants received). The sample size was set to detect a small interaction effect (Cohen's d = 0.2) with 90% power in repeated-measures ANOVAs (see Data analysis section).
Material
Ninety-six pairs of S-R mappings were created per participant. Mappings were composed of pictures of either two animate or two inanimate objects, associated either with an index or middle fingers response. Eight mappings were assigned to the practiced condition, and the remaining 88, to the novel one. The eight practiced mappings were split into two sets, one per motor task modality (index, middle finger-tapping). Each practiced mapping set was the result of crossing the two possible categories (animate, inanimate) and responses (index, middle fingers).
Procedure
We use the paradigm from Experiment 1 (Figure 1A), but now including both novel and practiced S-R mappings. All practiced mappings were learned during a previous practice protocol (see below).
The experiment consisted of four blocks, of 44 trials each (40 regular and 4 catch trials). Participants completed two blocks with novel mappings, and another two repeating the same set of learned mappings. Within each novelty condition, participants fulfilled one block per finger-tapping modality (index, middle fingers). In the practiced blocks, independent subsets of mappings were used for each finger-tapping modality. Within blocks, we randomized the response required by the mappings (index, middle fingers) and in consequence, the response set overlap condition (overlapping, non-overlapping response sets). Block order was arranged according to the novelty manipulation, with participants first fulfilling two novel blocks and then two practiced ones, or vice-versa. Block order was pseudorandomized regarding the finger-tapping modality. At the beginning of each block, and every 11 trials, participants read the mapping novelty and fingertapping conditions. Overall, we collected 40 trials per experimental condition. Palenciano et al. Journal of Cognition DOI: 10.5334/joc.190 The eight practiced mappings were learned during the initial practice protocol (see Experiment 1 -Procedure). In the first and the third session, participants repeatedly implemented the practiced S-R mappings, alone and combined with the finger-tapping task, respectively. Across these two sessions, each mapping was presented at least eight times.
We conducted repeated-measures ANOVAs with response set overlap (non-overlapping, overlapping response sets) and mapping novelty (novel, practiced) as factors, on error rates and RT data. Planned comparison included paired-sample t-tests to contrast between nonoverlapping and overlapping response set trials, separately for novel and practiced mappings.
Numerically, the effect of response set overlap on RT was greater for practiced instead of novel mappings (see Figure 5). To explore this finding, we analyzed performance during practiced Figure 5 Results from Experiment 3. Mean error rate (left) and RT (right) for non-overlapping and overlapping trials in the novel and the practiced mapping conditions. Asterisks indicate significant differences in the corresponding pairedsample t-test (p < .05). Error bars display 95% confidence intervals.
13 Palenciano et al. Journal of Cognition DOI: 10.5334/joc.190 blocks and assessed whether the overlap effect was sensitive to the experience accumulated during the time-on-task. Each practiced mapping was implemented ten times during the experiment. We split our data between the first five and the last five repetitions and ran a repeated-measures ANOVA with response set overlap and mapping repetition (first five, last five) as within-subject factors on RTs of practiced blocks. As expected, the main effect of response set overlap was significant, F(1,83) = 13.76, p < .001, η p 2 = .14. Mapping repetition was also significant, F(1,83) = 6.92, p = .010, η p 2 = .08, with response speed improving across the experiment (First half: M = 683 ms, SD = 238 ms; Last half: M = 667 ms, SD = 377 ms). The interaction was, however, non-significant, F(1,83) = 0.01, p < .940, η p 2 < .001.
DISCUSSION
In Experiment 3, we aimed to assess the impact of the overlapping motor demands on the proceduralization process, by comparing this effect across new and practiced S-R associations (e.g. Cole et al., 2018). We predicted a magnified overlap effect linked to novelty. The current dataset further replicated the previous experiments, showing an impoverished mapping performance in overlapping response set trials. The hypothesized interaction with task novelty was, however, not found. Finally, we explored whether the experience accumulated with the practiced mappings along the experiment modulated the overlapping effect. However, the overlap effect was constant over time.
Overall, we did not evidence a differential impact of response set overlap depending on novelty. This null result could be related to the practiced condition that we used, generated by repeating a set of mappings at least eight times before starting the experiment. Theoretically, S-R associations change their status after the first implementation, when the procedural task set can be traced into long-term memory (Meiran et al., 2015a). From this view, eight repetitions should suffice to differentiate novel and practiced associations. Having said that, previous empirical works employed more extensive practice procedures (e.g. Cole et al., 2018;González-García et al., 2017). This leaves open the possibility that our distinction between novel and practiced tasks was less salient due to lack of experience.
Nonetheless, it seems more plausible that task novelty was indeed unrelated to our overlap manipulation. This implies that the detrimental effect caused by the finger-tapping was mediated by a mechanism common to novel and practiced task settings. On one hand, it could be a preparatory-related process, as the maintenance of the procedural task set, or more downstream motor planning. On the other, our results could be caused by preparatoryunrelated mechanisms. To decide between these accounts, we ran a final experiment.
EXPERIMENT 4
Experiments 1-3 replicated the effect of response set overlap. However, when we attempted to better characterize its significance for novel task proceduralization, we obtained inconclusive evidence. This opens the possibility that instead of disrupting task preparation, the impact of our effect relies on pure motor processes. The intensive and repetitive finger-tapping could have generated a negative priming effect, with impoverished responses to probes with the effectors primed by the finger-tapping. Similarly, effectors' fatigue could operate in the same direction. Both accounts predict the detrimental effect of overlapping response sets. To test
Procedure
The paradigm used in the fourth experiment is displayed in Figure 1B. Trials started with the finger-tapping task, following the same timing parameters (tapping frequency and number of taps) as in Experiments 1-3, but without presenting any S-R mappings. After the three reset signal taps, participants saw a response cue indicating the required key press. The cue remained on the screen until the participants' response or up to a maximum of 3000 ms.
Participants completed four 24-trial blocks, two using the index fingers for the finger-tapping task, and two using the middle fingers. The relevant finger-tapping modality was indicated at the beginning and in the middle of each block. Within blocks, we randomized the response required by the cues (left index, right index, left middle, right middle finger), and in consequence, the response set overlap condition (overlapping, non-overlapping response sets). Block order was pseudorandomized. Overall, 48 trials were collected per experimental condition.
Participants completed three practice sessions: one with the response cues alone, another with the finger-tapping task, and a final session with the combined dual task.
Data analysis
We followed the same criteria as in Experiment 1 to exclude participants and trials. Data from ten participants were discarded. Within participants, an average of 6% of trials was excluded. The percentage of excluded trials was similar in the non-overlapping (M = 6%, SD = 5%) and overlapping conditions (M = 5%, SD = 5%), t(81) = 0.98, p = 0.329, Cohen's d = 0.11s.
We ran paired-sample t-tests contrasting error rates and RTs between non-overlapping and overlapping trials. Since one of our hypotheses predicted equivalent means between response overlap conditions, we also conducted two Bayesian paired-sample t-tests (Dienes, 2014). We interpreted BF 01 above three as moderate evidence for the null hypothesis (Jeffreys, 1939).
RESULTS
Mean error rates and RTs in the two response overlap conditions are displayed in Figure 6 and Supplementary
DISCUSSION
This fourth, control experiment aimed to confirm or discard alternative accounts of the findings obtained in Experiment 1-3, like fatigue or negative priming. Frequentists and Bayesian evidence supported the absence of a response overlap effect in the absence of task preparation. Consequently, these results support that the effect found in the previous experiments was mediated by preparatory mechanisms -and not task-unspecific motor processing.
In this dataset, we found more rhythmical tapping in the overlapping condition. Since the participants were unaware of each trial's overlapping condition (i.e., the cue was unpredictable), the interpretability of this result is uncertain. Nonetheless, a better, more rhythmical fingertapping in overlapping trials should have boosted the impact of this condition. Consequently, it is unlikely that the reported null results were associated with a disengagement from the motor task in overlapping trials.
In a series of online experiments, we manipulated the availability of motor representations during novel mapping preparation. We hypothesized an impairment on performance when the mappings' relevant response sets were already engaged by a dual finger-tapping task (Stevens, 2005). This prediction was robustly replicated across three datasets. Despite its ubiquity, we disentangled the response set overlap effect from general dual-task costs and purely motoric accounts. Critically, we additionally manipulated two proceduralization-related variables, task preparation (Liefooghe et al., 2013) and mapping novelty (Cole et al., 2018), to clarify whether we were tapping into this stage of novel instruction processing. Nonetheless, the overlap effect did not interact with these variables. Our task preparation manipulation led to an ambiguous trade-off effect on performance, questioning its validity. Nonetheless, task novelty successfully modulated behavior, with practice improving mapping implementation. The response set overlap effect was, however, insensitive to this variable. This null result suggests that while motor simulation may indeed be used during general task setting, its involvement is not specifically linked to novel task proceduralization, as we hypothesized.
Our null findings may relate to the abstract nature of novel tasks' procedural representations (Cole et al., 2013;Meiran et al., 2017). It has been recently shown that novel mapping implementation is disrupted by concurrent verbal demands (van't Wout et al., 2013;van't Wout & Jarrold, 2020) or increasing the declarative working memory load . These results stress the role of a verbal component during instructed performance, in line with the idea that more abstract new task sets could require from verbal rehearsal to proper maintenance (Cragg & Nation, 2010;Kompa & Mueller, 2020). More directly related, it has been shown that the instructions' procedural representations generalize across response modalities that are conceptually overlapping (Liefooghe et al., 2012). Taking all together, it may be the case that novel task proceduralization entails more high-level, abstract response representations than those engaged by our dual motor task. That would explain why the response set overlap effect was not specific for novelty.
Despite our results substantially deviated from our predictions, we still found robust evidence supporting that the covert activation of the relevant action representations is engaged by a preparatory mechanism, which generalizes across different cognitive contexts. Task preparation is a multidimensional process, acting at several hierarchical levels (De Baene & Brass, 2014). First, the procedural task representation must be activated. Nonetheless, this process differs depending on mapping novelty: new mappings require its assembly (via proceduralization; Brass et al., 2017) while for practiced ones, these representations can be retrieved from longterm memory (Mayr & Kliegl, 2000). Once the procedural representations are instantiated, they trigger a series of preparatory adjustments, biasing the processing across several downstream systems (Miller & Cohen, 2001;Sakai, 2008). Taking into account the nature of our manipulation, the preparation of the task's motor component is the most likely stage affected by the overlap manipulation. Previous literature stresses that skilled performance is associated with a readiness state across the motor systems, allowing automatic response activation upon targets (Hommel, 2000). Importantly, recent evidence with electroencephalography recordings supports a similar mechanism for novel tasks, showing that novel instructions also lead to automatic response activation during preparation (Everaert et al., 2014;Meiran et al., 2015b). Taking into account that the advanced response reconfiguration seems to be common to both practiced and novel tasks, and the more general role of motor simulation in motor planning (Grush, 2004), blocking simulation during mapping encoding may have hindered this mechanism. In this regard, multiple components of the mappings' actions were activated by our dual task: the effectors, the movement itself, and the movement's kinesthetic contingencies. Several proposals emphasize the role of the latter -also known as action effects -for the goal-oriented control of behavior (Grush, 2004;Hommel et al., 2001). Future research disentangling the contribution of the individual action's components would be of high relevance for current debates in the field.
Finally, our findings could also be relevant for theories of proactive cognitive control, a broader construct conveying task preparatory processes (Braver, 2012). Traditionally, proactive control is conceived as a domain-general function, exerting top-down influences in motor and perceptual interface systems (Miller & Cohen, 2001), implicitly assuming a serial, unidirectional processing chain (Norman & Shallice, 1986). While this theoretical view is parsimonious and straightforward, it does not incorporate the bidirectional and recursive influences between higher-level control systems and lower-level sensorimotor ones (Kilner et al., 2007;Summerfield & De Lange, 2014).
In line with this perspective, we showed that the availability of motor representations may be necessary for task preparation -suggesting a role for the motor system in proactive control. Hence, our results advocate for a more embodied, action-oriented perspective on control processes. This partially overlaps with previous literature in other high-level cognitive domains, like semantics and language, which has evidenced that the involvement of sensorimotor representations is critical during information processing (Barsalou, 2008;Meteyard et al., 2012).
Our data leaves open whether that view could also be extended to cognitive control processes.
In this regard, we consider that more abstract task or goal representations are required to flexibly orchestrate behavior. However, these task sets may be built upon or enriched with actionbased representation originated in the sensorimotor interface systems. Further behavioral and neuroimaging research would be key to shed some light upon this issue. Palenciano et al. Journal of Cognition DOI: 10.5334/joc.190
CONCLUSION
In the present work, we addressed whether novel instruction proceduralization was based on the motor simulation of the upcoming task. Four experiments suggest that optimal task preparation may rely on motor simulation, but in a more general fashion -and not strictly in novel scenarios. We propose that the advanced reconfiguration of the task motor component is the candidate preparatory mechanism which may require from simulation. While we could not extract further insights on the proceduralization process, we suggest that the abstraction level of novel task sets should be taken into account in future research. Finally, we integrate our findings within broader theoretical accounts, emphasizing the role of the motor system in proactive cognitive control.
DATA ACCESSIBILITY STATEMENT
Each experiment preregistration, experimental stimuli, data, and analysis code are available on the Open Science Framework (https://osf.io/qfshr/).
ADDITIONAL FILE
The additional file for this article can be found as follows:
ETHICS AND CONSENT
All the participants gave their informed consent according to the Declaration of Helsinki and following the General Ethics Protocol of the Faculty of Psychology and Educational Sciences of the Ghent University.
FUNDING INFORMATION
This research was supported by grant G00951N of the Flemish Government attributed to Baptist Liefooghe and Jan de Houwer. | 2021-10-19T15:25:46.609Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "726130332649c909b4239da84aa1bf432ea2e3b8",
"oa_license": "CCBY",
"oa_url": "http://www.journalofcognition.org/articles/10.5334/joc.190/galley/544/download/",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1065b87b4c1e833e76b7e42b0790f12ddc4db188",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118622858 | pes2o/s2orc | v3-fos-license | Quantum models of classical systems
Quantum statistical methods that are commonly used for the derivation of classical thermodynamic properties are extended to classical mechanical properties. The usual assumption that every real motion of a classical mechanical system is represented by a sharp trajectory is not testable and is replaced by a class of fuzzy models, the so-called maximum entropy (ME) packets. The fuzzier are the compared classical and quantum ME packets, the better seems to be the match between their dynamical trajectories. Classical and quantum models of a stiff rod will be constructed to illustrate the resulting unified quantum theory of thermodynamic and mechanical properties.
Introduction
There are some features of the classical world that seem to be incompatible with quantum mechanics: Realism Properties such as position and momentum can be ascribed to a chair, say, independently of whether they are observed or not.
Sharp Trajectories By a common interpretation of classical mechanics, the real chair is even at a sharp point of its phase space at each time. Attempts to model this property by a quantum state with minimum uncertainty leads to coherent states that are pure.
No Superpositions
The chair is never observed in a linear superposition of being, e.g., simultaneously in the kitchen as well as in the bedroom. However, pure states in quantum mechanics can be superposed in this manner.
Robustness Measurement of every classical observable can be done in such a way that the state of the observed system is arbitrarily weakly disturbed. However, pure quantum states are not disturbed only by measurements of very few very special observables.
Thus, attempts to solve the problem of Sharp Trajectories aggravate problems of Robustness and of No Superpositions. There is a vast literature about the problems. Let us list examples of the most popular ideas: macroscopic systems do not obey quantum mechanics [1]; quantum decoherence theory [2]; only coarse-grained operators represent classical measurements [3]; Coleman-Hepp theory [4]; dynamical collapse theory [5,6]. The list is incomplete.
Our theory is different. It rejects sharp trajectories and seeks quantum mechanical derivation of classical properties possessed by fuzzy mechanical states. The present paper is a short review of [7,8] as well as of some new results.
Hypothesis of high entropy states
To motivate our approach, let us briefly recapitulate some ideas of statistical thermodynamics. Consider rarefied equilibrium gas in a vessel. There is a classical model S c of this gas offered by phenomenological thermodynamics, called "ideal gas", and the properties of S c are examples of classical properties. They are described by thermodynamic quantities such as internal energy E, volume Ω, pressure, entropy, temperature, specific heats, etc.
To obtain the values of such quantities from quantum mechanics, we need a quantum model S q of the gas. As S q , we can choose a system of N spin-zero point particles, each with mass µ, in a deep potential well of volume Ω with Hamiltonian where p k is the momentum of k-th particle in the rest system of Ω. H is then the operator of the internal energy and the classical internal E energy is an average of H.
The most important assumption of the quantum model is the choice of state. It is the state that maximises the (von Neumann) entropy for fixed value E of the average of the internal energy. It is called "Gibbs state". All properties of the S c can then be calculated from S q as properties of the Gibbs states.
The main (heuristic) principle of our theory is a generalisation of this idea to all classical properties, including the mechanical ones. Thus, we state the following hypothesis: Assumption 1 Let a real system S has a classical model S c . Then, there is a quantum model S q of S such that all properties of S c are selected properties of some high-entropy states of S q .
An important reason for accepting this hypothesis is that it suggests ways in which all four problems mentioned in the introduction can be solved. Indeed, the Realism Problem could be approached as follows. Our theory of objective properties of quantum systems [9,10] justifies the assumption that quantum states are objective. If classical properties are properties of some states of the quantum model, they will also be objective. The No-Superposition Problem is based of some properties of pure quantum states. But high-entropy states are not pure: they cannot be superposed. As for the robustness problem, we can use the fact that very many quantum states correspond to a single classical state. Even if quantum states may be disturbed by observation, the corresponding classical states need not be. Finally, there is no Sharp-Trajectories Problem for thermodynamics. All these points are just suggestions and must be more carefully studied on some mathematically welldefined models.
Assumption 1 might work for thermodynamics, but what could be the highentropy states for Newtonian mechanics?
Classical ME packets
Let us consider the classical mechanical model S c defined as a system with a single degree of freedom and Hamiltonian The classical equations of motion arė and their solution is a sharp trajectory for every initial values q(0) and p(0). Let us choose the corresponding quantum model S q to be a system of one degree of freedom with position operator q, momentum operator p and spin 0. Let the Hamiltonian be The Heisenberg equations of motion arė Then the time dependence of position and momentum averages Q = q and P = p in a state |ψ isQ To evaluate the right-hand side of the second equation, let us expand the potential function in powers of q − Q: If we take the average of the last equation and use relations (q − Q) = 0 and (q − Q) 2 = ∆Q 2 , where ∆Q is the variance of q in state |ψ , we obtain Let us assume that coordinate q and momentum p of S c are obtained from the quantum model by formulas q = Q , p = P .
Then, already for potentials of the third order, the quantum equations of motion for averages deviate from classical equation of motion for sharp trajectories. This deviation would be negligible for small ∆Q, that is, the spread of the wave packet |ψ over the space must be as small as possible. However, if the variance ∆P is large, ∆Q will quickly increase with time. This implies that the minimum-uncertainty wave packets may give the best approximation to classical sharp trajectories. Let us stop here and ask: what is the reason for trying to get sharp trajectories from quantum mechanics? Clearly, it is the popularity of the specific form of classical realism mentioned in the Introduction: a real mechanical system possesses a sharp position and momentum at any instant of time. Let us call this assumption Sharp Trajectory Hypothesis (STH). There are many tacitly assumed consequences of STH, for example that a probability distributions on phase space is only an expression of insufficient knowledge of the real state.
However, there is no evidence supporting STH: indeed, as yet, any real observation of macroscopic bodies has been compatible with where "≫" represents many orders of magnitude. This is well known but there can be two attitudes to Eq. (5): 1. With improving techniques, the left-hand side of Eq. (5) will approach zero. This must be false if quantum mechanics holds true.
2. Sharp trajectory is just a handy model of a real, fuzzy, one. That is, it lies within a tube associated with the fuzzy trajectory. But then, a more realistic model of any Newtonian motion would be a probability distribution.
References [9,8] assume the second attitude. For us, the most important consequence is that it is sufficient to approximate fuzzy Newtonian trajectories by quantum mechanics, where fuzzy trajectories are some probability distributions on the phase space of the system. Such a theory of classical properties can do without pure states. Of course, this probability distribution is not completely knowable and measurable: in any case, the sharp points do not exist [11,12]. The fact that the points of the phase space do exist mathematically and must be used for mathematical description of a real state is only an unrealistic feature of Newtonian mechanics. Thus, instead of a sharp point of the phase space a distribution on the phase space can be considered as the real state of a mechanical system. It is determined by preparation similarly as in quantum theory. In this way, we preserve the realism (for more details on realism, see [10]) as such but change the form of it as expressed by STH.
This opens the problem to application of Bayesian methods, see, e.g., [13]. These methods recommend maximising entropy in the cases of missing knowledge. Let us define a fuzzy state called maximum-entropy packet (ME packet) as a phase-space distribution maximising entropy for given averages and variances of mechanical state coordinates.
More precisely, for the classical model S c , we consider the states described by distribution function ρ(q, p) on the phase space spanned by q and p. The function ρ(q, p) is dimensionless and normalized by dq dp where v is an auxiliary phase-space volume to make ρ dimensionless. The entropy of ρ(q, p) can be defined by The value of entropy will depend on v but most other results will not. Classical mechanics does not offer any idea of how to fix v. We shall get its value from quantum mechanics.
Definition 1 ME packet is the distribution function ρ that maximizes the entropy subject to the conditions: and p = P , where Q, P , ∆Q and ∆P are given values.
We have used the abbreviation x = dq dp v xρ .
The explicit form of ρ can be found using the Lagrange-multiplier and partitionfunction method [7]: The distribution function of the classical ME packet for a one-degreeof-freedom system with given averages and variances Q, ∆Q of coordinate and P , ∆P of momentum, is In this way, to describe the mechanical degrees of freedom, we need twice as many variables as the standard mechanics. The doubling of state coordinates is due to the necessity to define a fuzzy distribution rather than a sharp trajectory.
The model can be generalised to any number of degrees of freedom. Also, the ME packet could be defined by different pairs of conjugate variables. It seems plausible that our main results would then remain valid. where Q, P , ∆Q and ∆P are given numbers, is called quantum ME packet.
Quantum ME packets
The following theorem can be proved by the method of Lagrange multipliers and partition function, but the proof is non-trivial because of non-commuting factors [7]: The state operator of the ME packet of a one-degree-of-freedom system with given averages and variances Q, P , ∆Q and ∆P is where Generalisation to any number of degrees of freedom is easy. It is amusing to observe how the forms of Eqs. (8) and (9) approach each other in the limit ∆P ∆Q → ∞. Indeed, The entropy of state (9) can be shown [7] to be an increasing function of ν ∈ (1, ∞) diverging for ν → ∞. For ν = 1 (minimum quantum uncertainty), T is a pure state with wave function This is just a Gaussian wave packet and the entropy is zero. Thus, quantum ME packets are generalization of Gaussian wave packets.
Comparing classical and quantum evolutions
Let us consider the time evolution of the averages and variances for ME packet (8) with initial data Q, P, ∆Q, ∆P at t = 0 and let us define the classical trajectory of the classical model S c by the quadruple Q c (t), P c (t), ∆Q c (t), ∆P c (t). Let Q q (t), P q (t), ∆Q q (t), ∆P q (t) be an analogous trajectory for the quantum model S q starting in state (9). Each of the two trajectories is described by four real functions so that they can be compared. The form of the Heisenberg equations of motion and of the ME packets motivate the following Conjecture.
ν-Conjecture Let X(t) be one of the averages and variances that define the trajectories. Then, where for large ∆Q and ∆P and t ∈ (0, ∞), where B is some real number.
ν-Conjecture implies: The fuzzier the compared ME packets are, the better their dynamical evolutions match each other. In other words, the classical limit for mechanical degrees of freedom is This seems to contradict the usual belief that the classical physics is best approximated by minimum-uncertainty (ν = 1) quantum states. But the explanation of this paradox is simple. The two answers to the question which state best approximates classical physics are different because the questions asked are, in fact, different: the first one compares two fuzzy states, the second one compares a quantum state with a sharp classical trajectory.
Statements that are weaker than ν-Conjecture have been proved. For example, let the potential be a polynomial of the n-th order. Then, the k-th time derivatives at the initial time of the classical coordinates and momenta, , calculated with the help of Eqs. (2) are polynomials in q and p. Similarly for the quantum case, , calculated with the help of Eqs. (4) are polynomials in q and p. The methods of calculating averages of such polynomials in ME-packet states shown in [7] help to calculate the averages of these polynomials. Next, an average of k-th time derivative of any (Heisenberg) quantity in an initial state is k-th time derivative of the average of the quantity in the state, e.g., The results are polynomials in Q, P , ∆Q and ∆P . In the quantum case, they can also depend on . If we substitute 2∆Q∆Q/ν for each , we obtain well-defined polynomials in Q, P , ∆Q, ∆P and ν −1 . Such calculations have been done in [7]. They have proved ν-Conjecture for n = 4 and for time derivatives of the order k = 1, 2, 3, 4. That is, the ν-Conjecture for instead of X(t) in Eq. (11). More precisely, the calculations show that all quantum corrections vanish for n = 4 and k = 1, . . . , 4. In this cases, ν-Conjecture holds in a trivial way. The only non-zero quantum correction that has been found as yet is in the following expression that appears for n = 4 and k = 5 (some errors in [7] have been corrected): As the first term is also contained in the classical version, this quantum correction cannot violate ν-Conjecture. No further corrections have been looked for because the calculations get very complicated with growing n and k.
In addition to the results of [7], it has been shown that the variances satisfy the relation for any analytic potential. Thus, the product ∆Q∆P tends, at least at the beginning, to stay constant. Moreover, all time derivatives of ∆Q at t = 0 are proportional to µ −1 . This means that the spread of heavy macroscopic ME packets is rather slow.
Mechanical and thermostatic properties unified
We have tried to prove that classical mechanical properties of an object can be obtained from its quantum model as properties of high-entropy quantum states. However, this also holds for classical thermostatic properties, such as internal energy, temperature, entropy, specific heats etc., which suggests that the quantum theory of classical properties can be based on a single principle. In the present section, we try to show in more detail how such "unified" theory could look like.
We use a very simple model so that its quantum equations are exactly solvable and we can concentrate on conceptual questions. As a real object S, consider a thin stiff rod of mass M and length L extended and moving freely in one space dimension. Let its classical model S c be a one-dimensional continuum. Its (classical) state is determined by the values of 5 quantities: internal energy E int , average X and variance ∆X of its centre-of-mass coordinate as well as average P and variance ∆P of its total momentum. Let the quantum model S q be a chain of N + 1 particles, each of mass µ. We denote the position operator of the n-th particle by x n and that of its momentum by p n , n = 1, . . . N + 1. Let the Hamiltonian of the quantum model be The potential represents nearest-neighbour elastic forces, κ being the oscillator strength and ξ the equilibrium inter-particle distance. The algebra of observables of S q is generated by x n and p n , a set of 2N + 2 operators. We assume that N ≈ 10 23 . This implies that the quantum state contains much more information than the classical one. A linear (in fact, Fourier) transformation of variables x n and p n to normal modes u n and q n diagonalizes the Hamiltonian [9,8]. Moreover, it becomes the sum of the total momentum part and the internal energy part E int (see [9,8]): where M = (N + 1)µ is the total mass of the chain, P its total momentum, and ω m , m = 1, . . . , N are "phonon" frequencies: The mechanical evolution thus decouples from the thermodynamics. As the state of S q , the tensor product T therm ⊗ T mech can, therefore, be chosen. Next, we apply the unifying principle: T therm is the maximum entropy quantum state for a given value E of the averages of E int and T mech is the maximum entropy state for given averages of X and P and variances ∆X and ∆P. It then follows that T therm is the Gibbs state and T mech is an ME packet.
The phonons of species m form statistically independent subsystems. Hence, the Gibbs state factorizes and the factors are Gibbs states T m of the species: where r is the number of phonons of species m, and λ is the Lagrange multiplier of the variational problem for the conditional maximum of entropy. The variational principle couples the value of λ with the value of the energy average E. As λ is interpreted as 1/kT , k being the Boltzmann constant and T the temperature, the relation between internal energy and temperature results. The average number r of photons in state T m is the Bose distribution. All properties of the classical model (such as the temperature and length of the rod, its dynamical trajectory etc.) have been obtained, in a good approximation, from the quantum one ( [9,8]). For example the length L of the rod is the average of a natural rod-length operator x N +1 − x 1 . The calculation in [9,8] yields This is independent of the parameter E of the Gibbs state. Hence, the model describes a rigid rod. The relative variances of the internal energy and length are indirectly proportional to N.
Conclusion and outlook
Our results suggest that there is a unified theory for both thermostatic and mechanical properties. It is based on the assumption that the states of quantum system that exhibit classical properties are some states with high entropy.
The fuzzier are the compared mechanical states, the better is the match between classical and quantum mechanical trajectories, if the ν-Conjecture holds true. This would confirm the feeling that quantum mechanics is more accurate and finer than Newtonian mechanics. We hope to be able to prove the full ν-Conjecture later.
The paper suggests promising ideas of how all four conceptual problems can be solved. More detailed models that would describe such solutions for some simple cases ought to be constructed.
The project is in its beginnings. Only extremely simple models have been studied. Also, a generalization of the idea to classical electro-and magnetostatic properties, as well as a generalisation to the relativistic classical electrodynamics is missing as yet. | 2015-08-06T11:22:30.000Z | 2014-12-12T00:00:00.000 | {
"year": 2014,
"sha1": "f412860ce0c06f97bb86e6e8e3c245240fa32d99",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/626/1/012036",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "f412860ce0c06f97bb86e6e8e3c245240fa32d99",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
234100801 | pes2o/s2orc | v3-fos-license | Mechanical and Thermal Properties of Hybrid Fibre-Reinforced Concrete Exposed to Recurrent High Temperature and Aviation Oil
Over the years, leaked fluids from aircraft have caused severe deterioration of airfield pavement. The combined effect of hot exhaust from the auxiliary power unit of military aircraft and spilt aviation oils have caused rapid pavement spalling. If the disintegrated concreted pieces caused by spalling are sucked into the jet engine, they may cause catastrophic damage to the aircraft engine or physical injury to maintenance crews. This study investigates the effectiveness of incorporating hybrid fibres into ordinary concrete to improve the residual mechanical and thermal properties to prevent spalling damage of pavement. Three fibre-reinforced concrete samples were made with micro steel fibre and polyvinyl alcohol fibre with a fibre content of zero, 0.3%, 0.5% and 0.7% by volume fraction. These samples were exposed to recurring high temperatures and aviation oils. Tests were conducted to measure the effects of repeated exposure on the concrete’s mechanical, thermal and chemical characteristics. The results showed that polyvinyl alcohol fibre-, steel fibre- and hybrid fibre-reinforced concrete suffered a 52%, 40% and 26.23% of loss of initial the compressive strength after 60 cycles of exposure to the conditions. Moreover, due to the hybridisation of concrete, flexural strength and thermal conductivity was increased by 47% and 22%. Thus, hybrid fibre-reinforced concrete performed better in retaining higher residual properties and exhibited no spalling of concrete.
Introduction
Concrete is a versatile building material of the 21st century. It is a durable material with excellent mechanical and thermal properties and has broad structural application. However, it has a significant drawback, being a quasi-brittle material, and an increase in the strength also increase its brittleness [1]. Moreover, ordinary concrete possesses lower tensile strength and propagates cracks easily when subjected to severe loading conditions [1][2][3]. Modern construction works demand concrete with a combination of qualities such as higher strength, excellent thermal properties, higher durability, and toughness [1,4,5]. The recent improvement in modern concrete, including high strength concrete (HSC), and fibre-reinforced concrete (FRC), offer some outstanding properties [4,5].
The production of HSC concrete requires the use of supplementary cementitious materials, such as ground granulated blast furnace slag, silica fume and fly ash as a A few earlier studies also reported the causes of the spalling damage of airfield pavements exposed to repeated high temperature and HC fluids. However, these studies focused on causes like vapour pressure, residual mechanical and strength degradation for the chemical reactions. Currently, there are no research data on spalling damage, residual mechanical and thermal properties of FRC under similar conditions. Such data play a significant role in selecting fibre type, volume fraction, fibre matrix and geometry while designing for airfield pavements. To fill this knowledge gap, this study investigates the effectiveness of incorporating hybrid fibres into ordinary concrete to improve the residual mechanical and thermal properties to prevent spalling damage. In this investigation, the different combination of micro steel fibre (SF), polyvinyl alcohol fibre (PF) and hybridisation of PF and SF are produced to examine the effect of two fibres in FRC in comparison with ordinary concrete. Moreover, to study the effect of hybridisation of concrete using PF and SF, three mixed fibre-reinforced concrete are produced with a fibre content of 0.3%, 0.5% and 0.7% from each type. All fibre-reinforced samples are compared with the control specimen that contains no fibre. The residual compressive strength, flexural strength, mass loss and spalling effect after repeated exposure to high temperature and HC fluids are evaluated. This study also explored the thermal performance of FRC and its effect on spalling due to repeated exposure to HC fluids and high temperature. A few earlier studies also reported the causes of the spalling damage of airfield pavements exposed to repeated high temperature and HC fluids. However, these studies focused on causes like vapour pressure, residual mechanical and strength degradation for the chemical reactions. Currently, there are no research data on spalling damage, residual mechanical and thermal properties of FRC under similar conditions. Such data play a significant role in selecting fibre type, volume fraction, fibre matrix and geometry while designing for airfield pavements. To fill this knowledge gap, this study investigates the effectiveness of incorporating hybrid fibres into ordinary concrete to improve the residual mechanical and thermal properties to prevent spalling damage. In this investigation, the different combination of micro steel fibre (SF), polyvinyl alcohol fibre (PF) and hybridisation of PF and SF are produced to examine the effect of two fibres in FRC in comparison with ordinary concrete. Moreover, to study the effect of hybridisation of concrete using PF and SF, three mixed fibre-reinforced concrete are produced with a fibre content of 0.3%, 0.5% and 0.7% from each type. All fibre-reinforced samples are compared with the control specimen that contains no fibre. The residual compressive strength, flexural strength, mass loss and spalling effect after repeated exposure to high temperature and HC fluids are evaluated. This study also explored the thermal performance of FRC and its effect on spalling due to repeated exposure to HC fluids and high temperature.
Materials and Mixing Procedure
Fresh M70 grade concrete samples were prepared using Boral general-purpose (AS 3972) OPC with a water-cement (w/c) ratio of 0.42. Tables 1 and 2 respectively describe the physical and basic properties of the OPC at ambient temperature. Maximum 10 mm-sized basalt coarse aggregate (CA) and local river sand (FA) with fineness modulus of 2.6 were used. The water absorption and specific gravity of the CA and FA were 0.34% and 2.66, and 0.72% and 2.62, respectively. Figure 2a shows the grading curves of the aggregates used in this experiment. ADVA 650, a synthetic carboxylated polymer, was used as a superplasticiser with variable dosages to maintain the slump around 120 mm. Straight-end 8-mm long SF and 12-mm long PF fibres were used. Figure 3 and Table 3 respectively show the geometry and properties of used fibres in this experiment. Cylinder specimens of 100 mm × 200 mm were cast to determine mass loss and spalling tests. Furthermore, 100 mm × 100 mm × 350 mm size beams were cast for the flexural test. Cube specimens of 50 mm × 50 mm × 50 mm were used to measure the residual compressive strength. When samples are exposed to HC fluids, it penetrates the concrete from all-direction, leaving a thin core intact in the cube samples, thus depicting the critical condition corresponding to residual compressive strength. Samples curing continued for four weeks in a fog room with relative humidity (RH) > 90% and 23 ± 1 • C. After 28 days of curing, all mechanical and thermal properties were tested. Then some samples were exposed to repeated high temperature only, and other samples were exposed to the combined effect of high temperature and HC fluids. The thermal properties of specimens were measured each after 20 cycles of exposures. In all tests, an average of the data of three samples was recorded. The mix design for different FRC specimens is given in Table 4. This table also indicates the percentage of superplasticiser used to mix the cementitious materials' total weight. and thermal properties were tested. Then some samples were exposed to repeated high temperature only, and other samples were exposed to the combined effect of high temperature and HC fluids. The thermal properties of specimens were measured each after 20 cycles of exposures. In all tests, an average of the data of three samples was recorded. The mix design for different FRC specimens is given in Table 4. This table also indicates the percentage of superplasticiser used to mix the cementitious materials' total weight. and thermal properties were tested. Then some samples were exposed to repeated hi temperature only, and other samples were exposed to the combined effect of high te perature and HC fluids. The thermal properties of specimens were measured each af 20 cycles of exposures. In all tests, an average of the data of three samples was record The mix design for different FRC specimens is given in Table 4. This table also indica the percentage of superplasticiser used to mix the cementitious materials' total weight (a) (b)
Exposure to Recurring High Temperature and HC Fluids
After 28 days of curing, all samples were exposed to two recurring thermal conditions: High-temperature cycles without oil exposure and combined actions of HC fluids and high temperature. According to McVay et al. [33], airfields are usually exposed to different aviation oils such as engine oil, hydraulic oil and aviation fuel. In this experiment, AeroShell Turbine Oil 500, AeroShell Fluid 31 and jet fuel (F-34 kerosene grade) were mixed in equal parts and sprayed on the specimens before each exposure to high temperatures. HC fluidsoaked samples were kept in a high-temperature electric oven for 15 min at 175 • C. The heating rate, duration of a thermal cycle and cooling procedures are presented in Figure 2b. Along with the inbuilt oven thermocouple, an external thermocouple was also used to monitor the sample surface temperature. After 15 min of heat exposure, the hot samples were cooled down by air and, once in a week, were cooled down by spraying water to simulate rainfall effects on airfield pavements. Recurrence of oil spray, heating and the cooling cycle continued until spalling occurred. Three samples of each type were used to determine different mechanical and thermal properties after every 20 cycles of exposure. The notation E0, E20, E40 and E60 denote the number of exposure cycles mentioned in individual cases.
Mechanical Properties
The samples' initial and residual compressive strength was measured as per AS 1012.9-1999 [34]. Samples weights and densities were measured in saturated surface dry condition. Then they were placed in the furnace at 105 • C until they reached a constant dry weight. A universal hydraulic testing machine with a loading rate of 0.3 MPa/s was used for the compressive strength test.
Four-Point Bending Test
Beam samples were assessed for flexural strength, according to AS 1012.11. Control's (manufacturer) flexural testing machine of 150 KN capacity with a deflection rate of 0.3 mm/min was used to measure the concrete beams' flexural strength. Figure 4a,b represents the test setup and the actual specimen test. Two transducers were attached with brackets to the beam specimen to measure the mid-span deflection, taken as the two readings' average. Two metallic cylinders positioned at 100 mm from the supports (1/3 distance of the span) were used to apply the load equally. Two identical metallic cylinders spaced at 300 mm were used to support the specimen. presented in Figure 2b. Along with the inbuilt oven thermocouple, an external thermocouple was also used to monitor the sample surface temperature. After 15 min of heat exposure, the hot samples were cooled down by air and, once in a week, were cooled down by spraying water to simulate rainfall effects on airfield pavements. Recurrence of oil spray, heating and the cooling cycle continued until spalling occurred. Three samples of each type were used to determine different mechanical and thermal properties after every 20 cycles of exposure. The notation E0, E20, E40 and E60 denote the number of exposure cycles mentioned in individual cases.
Mechanical Properties
The samples' initial and residual compressive strength was measured as per AS 1012.9-1999 [34]. Samples weights and densities were measured in saturated surface dry condition. Then they were placed in the furnace at 105 °C until they reached a constant dry weight. A universal hydraulic testing machine with a loading rate of 0.3 MPa/s was used for the compressive strength test.
Four-Point Bending Test
Beam samples were assessed for flexural strength, according to AS 1012.11. Control's (manufacturer) flexural testing machine of 150 KN capacity with a deflection rate of 0.3 mm/min was used to measure the concrete beams' flexural strength. Figure 4a,b represents the test setup and the actual specimen test. Two transducers were attached with brackets to the beam specimen to measure the mid-span deflection, taken as the two readings' average. Two metallic cylinders positioned at 100 mm from the supports (1/3 distance of the span) were used to apply the load equally. Two identical metallic cylinders spaced at 300 mm were used to support the specimen.
Measurement of Thermal Properties
The specimens were assessed for thermal properties like specific heat and thermal conductivity each after 20 exposures. As per ASTM C518 [35], the Netzsch (manufacturer, Selbu, Germany) Heat Flow Machine (HFM) 446 Lambda machine was used to test 150 mm × 150 mm × 25 mm samples for thermal conductivity and specific heat. A temperature gradient difference of 20 °C between two plates was used while measuring the thermal properties. Two heat-flow sensors fixed in the plates were used to measure the heat flow into the material and out of the material, respectively.
Measurement of Thermal Properties
The specimens were assessed for thermal properties like specific heat and thermal conductivity each after 20 exposures. As per ASTM C518 [35], the Netzsch (manufacturer, Selbu, Germany) Heat Flow Machine (HFM) 446 Lambda machine was used to test 150 mm × 150 mm × 25 mm samples for thermal conductivity and specific heat. A temperature gradient difference of 20 • C between two plates was used while measuring the thermal properties. Two heat-flow sensors fixed in the plates were used to measure the heat flow into the material and out of the material, respectively.
X-ray Diffraction (XRD) Analysis
The XRD test was used as a non-destructive method to determine crystal phases and mineral compounds present in the specimens [36]. The samples' crystalline phases were identified between 10 to 70 (2θ), applying a scanning speed of 0.2 degrees/min. The selected specimen was crushed using a ball mills machine, then sieved through a screen aperture of 75 µm. HC fluid exposed samples were oven-dried for 24 h before grinding them into powder. XRD instrument Rigaku Miniflex 600, operated at 40 kV and 45 mA, was used to record the crystal patterns using CuKα (λ = 0.15400 nm) radiation [37,38].
Thermogravimetric (TG) Analysis
After 20 cycles of high-temperature exposures, the differential scanning calorimetry (DSC) and TG tests for the specimens were conducted using NETZSCH (manufacturer) STA 449C Jupiter, Selbu, Germany. The DSC and TG spectra were collected in an inert nitrogen environment with a heating rate of 10 • C/min from 20 to 800 • C.
Microstructure Investigation
The Zeiss Axio Imager (Zeiss, Oberkochen, Germany) optical microscope was used to analyse fibre reinforced samples' microstructures at the original condition and after 60 cycles of high-temperature and HC fluid exposures. This machine analyses the microcrack and voids development in the samples due to high temperature and HC fluids' simultaneous effect. For microstructural analysis, samples were collected from the top (up to 20 mm) surface of the specimens. The samples' microstructure is also analysed by combining two main techniques: Direct observation with an optical microscope and scanning electron microscope (SEM). This combination method allows a better understanding of links between microstructure, composition and engineering behaviour. Various factors govern the behaviour of samples exposed to higher temperature and HC fluids. Figure 5 shows the initial compressive strength for different fibre-reinforced concrete. Figure 6 shows the residual compressive strength and % loss of compressive strength of samples exposed to high temperature only and the combined effect of high temperature and HC fluids. For repeated exposure up to 175 • C only, loss of significant compressive strength was recorded, as seen in Figure 6a,b. After subjecting specimens to a recurring high temperature for 60 cycles, the control specimens lost 32.65% of their initial compressive strength. For 0.3% SF and 0.5% SF reinforced specimens, compressive strength loss was 26.50% and 31.06%, respectively. For PF reinforced concrete, the loss was more significant. For 0.3%, 0.5% and 0.7% PF reinforced concrete, the compressive strength loss was 26.15%, 42.65% and 44.24%, respectively. The addition of extra PF would create more porosity inside the concrete matrix after melting, resulting in more deterioration of its residual properties. Whereas for SF, no melting of SF occurs at 175 • C, thus no extra porosity is created that reduces concrete strength significantly. Similarly, for HB specimens with 0.3%, 0.5% and 0.7% HB fibre reinforcement, the compressive strength loss was 27.42%, 25.40% and 22.05%, respectively. For repeated exposure to the high temperature and HC fluids' combined effect, residual compressive strength loss was more than only heat-exposed samples, as seen in Figure 6c For repeated exposure to the high temperature and HC fluids' combined effect, residual compressive strength loss was more than only heat-exposed samples, as seen in Figure 6c Figure 6. Residual compressive strength. (a) Heat exposed only samples, (b) % loss of compressive strength of heat exposed samples only, (c) heat and HC fluid exposed samples, (d) % loss of residual compressive strength of heat and HC exposed samples.
Results and Discussion
For repeated exposure to the high temperature and HC fluids' combined effect, residual compressive strength loss was more than only heat-exposed samples, as seen in Figure 6c,d. Penetration of aviation oil after 60 cycles is identified as pink and dark colours in the crashed cube specimen, as shown in Figure 7b. After 60 cycles of combined exposures, the control specimens lost 35.91% of their initial compressive strength. For 0.3% and 0.5% SF reinforced specimens, compressive strength loss was 33.33% and 39.77%, respectively. For PF reinforced concrete, the loss was more significant. For 0.3%, 0.5% and 0.7% PF specimens, the compressive strength loss was 38.46%, 47.45% and 51.84%, respectively. After repeated exposure to high temperature and HC fluids, heat caused PF fibres to melt. Those melted channels allow more HC fluids to penetrate the concrete, which causes significant strength reduction. Similarly, for HB specimens with 0.3%, 0.5% and 0.7% HB fibre reinforcement, the compressive strength loss was 35.55%, 31.46% and 26.23%, respectively. In general, high-temperature and HC fluids exposed samples suffer more compressive strength loss due to chemical reaction between the cement and aviation oils to form harmful salts. S.k. Shill et al. [39] also reported similar strength loss due to the chemical reaction between cement and ester, the fatty acid of HC fluids. However, under similar circumstances, HB fibre reinforcement concrete suffered comparatively lower strength loss. In general, high-temperature and HC fluids exposed samples suffer more compressive strength loss due to chemical reaction between the cement and aviation oils to form harmful salts. S.k. Shill et al. [39] also reported similar strength loss due to the chemical reaction between cement and ester, the fatty acid of HC fluids. However, under similar circumstances, HB fibre reinforcement concrete suffered comparatively lower strength loss. In summary, PF fibres in HB concrete created additional microchannels allowing the release of vapour pressure created by exposure to HC fluids. The inclusion of SF also increases the tensile capacity [38]; thus, this matrix helped to retain more residual compressive strength than the control specimens.
Mass Loss
The mass loss of specimens after recurring exposure to high temperature and HC fluids is shown in Figure 7a. The specimens lost their initial free water entirely due to recurrent high temperature within the first few cycles. Within the first twenty cycles of heating, the mass loss rate was higher due to the evaporation of initial free water, which escaped outside in a vapour state. After 20 cycles of HC fluids and high-temperature exposures, the mass loss slope decreases gradually, which implies that the concrete matrix lost its initial free water within the initial cycles. However, the maximum fibre Vf = 0.7% used in this study was low enough to cause any significant mass loss due to fibres' presence. Specimens containing 0.7% PF fibres and 0.7% SF suffered a mass loss of 12% and 7.7%, respectively. At the same time, HB samples lost 8.4% of the mass after 60 cycles. However, Lee et al. [32] suggested that below 200 °C, loss of cementitious component is negligible. Nevertheless, a few other studies [40,41] also strongly support that when concrete is subjected to chemical and high temperature, the mass loss could be due to the decomposition of mineral compounds in concrete besides the loss of initial water.
In summary, initial mass loss occurs due to the evaporation of initial free water for exposures to high temperature and HC fluids. Subsequent mass loss occurs due to the evaporation of chemically bonded water, decomposition of C-S-H gel and calcium hydroxide, which is also identified in the XRD and TG analysis paragraphs of this paper. In summary, PF fibres in HB concrete created additional microchannels allowing the release of vapour pressure created by exposure to HC fluids. The inclusion of SF also increases the tensile capacity [38]; thus, this matrix helped to retain more residual compressive strength than the control specimens.
Mass Loss
The mass loss of specimens after recurring exposure to high temperature and HC fluids is shown in Figure 7a. The specimens lost their initial free water entirely due to recurrent high temperature within the first few cycles. Within the first twenty cycles of heating, the mass loss rate was higher due to the evaporation of initial free water, which escaped outside in a vapour state. After 20 cycles of HC fluids and high-temperature exposures, the mass loss slope decreases gradually, which implies that the concrete matrix lost its initial free water within the initial cycles. However, the maximum fibre V f = 0.7% used in this study was low enough to cause any significant mass loss due to fibres' presence. Specimens containing 0.7% PF fibres and 0.7% SF suffered a mass loss of 12% and 7.7%, respectively. At the same time, HB samples lost 8.4% of the mass after 60 cycles. However, Lee et al. [32] suggested that below 200 • C, loss of cementitious component is negligible. Nevertheless, a few other studies [40,41] also strongly support that when concrete is subjected to chemical and high temperature, the mass loss could be due to the decomposition of mineral compounds in concrete besides the loss of initial water.
In summary, initial mass loss occurs due to the evaporation of initial free water for exposures to high temperature and HC fluids. Subsequent mass loss occurs due to the evaporation of chemically bonded water, decomposition of C-S-H gel and calcium hydroxide, which is also identified in the XRD and TG analysis paragraphs of this paper.
Flexural Strength
Control and FRC samples were evaluated for flexural strength after E0 and E60 cycles of high temperature only and combined high temperature and HC fluid exposures. The average of three sets of data was used to calculate the flexural strength. The midspan deflection against the applied load under four-point bending is shown in Figure 8 for control, PF, SF and HB specimens. The flexural strength of the control specimen at E0 was 5.08 MPa. The inclusion of fibres in the concrete matrix increased the flexural strength and increases in the V f further improved the flexural load capacity of concrete. For samples without any heat and HC fluids exposures (E0), the inclusion of 0.3%, 0.5% and 0.7% PF fibre led to the flexural strength increasing by 4.5%, 15.89% and 21.82%, respectively. With the inclusion of 0.3%, 0.5% and 0.7% SF fibre, the flexural strength increased by 15.5%, 17.89% and 22.82%, respectively. Similarly, with the inclusion of 0.3%, 0.5% and 0.7% HB fibre, the flexural strength increased by 13.13%, 17.89% and 32.29%, respectively. The flexural strength and descending branch after peak load were significantly improved in concrete reinforced with SF and HB compared to PF fibre.
Flexural Strength
Control and FRC samples were evaluated for flexural strength after E0 and E60 cycles of high temperature only and combined high temperature and HC fluid exposures. The average of three sets of data was used to calculate the flexural strength. The midspan deflection against the applied load under four-point bending is shown in Figure 8 for control, PF, SF and HB specimens. The flexural strength of the control specimen at E0 was 5.08 MPa. The inclusion of fibres in the concrete matrix increased the flexural strength and increases in the Vf further improved the flexural load capacity of concrete. For samples without any heat and HC fluids exposures (E0), the inclusion of 0.3%, 0.5% and 0.7% PF fibre led to the flexural strength increasing by 4.5%, 15.89% and 21.82%, respectively. With the inclusion of 0.3%, 0.5% and 0.7% SF fibre, the flexural strength increased by 15.5%, 17.89% and 22.82%, respectively. Similarly, with the inclusion of 0.3%, 0.5% and 0.7% HB fibre, the flexural strength increased by 13.13%, 17.89% and 32.29%, respectively. The flexural strength and descending branch after peak load were significantly improved in concrete reinforced with SF and HB compared to PF fibre. The flexural capacity of samples exposed to a high temperature and HC fluids decreased significantly, as shown in Figure 8b,c. The penetration of HC fluids into the beam sample is shown in Figure 8d. As discussed earlier, due to the melting of PF fibre and deterioration of tensile strength of fibres due to high temperature and HC fluid exposure, significant flexural strength loss was recorded. After 60 cycles of high-temperature and HC fluids exposure, the control specimen lost 23% of the initial flexural strength. For 0.3%, 0.5% and 0.7% PF specimens, flexural strength loss was 6.11%, 13.13% and 21.23% of their initial strength, respectively. For 0.3%, 0.5% and 0.7% SF specimens, flexural strength loss was 9.67%, 7.84% and 6.91%, respectively, and for 0.3%, 0.5% and 0.7% HB specimens, flexural strength loss was 18.38%, 16.93% and 11.59%, respectively. The flexural capacity of samples exposed to a high temperature and HC fluids decreased significantly, as shown in Figure 8b,c. The penetration of HC fluids into the beam sample is shown in Figure 8d. As discussed earlier, due to the melting of PF fibre and deterioration of tensile strength of fibres due to high temperature and HC fluid exposure, significant flexural strength loss was recorded. After 60 cycles of high-temperature and HC fluids exposure, the control specimen lost 23% of the initial flexural strength. For 0.3%, 0.5% and 0.7% PF specimens, flexural strength loss was 6.11%, 13.13% and 21.23% of their initial strength, respectively. For 0.3%, 0.5% and 0.7% SF specimens, flexural strength loss was 9.67%, 7.84% and 6.91%, respectively, and for 0.3%, 0.5% and 0.7% HB specimens, flexural strength loss was 18.38%, 16.93% and 11.59%, respectively.
In summary, the residual flexural strength and the post-peak deformation behaviour of FRC were significantly improved with PF and SF. Due to the hybridisations, hightemperature and HC fluids exposed samples' maximum residual flexural strength loss was 23% for control and 11.59% for 0.7% HB sample.
Visual Observations of Ruptured Sections of FRC
The ruptured sections of the FRC were carefully inspected after the flexural test. The increase in fibre fraction resulted in the smoothness of ruptured sections for FRCs but was reduced with an increase in high temperature and HC fluids exposures. The fracture energy between the cement pastes and aggregates interface decreased with high temperature and HC fluid exposures because high temperature decreases the bond between the cement paste and aggregates. Figure 9 shows the typical ruptured cross-section of the hybrid FRC specimen after the flexural test at E0. However, regarding the fracturing process concerned, there are no significant differences between the fractured faces. Thus, it suggests that fibre content or types do not play a significant role in fracture mode. FRC specimens exposed to 60 cycles of high temperature and HC fluids also failed with the same fracture modes, which means that temperature has no significant effect on fracture modes. In summary, the residual flexural strength and the post-peak deformation behaviour of FRC were significantly improved with PF and SF. Due to the hybridisations, high-temperature and HC fluids exposed samples' maximum residual flexural strength loss was 23% for control and 11.59% for 0.7% HB sample.
Visual Observations of Ruptured Sections of FRC
The ruptured sections of the FRC were carefully inspected after the flexural test. The increase in fibre fraction resulted in the smoothness of ruptured sections for FRCs but was reduced with an increase in high temperature and HC fluids exposures. The fracture energy between the cement pastes and aggregates interface decreased with high temperature and HC fluid exposures because high temperature decreases the bond between the cement paste and aggregates. Figure 9 shows the typical ruptured cross-section of the hybrid FRC specimen after the flexural test at E0. However, regarding the fracturing process concerned, there are no significant differences between the fractured faces. Thus, it suggests that fibre content or types do not play a significant role in fracture mode. FRC specimens exposed to 60 cycles of high temperature and HC fluids also failed with the same fracture modes, which means that temperature has no significant effect on fracture modes. However, PF FRC samples were split entirely into two parts since fibres were melted after repeated heating as also reported in SEM results, thus behaving like ordinary concrete. The SF has been seen protruding from the concrete matrix after the specimen has ruptured, as seen in Figure 9b,c; the PF, once it reached its maximum tensile strength, also ruptured. Pliya et al. [42] and Ç avdar et al. [43] also found that SF and HB fibres provide more flexural strength than single PF fibre-reinforced concrete. However, PF FRC samples were split entirely into two parts since fibres were melted after repeated heating as also reported in SEM results, thus behaving like ordinary concrete. The SF has been seen protruding from the concrete matrix after the specimen has ruptured, as seen in Figure 9b,c; the PF, once it reached its maximum tensile strength, also ruptured. Pliya et al. [42] and Çavdar et al. [43] also found that SF and HB fibres provide more flexural strength than single PF fibre-reinforced concrete.
Concrete Spalling
Spalling of concrete has a significant influence on concrete fire performance and can be a governing factor in determining the fire resistance of an RC structural member [43]. Some researchers [18] reported spalling in high-strength concrete once subjected to high temperatures. Similarly, ordinary concrete specimens exposed to recurring 60 cycles of high temperature and HC fluids suffered significant spalling, as seen in Figure 10. Though samples were exposed to 175 • C, the combined effect of chemical degradation due to aviation oils and repeated heating may have caused the spalling. PF samples also suffered partial spalling (Figure 10b) once exposed to high temperatures and HC fluids. However, no significant spalling in HB FRC samples. The main reason for spalling in the high strength concrete is its dense microstructure, which prevents moisture from escaping when exposed to high temperatures. In this study, the lack of spalling phenomenon in the hybrid fibre-reinforced samples' might be due to the melting of PF fibres, thus creating a microchannel, which helps release vapour pressure, and the SF prevents the spalling by stopping micro-crack propagation.
Concrete Spalling
Spalling of concrete has a significant influence on concrete fire performance and can be a governing factor in determining the fire resistance of an RC structural member [43]. Some researchers [18] reported spalling in high-strength concrete once subjected to high temperatures. Similarly, ordinary concrete specimens exposed to recurring 60 cycles of high temperature and HC fluids suffered significant spalling, as seen in Figure 10. Though samples were exposed to 175 °C, the combined effect of chemical degradation due to aviation oils and repeated heating may have caused the spalling. PF samples also suffered partial spalling (Figure 10b) once exposed to high temperatures and HC fluids. However, no significant spalling in HB FRC samples. The main reason for spalling in the high strength concrete is its dense microstructure, which prevents moisture from escaping when exposed to high temperatures. In this study, the lack of spalling phenomenon in the hybrid fibre-reinforced samples' might be due to the melting of PF fibres, thus creating a microchannel, which helps release vapour pressure, and the SF prevents the spalling by stopping micro-crack propagation.
Measurement of Thermal Properties
Concrete is an anisotropic and non-homogenous material where hydrated cement paste helps to keep aggregates held together. Due to its porosity, moisture content affects the pore conductivity of concrete. Thermal properties of concrete have a significant role in retaining its strength because the concrete structure deteriorates quickly once exposed to high temperature. In this study, the thermal conductivity and specific heat of FRC and control specimens subjected to recurrent higher temperatures and HC fluids were measured.
The experiment involves measuring the amount of heat flow q through the sample once the equilibrium condition is achieved. Thermal conductivity is k, the sample thickness is Δx, the temperature difference across the sample is ΔT and cross-sectional area is A through which heat flows influences the magnitude of the heat flow q. Fourier's law of conduction gives the relation between these parameters:
Measurement of Thermal Properties
Concrete is an anisotropic and non-homogenous material where hydrated cement paste helps to keep aggregates held together. Due to its porosity, moisture content affects the pore conductivity of concrete. Thermal properties of concrete have a significant role in retaining its strength because the concrete structure deteriorates quickly once exposed to high temperature. In this study, the thermal conductivity and specific heat of FRC and control specimens subjected to recurrent higher temperatures and HC fluids were measured.
The experiment involves measuring the amount of heat flow q through the sample once the equilibrium condition is achieved. Thermal conductivity is k, the sample thickness is ∆x, the temperature difference across the sample is ∆T and cross-sectional area is A through which heat flows influences the magnitude of the heat flow q. Fourier's law of conduction gives the relation between these parameters: Two heat flow transducers of the HFM 436, the heat flow instrument used to calculate the amount of heat flow (in volts) through the sample, is shown in Figure 11. The area through which heat flows is the same as the area measured by the heat flow transducer and remains the same for all samples. Therefore: Two heat flow transducers of the HFM 436, the heat flow instrument used to calculate the amount of heat flow (in volts) through the sample, is shown in Figure 11. The area through which heat flows is the same as the area measured by the heat flow transducer and remains the same for all samples. Therefore: For calculation of thermal conductivity and specific heat, ASTM C518 and IS EN 12667/12939 [44] was followed.
Thermal conductivity measured for specimens exposed to high temperature only, and high temperature and HC fluids is shown in Figure 12a,b. Thermal conductivity of PF, SF, and HB FRC ranges between 1.41 to 3.53 W/m·k compared to 2.02 W/m·k of the control specimen. HB and SF samples showed higher thermal conductivity due to the presence of micro steel fibre, which is an excellent conductor of heat [39]. With the increase in the number of heat exposures, a significant decrease in thermal conductivity was recorded. This thermal conductivity trend may be due to the rapid decrease in moisture level following the evaporation of pore and free water of cement paste due to recurrent hightemperature exposures [45]. A similar thermal conductivity reduction due to high-temperature exposure was also reported by Kodur et al. [46]. However, samples exposed to repeated high temperature and HC fluids showed higher thermal conductivity than samples exposed to heat only. This tendency of slightly higher thermal conductivity may be attributed to the repeated exposure to HC fluids, which causes an increase in the samples' moisture content.
For calculation of thermal conductivity and specific heat, ASTM C518 and IS EN 12667/12939 [44] was followed.
Thermal conductivity measured for specimens exposed to high temperature only, and high temperature and HC fluids is shown in Figure 12a,b. Thermal conductivity of PF, SF, and HB FRC ranges between 1.41 to 3.53 W/m·k compared to 2.02 W/m·k of the control specimen. HB and SF samples showed higher thermal conductivity due to the presence of micro steel fibre, which is an excellent conductor of heat [39]. With the increase in the number of heat exposures, a significant decrease in thermal conductivity was recorded. This thermal conductivity trend may be due to the rapid decrease in moisture level following the evaporation of pore and free water of cement paste due to recurrent high-temperature exposures [45]. A similar thermal conductivity reduction due to hightemperature exposure was also reported by Kodur et al. [46]. However, samples exposed to repeated high temperature and HC fluids showed higher thermal conductivity than samples exposed to heat only. This tendency of slightly higher thermal conductivity may be attributed to the repeated exposure to HC fluids, which causes an increase in the samples' moisture content.
A similar separate test was done to measure the value of specific heat Cp. The specific heat measured for specimens exposed to high temperature only and high temperature and HC fluids is plotted in Figure 12c,d against the number of exposures. The specific heat of PF, SF and HB FRC ranges from 1.03 to 1.4 compared to 1.15 of the control specimen. Kodur et al. [46] also reported lower specific heat values in ordinary concrete than the HSC. With the increase in the number of heat exposures, a significant decrease in the specific heat was recorded. SF and hybrid FRC show slightly higher specific heat values, which may be attributed to steel fibres. The use of PF fibres reduces the specific heat in comparison to other fibres. A similar separate test was done to measure the value of specific heat Cp. The specific heat measured for specimens exposed to high temperature only and high temperature and HC fluids is plotted in Figure 12c,d against the number of exposures. The specific heat of PF, SF and HB FRC ranges from 1.03 to 1.4 compared to 1.15 of the control specimen. Kodur et al. [46] also reported lower specific heat values in ordinary concrete than the HSC. With the increase in the number of heat exposures, a significant decrease in the specific heat was recorded. SF and hybrid FRC show slightly higher specific heat values, which may be attributed to steel fibres. The use of PF fibres reduces the specific heat in comparison to other fibres.
Moreover, the deterioration of concrete due to repeated high temperature and HC fluids exposures results in hairline cracks, which increased PF porosity and other FRC porosity. In contrast, ordinary concrete has lower specific heat because of its lower permeability and dense microstructure, requiring additional heat to convert moisture into vapours.
XRD Analysis for FRC
The XRD peaks for control and FRC specimens appeared to be similar in original condition. This trend reveals that fibres do not contain any crystalline mineral compounds. Nevertheless, the intensity height of the ordinary concrete and FRC sample in XRD was found to be different in some cases. The addition of SF and PF in the concrete matrix might have caused the increase/decrease in the peak intensity height and peak shift for the crystalline compound in the XRD, as shown in Figure 13 Moreover, the deterioration of concrete due to repeated high temperature and HC fluids exposures results in hairline cracks, which increased PF porosity and other FRC porosity. In contrast, ordinary concrete has lower specific heat because of its lower permeability and dense microstructure, requiring additional heat to convert moisture into vapours.
XRD Analysis for FRC
The XRD peaks for control and FRC specimens appeared to be similar in original condition. This trend reveals that fibres do not contain any crystalline mineral compounds. Nevertheless, the intensity height of the ordinary concrete and FRC sample in XRD was found to be different in some cases. The addition of SF and PF in the concrete matrix might have caused the increase/decrease in the peak intensity height and peak shift for the crystalline compound in the XRD, as shown in Figure 13. The XRD results of the fresh ordinary concrete specimen show various crystal phases such as P = Portlandite, C = Calcite, E = Ettringite, Q = Quartz, G = Gypsum, M=magnesium oxide, alite C 3 S = 3CaO·SiO 2 , mullite = 3Al 2 O 3 2SiO 2 and belite C 2 S = 2CaO·SiO 2 , similar results were also reported by other researchers [46,47]. According to Dittrich and Song et al. [18,48], when concrete is exposed to 80-90 • C due to thermal effect, ettringite in concrete can be decomposed and dehydrated. reported by other researchers [46,47]. According to Dittrich and Song et al. [18,48], when concrete is exposed to 80-90 °C due to thermal effect, ettringite in concrete can be decomposed and dehydrated. Similarly, when exposed to 200 • C, physical and chemical decomposition in mineral compounds usually occurs. Recurrent exposure of control and FRC samples to 175 • C might cause the ettringite dehydration and decomposition. Similarly, the peak position of quartz, ettringite and portlandite have shifted to a lower degree (2θ), as shown in Figure 12c-e. Additionally, the minerals' intensity height was significantly reduced. As stated by some researchers [44][45][46]49], this phenomenon may be due to the decomposition of crystal minerals when exposed to chemicals or high temperature. A few new peaks also appeared in the FRC specimens' XRD pattern after repeated high temperature and HC fluids exposure. Shill et al. [50] reported that the chemical reaction of concrete with HC fluids may cause the formation of salt and soap compounds on the concrete surface, which may cause a few new peaks.
However, a few of the picks like alite, mullite and calcite did not shift their position but a significant reduction in peak intensity was observed. Most mineral compounds in FRCs were affected by combined actions of HC fluid and a high thermal cycle.
Thermogravimetric (TG) Analysis
The thermogravimetric (TG) tests were conducted for control and FRC specimens after HC fluid and high-temperature exposure, as shown in Figure 14. The TG curve of the original concrete showed three rapid weight loss sections. The picks between 20 to 200 • C represent weight losses due to the escape of parts of the bound water and free water, though Song et al and Fares et al. [51,52] suggested that the free water was eliminated at 120 • C. Then, at 182 • C, gypsum decomposed along with ettringite [45] and carboaluminate hydrates, causing mass loss. This sharp mass loss in the 30-200 • C temperature range was evaluated to be about 3.19%, 3.34%, 1.36% and 1.15%, respectively, for Control E0, 0.3% PF E0, E40 and E60. Repeated exposure to HC fluid and high-temperature cycles caused the reduction in mass loss because after initial exposure, excess and bound water almost dries up. Similarly, with the increase in the number of exposures, the mass loss percentage in higher temperature ranges increased significantly, as seen in Table 5. Similarly, when exposed to 200 °C , physical and chemical decomposition in mineral compounds usually occurs. Recurrent exposure of control and FRC samples to 175 °C might cause the ettringite dehydration and decomposition. Similarly, the peak position of quartz, ettringite and portlandite have shifted to a lower degree (2θ), as shown in Figure 12c-e). Additionally, the minerals' intensity height was significantly reduced. As stated by some researchers [44][45][46]49], this phenomenon may be due to the decomposition of crystal minerals when exposed to chemicals or high temperature. A few new peaks also appeared in the FRC specimens' XRD pattern after repeated high temperature and HC fluids exposure. Shill et al. [50] reported that the chemical reaction of concrete with HC fluids may cause the formation of salt and soap compounds on the concrete surface, which may cause a few new peaks.
However, a few of the picks like alite, mullite and calcite did not shift their position but a significant reduction in peak intensity was observed. Most mineral compounds in FRCs were affected by combined actions of HC fluid and a high thermal cycle.
Thermogravimetric (TG) Analysis
The thermogravimetric (TG) tests were conducted for control and FRC specimens after HC fluid and high-temperature exposure, as shown in Figure 14. The TG curve of the original concrete showed three rapid weight loss sections. The picks between 20 to 200 °C represent weight losses due to the escape of parts of the bound water and free water, though Song et al and Fares et al. [51,52] suggested that the free water was eliminated at 120 °C. Then, at 182 °C, gypsum decomposed along with ettringite [45] and carboaluminate hydrates, causing mass loss. This sharp mass loss in the 30-200 °C temperature range was evaluated to be about 3.19%, 3.34%, 1.36% and 1.15%, respectively, for Control E0, 0.3% PF E0, E40 and E60. Repeated exposure to HC fluid and high-temperature cycles caused the reduction in mass loss because after initial exposure, excess and bound water almost dries up. Similarly, with the increase in the number of exposures, the mass loss percentage in higher temperature ranges increased significantly, as seen in Table 5.
The decomposition of the C-S-H and carboaluminate hydrates cause the loss of bound water from [52] The decomposition of the C-S-H and carboaluminate hydrates cause the loss of bound water from [52].
3.4CaO·2SiO 2 ·3H 2 O (C-S-H) → 3.4CaO·2SiO 2 + 3H 2 O·(180-300 • C) (6) Decomposition of portlandite: Decarbonation of calcium carbonate [53]: After the initial rapid decline, the actual weight loss rate stabilised between 200 • C and 350 • C. The second significant weight loss was observed between 350-600 • C, where portlandite started decomposing until completion of the process at around 550 • C [54,55]. The mass loss in this range was recorded to be 5.54% in PF E60 and 3.49% in HB E60. However, after 400 • C, mass loss was significantly reduced in FRC than in control specimens. In the 650-800 • C range, the release of carbon dioxide in addition to the decomposition of C-S-H (II) into wollastonite and larnite, contributed to the mass loss, which accounts for 3.645%, 4.11% and 4.26% of mass loss in Control E60, PF E60 and HB E60, respectively.
DSC Analysis
The DSC plots of the control, PF, and HB specimens are shown in Figure 15, supporting the TGA analysis' claims. The DSC plot and TGA analysis of specimens show similarity in FRC decomposition and dehydration behaviour with other researchers [56,57]. The release of free water, bound water of C-S-H gel and ettringite was recorded up to 200 • C [55,58]. However, few differences in the heating process were observed between FRC and control samples. Notably, below 120 • C, both PF and HB FRC offered a lower peak than control. This phenomenon could be attributed to the lower unit weight of FRC compared to the control specimen that resulted in the increased volume of mixtures and, therefore, more free water in the specimens.
Decarbonation of calcium carbonate [53]: CaCO3 → CaO + CO2 (700-900 °C) After the initial rapid decline, the actual weight loss rate stabilised between 200 °C and 350 °C. The second significant weight loss was observed between 350-600 °C, where portlandite started decomposing until completion of the process at around 550 °C [54,55]. The mass loss in this range was recorded to be 5.54% in PF E60 and 3.49% in HB E60. However, after 400 °C, mass loss was significantly reduced in FRC than in control specimens. In the 650-800 °C range, the release of carbon dioxide in addition to the decomposition of C-S-H (II) into wollastonite and larnite, contributed to the mass loss, which accounts for 3.645%, 4.11% and 4.26% of mass loss in Control E60, PF E60 and HB E60, respectively.
DSC Analysis
The DSC plots of the control, PF, and HB specimens are shown in Figure 15, supporting the TGA analysis' claims. The DSC plot and TGA analysis of specimens show similarity in FRC decomposition and dehydration behaviour with other researchers [56,57]. The release of free water, bound water of C-S-H gel and ettringite was recorded up to 200 °C [55,58]. However, few differences in the heating process were observed between FRC and control samples. Notably, below 120 °C , both PF and HB FRC offered a lower peak than control. This phenomenon could be attributed to the lower unit weight of FRC compared to the control specimen that resulted in the increased volume of mixtures and, therefore, more free water in the specimens. According to Noumowé and Li et al. [59,60], continuous dehydration of the C-S-H gel and melting of PF were seen between 200-300°C and at 480 °C, and a slight variation of heat flow was observed in those ranges. Between 400-500 °C, significant decomposition of Ca(OH)2 occurred with peak shifts with increased heat exposures. For recurrent exposure of FRC samples to high temperature, a significant reduction in peak intensity at 440 °C (in Figure 15a) was recorded in contrast to that of E0, particularly for 0.3% PF reinforced concrete after 60 exposures. This trend may be due to the continuous dehydroxylation of portlandite in the cement matrix. The phase transition α to the β phase of quartz was recorded at around 573 °C with a visible peak on DSC plots. Between 700-800 °C, decomposition of calcium carbonate occurs, which release carbon dioxide from the concrete. Sun and Xu [34] also found that between 700 and 800 °C, the calcium carbonate (CaCO3) decomposes, thus permitting CO2 to liberate from the concrete.
Microstructure Analysis
SEM/optical analysis was done on the polish surface taken from the specimen's core (up to 20 mm depth) to understanding the fibre-matrix interactions after repeated high temperatures and HC fluid exposures. Figure 16a shows PF fibres strewed in original FRC samples, and Figure 16b shows melted PF fibre due to high-temperature exposure. In the control specimen, repeated HC fluids exposure and high temperature create extra vapour pressure and heat-induced thermal cracks that may have caused the spalling (Figure 16e,f). Due to repeated exposure to 175 °C , there was a significant difference between PF and SF FRC porosity. When the PF fibre reinforced concrete was heated up to 175 °C , PF fibres start shortening in length due to relaxation, melting occurs. These micro-channels form a network of more permeable cement matrix, which contributes to the release of gas and water vapour, thus resulting in the reduction in the pore pressure [58,61]. This phenomenon helps to reduce the spalling tendency of concrete exposed to high temperature. Due to SF's high melting temperature, the development of such a microchannel was not observed in SF reinforced concrete. However, SF's presence helps bridge the cracks that develop due to repeated thermal exposures, as seen in Figure 16d. According to Noumowé and Li et al. [59,60], continuous dehydration of the C-S-H gel and melting of PF were seen between 200-300 • C and at 480 • C, and a slight variation of heat flow was observed in those ranges. Between 400-500 • C, significant decomposition of Ca(OH) 2 occurred with peak shifts with increased heat exposures. For recurrent exposure of FRC samples to high temperature, a significant reduction in peak intensity at 440 • C (in Figure 15a) was recorded in contrast to that of E0, particularly for 0.3% PF reinforced concrete after 60 exposures. This trend may be due to the continuous dehydroxylation of portlandite in the cement matrix. The phase transition α to the β phase of quartz was recorded at around 573 • C with a visible peak on DSC plots. Between 700-800 • C, decomposition of calcium carbonate occurs, which release carbon dioxide from the concrete. Sun and Xu [34] also found that between 700 and 800 • C, the calcium carbonate (CaCO 3 ) decomposes, thus permitting CO 2 to liberate from the concrete.
Microstructure Analysis
SEM/optical analysis was done on the polish surface taken from the specimen's core (up to 20 mm depth) to understanding the fibre-matrix interactions after repeated high temperatures and HC fluid exposures. Figure 16a shows PF fibres strewed in original FRC samples, and Figure 16b shows melted PF fibre due to high-temperature exposure. In the control specimen, repeated HC fluids exposure and high temperature create extra vapour pressure and heat-induced thermal cracks that may have caused the spalling (Figure 16e,f). Due to repeated exposure to 175 • C, there was a significant difference between PF and SF FRC porosity. When the PF fibre reinforced concrete was heated up to 175 • C, PF fibres start shortening in length due to relaxation, melting occurs. These micro-channels form a network of more permeable cement matrix, which contributes to the release of gas and water vapour, thus resulting in the reduction in the pore pressure [58,61]. This phenomenon helps to reduce the spalling tendency of concrete exposed to high temperature. Due to SF's high melting temperature, the development of such a microchannel was not observed in SF reinforced concrete. However, SF's presence helps bridge the cracks that develop due to repeated thermal exposures, as seen in Figure 16d For HB FRC, a better result was observed due to the combined effect of PF and SF, where melting of PF created a microchannel, which helps to release moisture pressure developed for HC fluids exposure. Simultaneously, SF's presence helps prevent thermal cracks in concrete; thus, HB FRC showed better performance in high temperatures and HC fluid exposures.
Conclusions
Airfield concrete pavements face different severe loading conditions during their service life. The most significant one is repeated thermal shock from APU's exhaust of jet For HB FRC, a better result was observed due to the combined effect of PF and SF, where melting of PF created a microchannel, which helps to release moisture pressure developed for HC fluids exposure. Simultaneously, SF's presence helps prevent thermal cracks in concrete; thus, HB FRC showed better performance in high temperatures and HC fluid exposures.
Conclusions
Airfield concrete pavements face different severe loading conditions during their service life. The most significant one is repeated thermal shock from APU's exhaust of jet engines and chemical attack from the leaked HC fluids. This effect causes the spalling of the concrete surface, which requires additional maintenance and replacement costs. Recent investigations showed that the inclusion of fibres improves the residual mechanical properties of concrete exposed to high temperatures and HC fluids. This study includes an experimental investigation on adding PF fibre and SF in the FRC mixes to improve the residual mechanical and thermal properties. The following conclusions are drawn as a result of this study.
•
The compressive strength loss was more significant in specimens exposed to the combined high temperature and HC fluids than to the high temperature only. Specimens with PF showed slightly more strength loss than control specimens under similar loading conditions. The addition of SF resulted in more residual compressive strength than PF and control specimens. However, hybrid FRC performed much better in retaining residual compressive strength.
•
The hybridisation of concrete using PF and SF was slightly more effective in enhancing the flexural properties of FRC than control and PF reinforced concrete. The specimens' flexural strength decreased gradually with an increase in the number of high-temperature exposures due to the degradation of the tensile strength of fibres and bonding between the fibres and the matrix. Besides, PF's melting created extra pores in the matrix to reduce the vapour pressure due to heating. Moreover, SF improves the FRC post-peak behaviour and reduced the crack propagation and the risk of spalling compared to OPC concrete.
•
The thermal properties of FRC improved with the addition of SF and PF fibres. However, recurrent exposure to heat and HC fluids caused a gradual decrease in thermal conductivity and specific heat. Due to the excellent thermal conductivity, SF had the highest positive impact above HB FRC, and PF fibres exhibited the most negligible impact on thermal properties.
•
Mass loss was prominent among PF and control samples than SF and HB samples. TG and DS test also confirmed the decomposition of concrete at the higher temperature besides loss of free and bound water, which has contributed to mass loss. • Microstructure analysis revealed that the PF's melting created extra pores in FRC that release the extra vapour pressure due to high-temperature exposures. The addition of SF reduces the microcracks propagation in the concrete by the bridging effect, thus enhancing the tensile capacity of FRC specimens.
In Brief, the above tests showed that the addition of fibres has a positive impact on the residual thermo-mechanical properties of FRC, except for PF fibre. Therefore, incorporating 0.7% HB fibres seems to be a promising way to enhance concrete resistance to thermally induced explosive spalling. In this experiment, concrete specimens were exposed to high temperatures from all directions, but the airfield APU exhaust applies one-directional heat. It will be worth considering that exposing concrete samples to one direction (1D) heating as it happens under the APU exhaust. It will reveal the effect of uneven thermal stress distribution developed from 1D exposure compared to this research's all-directional heating.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 2021-05-10T02:15:40.864Z | 2021-04-30T00:00:00.000 | {
"year": 2021,
"sha1": "6117409a2c7cf50179b1ba4aef6211a884cb7ef6",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/14/11/2725/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8d854b47c5f0e3ce615cc7dbc0cbd99c89c7235",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1698332 | pes2o/s2orc | v3-fos-license | Right atrial pressure alterations during echocardiography-guided-catheterization predict tricuspid valvular impairment: a novel method for the creation of a rabbit model of Staphylococcus aureus endocarditis
Background We previously reported the use of a catheter system to damage the tricuspid valve and create infectious endocarditis (IE) in an animal model. The current study aims to create a faint IE model suitable for antibiotic prophylaxis using a low bacterial inoculum. We also aim to explore a way to quantitatively assess valvular impairment and to predict the success of the IE models during catheterization. Methods Ninety rabbits were assigned to two groups according to the density of bacteria inoculated (1 × 105 CFU for Group A and 1 × 104 CFU for Group B). A catheter system consisting of a polyethylene catheter and a guide wire were used to damage the valve. The catheter system was passed through the rabbits’ tricuspid valves under echocardiographic guidance. A pressure transducer was used to assess right atrial pressure (PRA) before and just after valvular damage to calculate the pressure alterations (ΔPRA). The animals in group A and B were divided into 3 subgroups according to the ΔPRA (0–5 mmHg for Groups A1 and B1; 5–10 mmHg for Groups A2 and B2; 10–15 mmHg for Groups A3 and B3). Staphylococcus aureus (ATCC 29213) inoculation was performed 24 hr after cardiac catheterization. Results Faint IE was confirmed in 20%, 93.3%, 26.7%, 6.7%, 20%, and 33.3% of the rabbits in Groups A1, A2, A3, B1, B2, and B3, respectively. There was no difference in the LV/RV ratio and VTR of the No-IE, faint-IE, and severe IE animals. Faint IE rabbits had a larger ΔPRA than No-IE rabbits (7.81 ± 1.21 vs. 2.48 ± 1.0, P < 0.01, for Group A; 7.60 ± 1.32 vs. 2.98 ± 1.08, P < 0.01, for Group B). The ΔPRA of severe IE and faint IE rabbits was significantly different (13.11 ± 1.31 vs. 7.81 ± 1.21, P < 0.01, for Group A; 12.73 ± 1.44 vs.7.60 ± 1.32, P < 0.01, for Group B). Conclusion ΔPRA could be used to assess valvular impairment. Controlling the value of ΔPRA during catheterization and inoculating of an appropriate dose of bacteria was associated with a successful IE model.
Background
Infective endocarditis (IE) is a life-threatening disease associated with a high mortality rate [1][2][3]. It continues to be a challenge in clinical practice. Animal models of IE are widely used to provide a better understanding of the pathogenesis, pathophysiology [4][5][6], and treatment of intracardiac infections [7][8][9]. Normally, blood flows smoothly through cardiac valves. If these valves are damaged, the risk of bacterial attachment is increased. There are two main procedures used to create an IE model, damaging a valve and inoculating the host with bacteria. A large inoculum of bacteria is used for most IE models in order to guarantee the successful creation of IE [10][11][12][13]. Durack et al. injected 10 8 colony-forming units (CFU) of viridans streptococci IV to produced IE in 100% of the experimental animals [14]. In fact, the dose of bacteria in humans may be 10 to 100 times higher than the lowest infecting dose necessary to produce IE in 90% of the animals. IE models created by injecting a large number of bacteria may be unsatisfactory as they may not be relevant to the human situation. For example, the magnitude of bacteremia observed in humans after certain procedures such as tooth extraction is generally 10 1 to 10 2 CFU/ml of blood [15]. A small inoculum similar to that found in humans with low-grade bacteremia is appropriate when the aim of the study is to evaluate the efficacy of a given antibiotic regimen given prior to bacterial challenge [16]. Using animal models with a large bacterial inoculum may require a prolonged administration of higher doses of antibiotics to achieve successful prophylaxis.
Small injected doses of bacteria may produce vegetations too small to be confirmed by echocardiography. This may be more relevant to the human situation [17][18][19]. We speculate that an appropriate aggravation for valvular damage might guarantee the successful creation of this IE model. We previously reported [20] a rabbit model of right-sided IE using a catheter to damage the tricuspid valve. This model is useful in evaluating the therapeutic effect of different medications on IE.
We have improved this model to simulate the clinical setting of antibiotic prophylaxis. We reduced the size of the bacterial inoculation and evaluated valve impairment in this model of IE.
Ethics statement
All animal procedures were approved by the Animal Ethics Committee of China Medical University and were conducted in compliance with institutional regulations.
Experimental design
We speculated that the density of a bacterial inoculation can affect the development of IE. One hundred animals were used in the study. Ninety were randomly assigned to two groups according to the density of Staphylococcus aureus inoculated. 1 × 10 5 CFU (Group A) or 1 × 10 4 CFU (Group B) of S. aureus was inoculated 24 hr after right heart catheterization. Five animals also had right heart catheterization performed without inoculation (Group C) and five animals were only inoculated with bacteria (Group D).
The extent of damage to the tricuspid valve affected the success rate of IE in the animal models. This might be related to the degree of right atrial pressure change related to instrumentation (ΔP RA ). We quantified valvular impairment by measuring the ΔP RA. ΔP RA was controlled to a predicted range of values when damaging the tricuspid valves. For example, the catheter system was manipulated forward and backward across the tricuspid valve ten times to produce severe damage and once to produce mild damage. The number of times the guide wire was passed through the tricuspid valve led to different values of ΔP RA . Animals in Group A and B were divided into three sub-groups related to this effect of the wire on the pressure gradient across the tricuspid valve. The pressure gradients were 0-5 mmHg, 5-10 mmHg, and 10-15 mmHg, respectively for Groups A 1 /B 1 , A 2 /B 2 , and A 3 /B 3 . Each subgroup contained 15 animals. The ratio of left ventricle to right ventricle diameter (LV/RV), and peak velocity of tricuspid valve regurgitation (V TR ) were evaluated during these procedures.
There were three outcomes in this animal model. Animals developed severe IE with vegetations visualized by echocardiography, faint IE with vegetations too small to be detected by echocardiography but confirmed by macroscopic and histologic examination of the cardiac valves, or did not develop IE by echocardiography or by histologic findings. The ΔP RA and echocardiographic findings just after manipulating the tricuspid valves were used to assess model outcomes.
Experimental animals
One hundred New Zealand white rabbits (50 males and 50 females), weighing 2-2.5 kg, were obtained from Beijing Animal Institute (Beijing, China). They were kept in the animal facility at China Medical University. The rabbits were maintained in individually ventilated cages and supplied heat-sterilized food and distilled water ad libitum.
Right heart catheterization under echocardiographic guidance
Right heart catheterization was performed under echocardiographic guidance as previously described [20]. Briefly, an incision was made in the left inguinal region after induction of anesthesia with pentobarbital sodium (30 mg/Kg ip). The left femoral vein (LFV) was exposed and dissected to allow introduction of the catheter system. The catheter system consisted of a polyethylene catheter with a steel guide wire. The catheter system was prepared by flushing the external wall and the lumen of the catheter with heparinized sterile saline. The catheter system was introduced into the LFV to the entrance of the right atrium under echocardiographic guidance (Figure 1-A). When the catheter system touched the atrial septum, it was necessary to adjust the direction of the guide wire to advance the system pointing to the tricuspid valve (Figure 1-B) and then across it (Figure 1-C). The guide wire was advanced until about 1 cm of the tip was exposed. The guide wire was moved forward and backward repeatedly over the fragile tricuspid valve to induce damage.
Evaluation of right atrial pressure (P RA ) The guide wire was slowly backed out after the catheter system was visualized entering the right atrium ( Figure 1-A). A 20 ml injection syringe (with 10 ml saline inside) was attached to the end of catheter using a three-way stopcock. Placement was checked using Figure 1 The process of right-heart catheterization under echocardiographic guidance. Echocardiography with an aortic short axis view was used to visualize the position of the catheter system. The catheter system was visualized entering the right atrium (A). The guide wire was slowly backed out to connect a pressure transducer to measure pressures in the right atrium. The pressure transducer was removed to allow introduction of the guide wire into the catheter system again. Adjust the direction of the guide wire to advance the catheter system pointing to the tricuspid valve (B) and then getting through it (C). The guide wire inside the catheter was then used to damage the tricuspid valve. Color Doppler was in action to visualize tricuspid valve regurgitation (D). The peak velocity was then evaluated by Pulsed Doppler. The catheter system was then repositioned at the entrance of the right atrium (E). The guide wire was removed and a pressure transducer connected to measure the pressure of the right atrium after valvular impairment. blood aspiration. A pressure transducer (YP-100, Yilian Medicine, Ltd., Shanghai, China) attached to the Biological and Functional Experimental System (BL-420, Taimeng Science technology, Ltd., Chengdu, China) was used to monitor pressure. Data was input into a personal computer using a USB 2.0 data cable. TM_WAVE Bio-signal acquisition and analysis software (version 1.0, Taimeng Science technology, Ltd., Chengdu, China) was used to display the P RA in real-time. Five randomly selected portions of the waveform were used to calculate the mean P RA (Figure 2-A). The pressure transducer was removed to allow introduction of the guide wire into the catheter system and to damage the tricuspid valve, which could be confirmed by the existence of tricuspid regurgitation by Color Doppler (Figure 1-D). The catheter system was then backed out into the right atrium ( Figure 1-E). The guide wire was removed and the P RA monitored again (Figure 2-B).
Echocardiographic measurement
An ultrasound system (Philips iE 33) with a 4-12 MHz transducer was used in this study.
The heart was imaged using a four chamber view immediately after damaging the tricuspid valve. The diameters of the left and right ventricles were measured at the level of atrioventricular valve annulus and at the middle of the two ventricles. The average dimensions of the two ventricles were used to calculate the LV/RV ratio. A four chambers or aortic short axis view was used to confirm the existence of tricuspid valve regurgitation and to acquire the value of V TR .
Echocardiography was performed to confirm the existence of cardiac vegetations on aortic short axis and four-chamber views when the rabbits were sacrificed or moribund.
Production and confirmation of IE
The rabbits were injected with a S. aureus (ATCC 29213) suspension (1 × 10 5 or 1 × 10 4 CFU) using a marginal ear vein. The body temperature was recorded during the challenge. Blood cultures were performed using intracardiac puncture at the end of the experiment. The presence of IE was confirmed by macroscopic and histologic examination of the cardiac valves. Tricuspid valves were excised and prepared for light microscopy. The specimens were fixed in 10% formalin, embedded in paraffin and cut into 5 μm sections. Sections were stained with hematoxylin & eosin.
Quantitative microbiologic analysis
Bacterial titers per gram of tissue were determined for cardiac vegetations obtained from rabbits with IE. The tissue fragments were crushed in tryptic soy broth (Sigma, U.S.) and 10-fold serial dilutions were inoculated onto tryptic soy agar plates. Bacterial titers were reported in terms of CFU.
Statistical analysis
Data was expressed as means ± SD. Differences between multiple means were compared using one-way ANOVA, Tamhane's T2 test or Bonferroni test when the variance was heterogeneous or homogeneous, respectively. P < 0.05 was considered statistically significant. All statistical analyses were performed using commercially available software (SPSS, release 17.0).
Results
Only one of the 90 rabbits in experimental groups A and B died prematurely. None of the control animals (Group C and D) died. The body temperature in all inoculated rabbits (Groups A, B, and D) increased to at least 40°C during the 72 hr following inoculation. No discernible difference was observed between rabbits in Group A and B and no febrile response occurred in group C.
Macroscopic and histologic examination of the tricuspid valves demonstrated the presence of IE in 42 rabbits, 33 from Group A and 9 from Group B. Vegetations 1-10 mm in size were adherent to the tricuspid valve. The vegetations appeared yellow or gray-white. Histologic examination showed infectious vegetations with destruction of valvular tissue. Heavy inflammation composed mostly of neutrophils was present (Figure 3).
Blood cultures were positive in 29 rabbits from Group A and 3 rabbits from Group B. All except one were sacrificed on day 5. The mean bacterial count of the cardiac vegetations was measured for each rabbit in Group A and B (Table 1).
Echocardiographic parameters and ΔP RA were measured after damaging the tricuspid valve. The LV/RV ratio was about 2:1 in all the rabbits in Group A and B. There was no difference in the LV/RV ratio or V TR of the No-IE, Faint-IE, and Severe-IE subgroups. The ΔP RA of the Faint IE rabbits was significantly higher than that of the No-IE rabbits (P < 0.01). The ΔP RA of the Severe-IE rabbits was significantly higher than that of the Faint-IE rabbits (P < 0.01) ( Table 2). The ΔP RA during catheterization was able to predict the success of the IE models. Animals in Groups A and B were divided into groups according to ΔP RA . Faint IE was confirmed in 20%, 93.3%, 26.7%, 6.7%, 20%, and 33.3% of the rabbits in Group A 1 , A 2 , A 3 , B 1 , B 2 , and B 3 , respectively ( Table 3). The Faint-IE model best correlated with a ΔP RA of 5-10 mmHg and an inoculation of 1 × 10 5 CFU bacteria (Group A 2 ).
Discussion
IE animal models have been used to evaluate proposed changes in medical treatment. Cardiac valves are damaged and the animal inoculated with bacteria. The most primitive method was to damage cardiac valves by complex surgical procedures [21]. This modality was criticized because of the high mortality rate of the experimental animals. A catheter-related model for IE has been widely used since first introduced by Garrison and Freedoman [12] in 1970s. Usually, the catheter was used to destroy aortic valves when it was retrograded from carotid artery into ascending aorta [13]. However, it might cause relatively high mortality for the experimental animals. According to our experience, the heart rates of experimental animals such as rats or rabbits were very high. The retrograded catheter may severely damage aortic valves and then cause heart failure immediately. Many animals died within minutes after catheterization.
We developed a method [20] using a catheter system to damage the tricuspid valves under echocardiographic visualization. We reported the use of the catheter system with a guide wire inside to damage the tricuspid valves. We demonstrated a high survival rate and high infection rate with this reliable model. There were several limitations to this model. First, a high density of bacteria was used to ensure successful infection. This IE model was suited for therapeutic purposes, but not for prophylactic purposes. Second, the extent of tricuspid damage could not be quantitatively assessed when injuring the cardiac valves. Which animals were suitably infected could not be determined until several days after bacterial inoculation.
In the current study, we evaluated the use of different inoculum doses in the development of an IE model. Two doses of bacteria were inoculated. 1 × 10 5 CFU was best suited for an IE prophylactic model. Previous studies have commonly used 1 × 10 6 CFU [22] or even more [10][11][12][13]. The use of a relatively low density of inoculated bacteria is a highlight of this study.
Tricuspid valve regurgitation was confirmed in this new model just after damaging the tricuspid valves. There was not a difference in the V TR of No-IE, Faint-IE, and Severe-IE animals. Inoculation density and valvular damage are the two main factors controlled in the development of an IE model. V TR was not a sensitive predictor of valvular impairment or the successful creation an IE model. Differences in ΔP RA were related to valvular impairment and IE. The value of ΔP RA was useful in guiding valvular damage to make the IE model. A faint-IE model was reliably created when ΔP RA was kept between 5-10 mmHg. The faint-IE model is relevant in assessing the impact of medications on early stage IE. This right-sided IE model simulates human invasive diagnostic and therapeutic procedures, such as right heart catheter angiography, atrial septal defect occlusion, placement of intracardiac pacemakers [23].
In the current study, 90 animals were used to create IE models. ΔP RA and inoculation dosage were used to evaluate the technique. Echocardiographic and physiologic parameters were also evaluated. The relatively large number of experimental animals and the detailed experimental design were strengths of this study. We found that control of ΔP RA during catheterization and tricuspid valve damage was associated with the successful creation of IE, another highlight of our study. Our method is different from previous reports [12,13] as the catheter system was removed immediately after catheterization. This modification is suitable for modeling intracardiac catheter procedures and the associated use of antibiotics [11,24]. Compared with traditional techniques, we proposed a novel method to create an IE model in rabbits. A relatively low bacterial innoculum made this model more suitable for a prophylaxis purpose. In addition, echocardiographic guidance made it possible for the catheter systems precisely damage the valve. The most important point, we demonstrated that ΔP RA could precisely assess valvular impairment and then predict the success of the IE models during catheterization. It could save lots of experimental animals using this novel method. Limitation of the study is the time of the procedures. Repeated measurements of ΔP RA could prolong the procedures. For our study, the mean time for catheterization and measurement of ΔP RA was 25 ± 4 min.
At last, we could read some additional message from the current study. V TR could not reflect the extent of valvular impairment just after injury, which indicates that V TR is not a sensitive index to reflect ΔP RA. Also, one may imagine that it is not always reliable to estimate pulmonary artery pressure using V TR. For example, V TR is very low (tricuspid regurgitation could hardly be detected) while the pulmonary artery pressure is above 50 mmHg measured by cardiac catheterization for a newborn.
Conclusion
We described a method to create a faint IE model using a relatively low bacterial inoculum. ΔP RA was used to assess valvular impairment. Controlling the value of ΔP RA during catheterization and inoculating of an appropriate dose of bacteria was associated with a successful IE model. | 2016-05-04T20:20:58.661Z | 2014-06-20T00:00:00.000 | {
"year": 2014,
"sha1": "117a574277b1ca6cbc95146df73e98361f060d27",
"oa_license": "CCBY",
"oa_url": "https://cardiovascularultrasound.biomedcentral.com/track/pdf/10.1186/1476-7120-12-21",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60c1b678f8e1c57a6f93bf21b6a93c6f2e01f052",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270540460 | pes2o/s2orc | v3-fos-license | A comparison of four decontamination procedures in Reusing healing abutments: An in vitro study
Objectives This study aimed to compare the effect of four decontamination methods on the level of residual contaminants in the re-usage of dental healing abutments. Materials and methods In this experimental study, 50 used healing abutments were divided into five groups of ten as follows: 1. Control group: healing abutments were submerged in the ultrasonic device then autoclaved at 121 °C for 15 min; 2. Hypochlorite group: Same procedure as the control group, but the healing abutments were additionally immersed in 3 % hypochlorite for 20 min; 3. Chlorhexidine group: Same procedure as the control group, but the healing abutments were additionally treated with 12 % chlorhexidine; 4. Air polishing group: Same procedure as the control group, but the healing abutments were subjected to air polishing; 5. Hydrogen peroxide group: Same procedure as the control group, but the healing abutments were additionally exposed to 3 % hydrogen peroxide. Then, all healing abutments were stained with a protein-specific stain, Phloxine B. Five photographs were taken of each healing abutment, with four capturing the body (shank)and one capturing the top. All images were analysed, to measure the stained (contaminated) areas of each sample. The obtained data were analysed using statistical software (significance set at p < 0.05). Results The one-way ANOVA test indicated that the average percentage of contamination residues on the occlusal surface did not show a significant difference among the five groups: control: 5.5 ± 2.8, sodium hypochlorite: 4.9 ± 2.5, Chlorhexidine: 5.3 ± 2.5, air polisher: 3.1 ± 1.8 and Hydrogen peroxide: 4.8 ± 3.1. (p = 0.26). The average percentage of residual contamination on the body surfaces (shank part) was significantly lower in the air polisher (1.7 ± 1.1) and sodium hypochlorite (2.4 ± 1.1) groups compared to the other three groups (Control: 6.1 ± 2.3, Hydrogen peroxide: 4.6 ± 0.7, Chlorhexidine: 5.4 ± 2.4) (p < 0.05). Conclusion The results of this study showed that the use of sodium hypochlorite and air polishing, alongside autoclaving and ultrasonic cleaning, effectively reduced residual contamination on the body surfaces of healing abutments.
Introduction
The healing abutment (HA), usually made of titanium or a titanium alloy, serves as an intermediate, interim metal element used during the second step of implant surgery until the placement of a permanent prosthesis.During this time, it directly contacts soft tissues, facilitating the formation of a tight seal around the implant.This biological barrier prevents the penetration of bacteria and their products into deeper underlying areas, subsequently inhibiting infection of the pre-implant tissue, marginal bone loss (MBL), and further soft tissue recession.Additionally, it contributes to the development of an acceptable emergence profile in terms of aesthetics, particularly in the anterior region (Chokaree, Poovarodom, Chaijareenont, Yavirach, & Rungsiyakull, 2022;Odatsu et al., 2020).
One of the crucial requirements to achieve these objectives is to establish an aseptic environment, and a prerequisite for this is the utilization of sterile instruments.HAs are typically labeled as single-use or disposable items by companies.However, it is common for clinicians to reuse them for the same patient during prosthesis delivery or even for different patients (Kyaw, Hanawa, & Kasugai, 2020).Moreover, certain companies reprocess and resupply HAs after undergoing cleaning, sterilization, and repackaging for sale (Cakan, Delilbasi, Er, & Kivanc, 2015).The underlying motivation for these practices primarily revolves around reducing expenses for both patients and clinicians, as well as minimizing material wastage in the industry (Kyaw, Abdou, Nakata, & Pimkhaokham, 2022b).However, concerns persist regarding the potential for cross-contamination and cross-infection, despite the implementation of conventional cleaning and sterilization methods (Bidra, Kejriwal, & Bhuse, 2020;Wadhwani, Schonnenbaum, Audia, & Chung, 2016).An inadequacy to eliminate the contaminations leads to a decrease in the adhesion, proliferation, and spread of fibroblast and epithelium cells in contact with the implant surface.Consequently, the formation of a robust biological barrier that prevents bacterial penetration is compromised, leading to the risk of pre-implant tissue infection and implant failure (Canullo et al., 2020).
As defined, sterilization refers to the complete removal of all viable microorganisms at the warranty level of acceptable sterility (Rees, 2012).However, research has indicated that removing protein and amino acid residues adhered to titanium surfaces can be challenging, and residual organic material may remain on HAs even after standard sterilization practices (Abreu, Estepa, Naqvi, Nares, & Narvekar, 2023;Almehmadi, 2021;Burioni et al., 2024;Gul, Zafar, Ghafoor, & Khan, 2024;Kyaw, Abdou, Nakata, & Pimkhaokham, 2022a).In a study by Wadhwani et al .(Wadhwani et al., 2016) the results revealed that 99% of HAs still had proteins and peptides remaining on one or more sites, even after following standard cleaning and sterilization practices.Moreover, a review (Bidra et al., 2020) concluded that conventional methods such as ultrasonic cleaning and autoclaving are insufficient in completely removing contamination for HA reapplication.On the other hand, investigations suggest that implementing a three-step protocol involving cleaning, disinfection, and sterilization could be a promising strategy to overcome these limitations (Gehrke et al., 2022).Some studies have suggested the application of air polishing with Glycine powder as a beneficial and safe method for removing residual contamination from HA (Cochis et al., 2013).Furthermore, the use of topical chemical disinfectant agents is common in dentistry, including sodium hypochlorite, chlorhexidine, and hydrogen peroxide.
Hydrogen peroxide offers several advantages over other chemical agents, including its broad spectrum of activity against various pathogens through the oxidation of diverse cell molecules.Furthermore, hydrogen peroxide is considered safe for use on open wounds and has been found to promote the proliferation of epithelial cells (Wiedmer, Petersen, Lönn-Stensrud, & Tiainen, 2017).Chlorhexidine, recognized as a gold-standard antibacterial agent, has demonstrated remarkable efficacy in inhibiting biofilm formation and gingivitis.It exhibits dosedependent bacteriostatic and bactericidal effects.Moreover, it has demonstrated effectiveness against yeasts, dermatophytes, and certain lipophilic viruses (Bürgers, Witecy, Hahnel, & Gosau, 2012).Additionally, sodium hypochlorite, a well-established and traditional disinfectant widely used in dentistry for various applications such as endodontic treatment and reducing biofilm accumulation in removable prostheses, can be considered an excellent alternative as a disinfection agent.It offers beneficial properties, including bactericidal and fungicidal effects, tissue dissolving capabilities, and low toxicity at normal concentrations (Fukuzaki, 2006).
Based on the limited available evidence regarding the reuse of HA and the conflicting findings in research studies, our primary objective is to compare the efficacy of ultrasonic cleaning bath and autoclave, along with the utilization of air polishing, sodium hypochlorite, chlorhexidine, and hydrogen peroxide as additional options in a three-step method, in contrast to the conventional method.The goal is to assess the extent of residual contamination on the surface of healing abutments to determine their suitability for reuse.The null hypothesis in this study was that the amount of residual contamination in all groups and all surfaces was the same.
Materials and methods
This experimental study was conducted at the Dental Implants Research Center, School of Dentistry, Isfahan University of Medical sciences, Isfahan, Iran.The sample size for this study was calculated using the below formula, with an alpha level set at 0.05, an effect size (d=10%), and a power level set at 0.80, (Z 1− α 2 =1.96,Z 1− β = 0.84,and σ=16.7).A total of 50 HAs,10 per group showed to be necessary.
Fifty healing abutments (UFII®, DIO Implant Co., Pusan, Korea) with a diameter of 4.5 and a height of 3 mm, which had been previously used once (for amount of at least 4 to 6 weeks in the patients' mouth) in fifty patients, were randomly divided by numerical draw into the following five groups, and each HA was assigned a code.
Group 2 (Hypochlorite): 10 used HAs were subjected to a 10-minute submersion in the ultrasonic device at 60 • C, followed by a 20-minute application of hypochlorite 3%.They were then washed with sterile saline solution for 1 min and autoclaved at 121 • C for 15 min (Gosau et al., 2010;Sonntag & Peters, 2007).
Group 3 (Chlorhexidine): 10 used HAs were subjected to a 10-minute submersion in the ultrasonic device at 60 • C, followed by a 5-minute application of chlorhexidine 12%.They were then washed with sterile saline solution for 1 min and autoclaved at 121 • C for 15 min (Mariotti & Rumpf, 1999).
Group 4 (Air polish): 10 used HAs were subjected to 10-minute submersion in the ultrasonic device at 60 • C, followed by air polishing system with glycine powder (Perio-mate, NSK, Japan) for 15 s at a distance of 5 mm from the surface and an inclined angle of 45-60 degrees.They were then autoclaved at 121 • C for 10 min (Chew, Tompkins, Tawse-Smith, Waddell, & Ma, 2018).
Group 5 (Hydrogen peroxide): 10 used HAs were subjected to a 10minute submersion in the ultrasonic device at 60 • C, followed by a 1minute application of hydrogen peroxide 3%.They were then washed with sterile saline solution for 1 min and autoclaved at 121 • C for 15 min (Alotaibi, Moran, Grufferty, Renvert, & Polyzois, 2019).
After the aforementioned procedures, all HAs were packed in sealed pockets containing 2 ml of Phloxine B solution (Sigma Aldrich).Phloxine B is a fluorescein derivative stain and one of its applications is to detect proteins.Proteins can get degraded to tenacious biological parts known as prions' which are infectious agents that can retain their infective potential over time.Recent studies have used Phloxine B to evaluate the ability of various cleaning protocols to clean dental implant abutments (Rasooly, 2005;Bali, Bali, & Nagrath, 2011;Stacchi, Berton, Porrelli, & Lombardi, 2018;Chew, Tompkins, Tawse-Smith, Waddell, & Ma, 2018).
. Subsequently, the HAs were placed in ultrasonic bath for 10 min and then rinsed in deionized water and dried at room temperature (Wadhwani et al., 2016) (Fig. 1).The HAs were then observed and imaged using a Stereomicroscope (Trinocular Zoom Stereo Microscope, SMP 200, HP, USA) equipped with a digital camera (Moticam 480 Digital camera, SP10.0224,Motic Instruments Inc., USA) at ×15 magnification.Five photographs were taken of each HA, four of the body surfaces and one of the top surfaces.To capture images of the surfaces, the HAs were secured within a rectangular putty mold with four equal surfaces (Figs. 2 and 3).The putty molds were rotated 90 degrees three times to obtain four images of the body surfaces and one image of the top N. Naghsh et al.
surface for each HA.Subsequently, all images were analysed using Cool PHP tools software (NJ, USA) with color code to measure the stained (contaminated) areas of each sample.The contamination surface area was expressed as a fraction (%) of the total surface area within the image pixels (Langbein, 2016).For Statistical Analyses, the obtained data were analysed using one-way ANOVA, and Tukey's post-hoc tests, utilizing the SPSS statistical software version 22.0.The significance level was set at p < 0.05.
Results
The analysis involved 50 HAs to identify the fluoxetine B-stained surfaces (contaminated surfaces).The results of the one-way ANOVA test revealed no significant difference in the average amount of contamination residues on the occlusal surface among the five groups (p= 0.26) However, a significant difference was observed in the average level of contamination residues on the body surfaces among the five groups (p < 0.001) (Table1).Furthermore, the Tukey post-hoc test demonstrated that the average level of contamination on the body surfaces was significantly lower in the air polisher and sodium hypochlorite groups compared to the other three groups (p < 0.05).No significant difference was found between the air polisher and sodium hypochlorite groups, and there were no significant differences among the control, hydrogen peroxide, and chlorhexidine groups (Table2).
Discussion
Reusing HAs in daily clinical practice is common, primarily due to cost-effectiveness for both patients and clinicians.According to evaluations, the cost of an HA for manufacturing companies is approximately 15% of the price of an implant (Bidra et al., 2020).However, there are limitations, such as the risk of cross-contamination in patients, which restricts their application (Browne et al., 2012).The findings of this study revealed that both mechanical methods (air polishing) and chemical methods (using hydrogen peroxide, chlorhexidine, and sodium hypochlorite) were unable to completely eliminate residual contamination on HAs which is align with recent studies (Almehmadi, 2021;Chew et al., 2018).One concern in the re-usage of HAs, is prion contamination, which the risk of transmission in HA re-usage after the decontamination methods are very low, and it is more related to the rereuse of endodontic files in the pulp in close contact with peripheral nerves (Eswaramurthy et al., 2022;Rapisarda, Bonaccorso, Tripi, & Condorelli, 1999).However, it is worth considering, the potential presence of prions on implant drills, which are typically considered reusable by implant manufacturing companies, as well as biomaterials sourced from animals such as cows (Bidra et al., 2020;Chew et al., 2018;Rapisarda et al., 1999).Furthermore, some studies have not regarded the reuse of HAs as an ideal procedure (Abreu et al., 2023;Sahin & Dere, 2021).However, they have suggested that by effectively applying mechanical or chemical methods to remove debris and contamination prior to autoclaving, promising results can be achieved (Almehmadi, 2021;Sánchez-Garcés, Jorba, Ciurana, Vinas, & Vinuesa, 2019).In a study conducted by Browne et al. (Browne et al., 2012), the used elements, including implant impression copings and healing abutments, exhibited sterility levels equal to new elements without any visible distortion after multiple rounds of sterilization using steam autoclave and Chemiclave protocols.Additionally, it was reported the levels of pro-inflammatory cytokines such as IL-1β and TNF-α in peri-implant crevicular fluid and clinical parameters like bleeding index and plaque index didn't show significant differences in patients using both unused and reused HAs (Lashkarizadeh, Foroudisefat, Abyari, Mohammadi, & Lashkarizadeh, 2022).
Based on the data obtained from this study, the null hypothesis that the amount of residual contamination in all groups and in all surfaces was the same, was rejected.Simultaneously, the findings of this study indicate no significant distinction among the five groups at the occlusal level.This observation could be attributed to the geometry and shape of the HA, influencing the accumulation of contaminants in different areas.It is possible that the limited access to the deep recesses within the occlusal part of the HA contributed to this outcome.This finding aligns with the results of a study conducted by Michelle Chew et al. (Chew et al., 2018), which demonstrated effective decontamination primarily on the body surfaces, followed by the bottom and then the occlusal surface.Furthermore, the results obtained indicate a significant difference in contamination levels on the body surfaces, with lower levels observed in the air polishing and sodium hypochlorite groups compared to the other groups.The residual contamination in the chlorhexidine and hydrogen peroxide groups was comparable to that of the control group.In a study evaluated the soft tissue response to clinically retrieved and decontaminated cover screws in a rat model, researchers reported that hydrogen peroxide, in conjunction with CO2 laser, could be clinically utilized for adequate decontamination of titanium surfaces, but not when used alone (Mouhyi, Sennerby, & Van Reck, 2000).
The positive impact of air polishing on removing residual organic contamination from HAs without causing harm was observed in this study, which is consistent with the findings of Chew's study (Chew et al., 2018).They found that using air polishing with erythritol powder effectively eliminated contamination.However, it is important to note that none of the decontamination methods employed should alter surface characteristics such as roughness, wettability, or surface energy.Nevertheless, the efficacy of air polishing as a decontamination method remains uncertain due to its potential impact on the surface characteristics and topography of healing abutments and needs further investigation (Louropoulou, Slot, & Van der Weijden, 2014).
The beneficial effect of sodium hypochlorite is likely attributed to the release of potent oxidizing agents, including free radicals.In addition to its disinfecting properties, sodium hypochlorite can dissolve organic residues on the HA surface (Abuhaimed & Abou Neel, 2017).Recent studies also reported the combination of sodium hypochlorite with electrochemical decontamination is more effective than sodium hypochlorite alone, and can remove soft and hard deposits, without altering the HAs surface topography and HA reuse can be considered multiple times in this combined decontamination protocol (Kyaw et al., 2023;Kyaw et al., 2020).It should also be acknowledged that variations in the effectiveness of sodium hypochlorite observed in different studies may be attributed to differences in concentration, duration of application, and application method.
When evaluating the reutilization of HAs, factors such as the type of healing abutment (titanium, stainless steel, zirconia, or polymer), the number of times the HA has been used, whether it was used on one patient or multiple patients, the duration of HA usage, and whether it was used in one-stage or two-stage surgery, as well as its use in guided bone regeneration procedures, should also be taken into consideration.
A limitation of this study was the small sample size, and the evaluation of residual contamination in the bottom surface of the HA was not conducted using the method employed.Additionally, the identification of non-protein residual contamination and the source of the remaining proteins was not evaluated.HA contamination differs from patient to patient based on diet, oral microflora, and oral hygiene, which come from saliva, food debris, and organic material like blood and epithelial cells (Rompen, Domken, Degidi, Farias Pontes, & Piattelli, 2006), making it difficult to standardize the type of prior use in different patients.Ultimately, the matter of sterilization and reuse of healing abutments continues to pose challenges in terms of safety, ethics, and cost.Consequently, further research and studies involving larger sample sizes are deemed essential to address this issue.Additionally, in future in vivo research, it is important to focus on the clinical implications related to the healing of soft and hard tissues and the assessment of biological complications.
Conclusion
In none of the studied groups, the contamination was completely eliminated in reused HAs.Additionally, the findings of this study indicate that incorporating sodium hypochlorite and air polishing, alongside autoclaving, can serve as an effective approach to minimize residual contamination on the body surfaces of utilized titanium HAs.Also, no significant difference was observed in the average amount of contamination residues on the occlusal surface among the five groups.
Author contribution N.N developed the theoretical framework of this manuscript, performed and supervised the project, and contributed to the final version of the manuscript.A.H prepared the manuscript and summed up the data.A.M developed the theoretical framework of this manuscript.J.Y helped in writing the article.Z.P and N.KH measured the requested parameters and performed the project.All authors read and approved the final version before submission.
Ethics statement.
The present study was approved by the Ethics Committee of Isfahan University of Medical Sciences, Isfahan, Iran.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1 .
Fig. 1.Ultrasonic device and HAs placed in phloxine B solution.
Table 1
The Average percentage of contamination residing on the occlusal and body surface in five groups.
Table 2
Pairwise comparison of the average percentage of residual contamination on body surfaces between groups using Tukey's Post Hoc test. | 2024-06-17T15:06:00.648Z | 2024-06-01T00:00:00.000 | {
"year": 2024,
"sha1": "6fb76241231298964c5a653f1da653dbf974fa3b",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.sdentj.2024.06.013",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bbc2c11333dd5386a08f27cbaa78064d00499d83",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
115032252 | pes2o/s2orc | v3-fos-license | Evaluation of quality management systems implementation in medical diagnostic laboratories benchmarked for accreditation
Accreditation is the process which ensures that certification practices are implemented in laboratories to enhance their quality and efficiency. It in turn helps laboratories to improve technical processes, achieve competitive advantage and increase market share. To achieve accreditation, successful implementation of the laboratory quality management system (LQMS) is a requisite. In this study, evaluation of quality system implementation in small, medium and large sized laboratories, covering management and technical requirements were carried out. The study analysis was carried out by scoring the implementation of quality system in various operational activities of laboratory system. Data was gathered by auditing the laboratories using check list for the purpose of international organization for standardization (ISO) quality management system implementation. This study, emphasize that training is an essential element, and can play a major role in creating awareness and understanding to implement quality system in medical testing laboratory. There should be real time training on various aspects of laboratory activities. The training needs should be evidence based and assess the competency of laboratory staff, and evaluate staff performance in order to maintain world class service of the laboratory. The quality indicators can be used for benchmarking and improving services. The study conclude that LQMS in medical testing laboratories explicate the need for understanding current standard requirements of quality system implementation and maintenance to improve the quality of service of the laboratories and facilitate accreditation. A break down in implementation of quality systems can cause a decline in quality services and hence accreditation.
INTRODUCTION
Accreditation is a procedure by which medical testing laboratories are approved for their demonstrated capability and competence in executing each and every process of operation. The major gain of accreditation of medical testing laboratories is quality test reports which are accepted internationally. Quality of laboratory reports is always maintained when quality systems are established and followed by the laboratory personnel *Corresponding author. E-mail: manickamtamilselvis@gmail.com. Author(s) agree that this article remain permanently open access under the terms of the Creative Commons Attribution License 4.0 International License which in turn leads to customer satisfaction and confidence leading to increase in performance and productivity. Accreditation also offers a change in total operational process for implementing standards. Other advantages of accreditation are having accessibility, affordability, scalability and sustainability.
To implement uniformity/harmonization in laboratory testing process, international organization for standardization (ISO) is a worldwide federation, which publishes guidelines as international standards. The other agencies are international laboratory accreditation cooperation (ILAC) and the international accreditation forum (IAF). By following the guidelines of these agencies, the laboratory releases the test results and is certified to be standard, unique and accepted all over the world. At national level, National Accreditation Board for Laboratories (NABL) is an autonomous accreditation body, acting under the department of science and technology, government of India and its objective is to provide third party assessment of quality and technical competence for medical testing laboratories. NABL promotes development and maintenance of good clinical and laboratory practices (GCP, GLP) in compliance with existing standard practices in testing and calibration, that is, technical and management requirements and competencies. It is also involved in establishing and maintaining international standard of identification for national program. GLP issued by Department of Science and Technology also helps to get more structured approach to achieve quality in the laboratory, Organization for economic cooperation and development (OECD, 1999).
Laboratories are the core function in the health quality system. The result of a test is an essential and life-saving support within the health care system and ensures accurate and reliable test results. Therefore, qualityassured testing of patient samples is vital (WHO, 2006). Implementing quality in laboratories is in a better position to meet the requirements of international standards. Accreditation is most effective when it is rooted in a policy framework for evaluating laboratory quality and patient safety (Trevor et al., 2010). Accreditation will build trust with the consumer in all of the sectors. Furthermore, accreditation will raise the medical testing to internationally acceptable and comparable levels.
The laboratory quality management system (LQMS) has not received its full attention in the areas of medical testing laboratory operations. As per the current revised standard by BSI (2012) every laboratory should have the quality system to manage all the technical and management process and the process flow of the Quality Management System (QMS) as sown in Figure1. Implementing quality system in the laboratory not only provides certification but also credibility to the competency among the laboratories. Accreditation process will ensure the quality of the test results and in turn assures quality. The present study was undertaken to follow up of implementation of current regulatory compliance related to quality systems in medical testing laboratories.
The study analysis was done to identify, improve and maintain the implementation of quality system standard in the laboratory process. The measures to improve and achieve the current requirement of quality system in the testing laboratories are discussed.
Methodology
This study was conducted in several medical testing laboratories as well as standalone laboratories. The study conducted existing system study analysis in the context of implementation of quality system according to NABL 15189: 2012 standard (1), and GLP guidelines. This study context is regional. Real time study analysis was conducted as per the existing WHO scores based on the pre developed check list.
Study selection
Systematic study analysis was done on implementation of quality system, and for its potential relevance to improving the quality of medical testing in laboratories. The study evaluated in depth according to the standard requirements (OECD, 1999).
Studies included:
(1) Laboratory specified valid criteria for quality system improvement and appropriate to laboratory as per published guidelines.
(2) Implementation of these criteria in all its testing process.
(3) The scope of testing in which the laboratory specified for accreditation.
(4) Laboratory quality control issues appropriate to testing.
The study, studied existing condition in implementing quality system by personnel for specific activities. The study included qualified, trained key personnel laboratory director, quality manager, technical manager and senior laboratory technician in this study. The study was done in small, medium and large sized laboratory with score sections on document and records, organization and personnel, equipment, purchasing and inventory, process control internal/external quality assessment and facilities and safety. This study was carried out to investigate implementation of quality system and its maintenance as per ISO 15189.
RESULTS AND DISCUSSION
Quality system audit A quality system implementation audit was done in small, medium and large sized laboratories, which do or do not have awareness of basic quality system in its operation and applying or renewing accreditation. As per World Health Organization checklist, top scores of the clauses are given in the Table 1. The study analysis was done on the represented parameters in the X axis and the points were given based on the implementation of existing quality system. The data is represented in a 100% stacked horizontal bar graph. The study analysis reveals that the laboratories are implementing and practicing quality system in its operation irrespective to the size of the laboratory ( Figure 2). However, there were differences in terms of facilities and safety aspect when small laboratories are compared with medium and large size laboratories.
Ignorance in quality Implementation
To achieve laboratory effort on its measurable objectives implementation of QMS facilities is essential. In this system study, the study found that the quality system management implementation process was not practiced effectively in the laboratories. A manual of procedure was not developed to support efficient, effective, high quality operation and appropriate laboratory services irrespective of the size of the laboratory. A brief written statement describing the Laboratory's intended action with respect to attain a specific requirement of the standard BSI (2012) was not implemented in the laboratories. Quality standards were not implemented in the laboratory processes or series of inter-related steps involved in examination that uses instruments, reagents, staff and Perecentage (%) Figure 2. Comparison of quality system implementation in small, medium and large sized laboratories.
other related resources to get the test results efficiently. A written test procedure was not prepared in most of the laboratories. Technician does the test by using kit instructions as a routine practice. Other staff also learned the same practice and followed without proper documentation. Awareness among the staff is very poor regarding implementation of quality standards. Internal audits were not done to see the progress and performance of the laboratory. Non-conformities observed during audit were not discussed with the management in Management Review Meeting (MRM) to improve the overall quality standards in its entire operational process.
Deficiency in personnel training
A good quality system is developed by scheduled training program implemented and ensuring that each staff of the laboratory is suitably trained to meet the skills required for undertaking job responsibilities. In this study, it is found that training of the personnel was not effectively done in the laboratories. Awareness of quality system implementation in operational process was lacking among the personnel. Training on test procedures is insufficient in the staff. Based on survey of training records and audits training on safety process especially handling the infectious samples was deficient in the laboratories. General training on how to write the standard operation procedure, and maintain documentation in all laboratory process was not given to the personnel. Training on biological waste management is poor. Training of personnel was not consistent. Training on mentioned subjects, its effective implementation and evaluation is given as a pie chart representation in Figure 3. By implementing all the mentioned training to the laboratory proves 100% performance compliance.
Inadequate quality assurance
The laboratories are required to have adequate quality service (QA) personnel in place. It is important that records are kept of all control and standard results which help the assessor to overview the laboratory performance. In this study, it is found that laboratories are not having adequate QA personnel in place. Records are not kept updated. QA is not given priority at the laboratory. There is negligence in evaluation of personnel performance and insufficient coordination with the quality system implementation. The administrator is poor in management and skills. There was no simultaneous training which can help and improve personnel performance to solve troubleshooting.
Need for quality service
In recent decades, due to the competitiveness, worldwide laboratories have realized that a good quality service is a key area for the commercial success and its development. Quality is essential where the measure of performance and satisfaction of the client or customer is to be placed foremost. Quality management is to assess the level of quality in operational process and to improve it. Accreditation reassures quality by giving an opportunity for laboratory to function in a highly organized way to achieve the quality in its operations.
Measures for implementing quality
Laboratories should standardize and implement quality improvement process, from lot to lot reagent verification, external quality assurance services (EQAS), inter laboratory comparison (ILC), equipment calibration, instrument comparison of methods to the control of records and documentation. Errors can be prevented and arrested by preventive action and root cause analysis for nonconformities. Real time process of quality management in laboratory need to be followed and comply with the standards BSI (2012) and can lead to quality achievement. Quality system procedures (QSPs) meant to execute policies can serve as guide to streamline laboratory work. The most important step towards the process of achieving quality is to follow the pyramid structured documentation process which includes QSM, QSP, dept manual and all the forms and records used in laboratory operational process ( Figure 4).
Quality manager (QM) in the laboratory needs to be cautious in addressing training needs of new and existing staff. Training can play a major role in creating awareness and understanding to implement quality system in medical testing laboratory. There should be real time training on various aspects of laboratory activities. The training needs should be evidence based and assess the competency of laboratory staff and evaluate staff performance in order to maintain world class service of the laboratory. The QM holds the key to sustain quality by remaining vigilant and creating a system for routine audit to monitor all the activities and a continuous training on relevant medical education for all the laboratory staff. Quality is a philosophy and by implementing the principles of quality system in day to day operation of laboratory is vital to sustain quality (Chopra et al, 2012). In order to maintain the quality standard of the laboratory, understanding the quality concept, terms and definitions of Quality Management System (QMS) is a key aspect of improving quality. The laboratory staff and management should understand the importance of audits and QMS.
Implementation process of quality system ensures that customer requirements are achieved consistently. Quality management system will also define time scales and internal review mechanisms to implement quality in laboratory process. Awareness raising sessions for staff should be conducted on the implementation of the system. Maintenance of quality needs full support and commitment of the entire laboratory. Evaluation process of internal audit, corrective action on findings, management review meetings and ongoing internal audits provides an opportunity to refine the quality management system policies and procedures by Framework for Assessment of Environmental Impact (FASSET, 2004).
To achieve laboratory effort on its measurable objectives and implementation of QMS facilitates, a manual of procedure must be developed to support efficient, effective, high quality of operation and appropriate to the laboratory services. It should also include accurate and precise test results, appropriate test selection, timely reporting, and correct interpretation of test results, and recommendations for further investigations. Written instructions/standard operating procedures (SOP), describing the way to carry out a step in the process of an examination, how one should perform an activity should be designed to meet quality policy and objectives and to direct and control an organization with regard to quality should be the precedence to achieve implementation of quality in the laboratories.
The laboratory is providing the information for the diagnosis, prevention, or treatment of any disease to assess the health of, human beings, therefore implementation of quality system is one of the essential requirement in order to maintain quality standard in the performance of all activities from sample collection to release of reports. Audit findings should be addressed along with its corrective action and preventive action. In case of any major non conformities root cause, analysis must be done along with investigation for complete closure of the addressed findings (6). Laboratory should conduct internal audit as per the standard (BSI, 2012). Good laboratory practice entails planning, conducting and reporting the entire laboratory operation. It also includes personnel training, job responsibilities of laboratory personnel including key personnel, examination, data collection, quality system for continual improvement (Burke, 2014). If any aspect GLP is not followed, yet the same should be addressed with sound reasoning for the occurred deviation and it does not necessarily invalidate the process. The study director must explain why the deviation occurred and assess its impact on data integrity as per 21 CFR 58 subpart j (Shahram and Susan, 2009). The most relevant subjects to be audited in the laboratory are quality system, personnel, documentation and records, laboratory controls, validation, change control and complaints. The checklist that was used during the initial laboratory evaluation should also be verified during the audit. A qualified quality auditor, from quality assurance (QA) and/or a quality control (QC) expert would be the recommended to do the audit. The auditors should focus on the effectiveness of the laboratory controls for the procedures: form test process and all its related process (APIC, 2012). Internal quality control procedures must be practiced for all testing methods used by the laboratory and quality control data sheets and summaries of corrective action should be retained for documentation (Gershy-Damet et al., 2010).
Medical laboratory diagnostic testing services have an important role in the human health care. Assessing the quality of laboratory services using quality indicators requires a systematic, transparent, and consistency in analysis, since the analysis is considered one of the important consequences on patient care and health. Laboratory quality indicators are identified as one of the continual improvement in laboratory testing process. The general awareness of the standard (BSI, 2012) among the laboratory personnel may result in excellence in monitoring of the total testing process. Test order appropriateness, patient identification and specimen collection, patient satisfaction with phlebotomy, sample Identity, sample preparation and transport, specimen Inadequacy and rejection, blood culture contamination, sample container Information error, analysis, proficiency testing performance, result reporting, inpatient laboratory result availability, turn around time, clinician satisfaction with laboratory services are the areas of work where the laboratory can constantly monitor the performance to customize the quality of services Shahangian and Snyder (2009) and the QM has to give equal importance to all the addressed parameters includes from patient identification to analysis, turnaround time (TAT) and clinician satisfaction ( Figure 5). Total TAT is considered as one of the critical aspect of quality indicator. TAT is established in the laboratory in consultation with its clients. At least 80% of specimens received must be processed within the stated TATs to receive an accreditation rating. TATs will be interpreted as the time from receipt of the specimen in the laboratory until the reporting of results.
Customer satisfaction is generally considered as one of the quality indicator of laboratory services and related to null test report errors or no delays in release of reports and appropriate utilization of laboratory services and its associated costs. Laboratory administrator being the immediate person to know for most laboratory services including timely reporting, communication of relevant information, and notification of significant abnormal results to any outcomes, laboratories can carry out their quality system development on a sustainable basis by using the existing checklist prepared until all areas of quality are fully developed (Kusum and Silva, 2005). Assessing the effectiveness of quality system by anticipating errors, developing clear systems and procedures, ensuring that staff is trained for the tasks they perform and validate all operations, follow SOP and change control Pereira (2015) should be done.
Agreement with contract service providers, and where after satisfactory quality assessment done by the QM, the signed documents of both parties should be maintained. Laboratory confidentiality agreement should be signed by all the laboratory staff to assure patient safety, patient confidentiality, data integrity and compliance with good clinical practice (GCP). Laboratory facilities, system, equipment, examination procedures, QC, data recording, personnel records, report of all documents should be done at intervals as per the audit plan. Performance measures of the laboratory-related quality are very important in case of any nonconformity which is beyond the acceptable range for which laboratory is responsible to comply with the requirements of standard (BSI, 2012).
Laboratory testing and its related process of improvements certainly have the potential to improve outcomes of interest and consequences in the laboratory quality systems. Medical testing is often the principal basis for more costly downstream care. It also features prominently pay-for-performance guidelines and compliance standards, making it a potential target for cost savings under global payment plans (Song et al., 2011). Other areas that have not been adequately monitored are corrective and preventive actions and their effective implementation in which they have been shown to improve the provision of service. Because there are so many processes involved in laboratory testing, there is considerable challenge in identifying, defining, and, ultimately, implementing indicators that cover the various stages of the total laboratory testing process (15).
In medical diagnostic laboratory, the challenge will be to continue to improve quality practices and to continue to support laboratories in achieving accreditation. Laboratory personnel must have knowledge of the ISO requirements for medical testing laboratories as well as expertise in the area of work and to increase awareness of EQAS. Thus, both accreditation and participation in EQAS are accepted as effective and important tools to improve the accuracy and reliability of quality standards in testing process.
During recent decades, quality management sciences have advanced dramatically in many industries, operational standards are defined, and the information technology revolution has opened up new possibilities for highly reliable service. Nevertheless, experience suggests that the application of these sciences, standards and information technologies in health care seems to advance slowly and unevenly (Schneider, 2014).
Laboratory testing is the single highest volume of medical activity and drives clinical decision-making across medicine (Alexander, 2012). Thus, overall quality system improvement of a diagnostic laboratory at present condition depends on understanding of standard compliance of the personnel involved. Medical testing is considered appropriately if it supports the standard of care, which in turn is defined according to patient outcomes, improving laboratory utilization (18) when it follows standard compliance. Therefore, laboratory professionals should comply with the NABL 15189:2012 standards to achieve major harmonization of laboratory test results (individual results, reference-and decision levels). NABL 15189:2012 standardization of all the mentioned aspects helps to improve the overall quality of service and will have an enormous economic impact, and will contribute to uniform test results over time and space (Müller, 2010(Müller, , 2000 and customer satisfaction. A well defined documented process provides perfect evidence to the assessors that the system is suitable for its intended use.
CONCLUSION
Overall, in the present study found out that the quality system was not effectively implemented in the laboratories, and the study discussed what could be done to improve and implement quality system in its practice. A more detailed evaluation of documentation in quality system and the quality indicators should be done. Considerable variation and inconsistency in key terms, definitions, implementation, measurement and reporting practices should be solved in order to improve subjective evidence and its importance. The study, recommend that there should be a regular training for all the staff in order to create awareness and interest to implement quality in laboratory process. The quality indicators should be used for benchmarking and improving services. LQMS in medical testing laboratories explicate the need for understanding current standard requirements of quality system implementation and maintenance to improve the quality of service of the laboratories and facilitate accreditation. A break down in implementation of quality systems can cause a decline in the quality services and hence accreditation. | 2018-12-29T15:36:53.552Z | 2015-08-31T00:00:00.000 | {
"year": 2015,
"sha1": "5d77e730c8ccc94fc2779f2e7079fe57e46fcd3e",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/JMLD/article-full-text-pdf/F1F335655143",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f95a1528ae791f555d72b2ed2d6fcc737f2067d1",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
214693345 | pes2o/s2orc | v3-fos-license | Lightweight Photometric Stereo for Facial Details Recovery
Recently, 3D face reconstruction from a single image has achieved great success with the help of deep learning and shape prior knowledge, but they often fail to produce accurate geometry details. On the other hand, photometric stereo methods can recover reliable geometry details, but require dense inputs and need to solve a complex optimization problem. In this paper, we present a lightweight strategy that only requires sparse inputs or even a single image to recover high-fidelity face shapes with images captured under near-field lights. To this end, we construct a dataset containing 84 different subjects with 29 expressions under 3 different lights. Data augmentation is applied to enrich the data in terms of diversity in identity, lighting, expression, etc. With this constructed dataset, we propose a novel neural network specially designed for photometric stereo based 3D face reconstruction. Extensive experiments and comparisons demonstrate that our method can generate high-quality reconstruction results with one to three facial images captured under near-field lights. Our full framework is available at https://github.com/Juyong/FacePSNet.
Introduction
High-quality 3D face reconstruction is an important problem in computer vision and graphics [38] that is related to various applications such as digital actor [3], face recognition [5,48] and animation [3,21,40]. Some works have been devoted to solving this problem at the source, using either multi-view information [15,43] or the illumination conditions [1,2,37]. Although some of these methods are capable of reconstructing high-quality 3D face models with both low-frequency structures and high-frequency details like wrinkles and pores, the hardware environment is hard to set up and the underlying optimization problem is not easy to solve. For this reason, 3D face reconstruction from a single image has attracted wide attention, with many works focusing on reconstruction from an "in-the-wild" im- * Corresponding author age [35,17,14,18]. Although most of them can reconstruct accurate low-frequency facial structures, few can recover fine facial details. In this paper, we turn our attention to the photometric stereo technique [42], and consider the nearfield point light source setting due to its portability. We aim to reconstruct high-precision 3D face models with sparse inputs using photometric stereo under near point lighting.
State-of-the-art sparse photometric 3D reconstruction methods such as [9,13] can reconstruct 3D face shapes with fine geometric details. However, they are mainly based on conventional optimization approaches with high computational costs. In recent years, great progress has been made in deep learning-based photometric stereo [22,12] that can estimate accurate normals. However, these existing methods cannot be directly applied to solve our problem. First, they mainly focus on general objects with dense inputs, making them not suitable for our 3D face reconstruction problem with sparse inputs. Second, they assume parallel directional lights, which is difficult to achieve in practice especially for indoor lighting conditions. To solve the sparse photometric stereo problem fast and well, we must address the following challenges. First, without the parallel lighting assumption, calibrating the lighting direction of near-field point light sources is much more complex and needs to solve a nonlinear optimization problem. Moreover, the reconstruction problem with less than three input images is ill-posed, and thus prior knowledge of the reconstruction object is needed.
In this paper, we combine deep learning-based sparse photometric stereo and facial prior information to reconstruct high-accuracy 3D face models. Currently, there is no publicly available dataset of face images captured under near point lighting conditions and their corresponding 3D geometry. Therefore, we construct such a dataset for the network training. We use real face images captured using a system composed of three near point light sources and a fixed camera. Based on this system, we develop an optimization method to recover 3D geometry along with calibrating light positions and estimating normals. Using our reconstructed 3D face models and publicly available high-quality 3D face datasets, we augment our dataset by synthesizing a large number of face images with their cor- Our dataset for network training consists of photos with different expressions, captured using a system composed of three near point light sources and a fixed camera. (c): Our proposed method can recover fine details even with a single image input (left). For images captured by a smartphone, with a hand-held light at locations not seen in our training dataset, our method also works well in this casual setup. (right). responding 3D shapes. With the real and synthetic data, we design a two-stage convolutional neural network to estimate a high-accuracy normal map from sparse input images. The coarse shape, represented by a parametric 3D face model [6] and the pose parameters, are recovered in the first stage. The face images and the normal map obtained from the first stage are fed into the second-stage network to estimate a more accurate normal map. Finally, a high-quality 3D face model is recovered via a fast surface-from-normal optimization. Fig. 1 shows the pipeline of our method. Comprehensive experiments demonstrate that our network can produce more accurate normal maps compared with state-of-the-art photometric stereo methods. Our lightweight method can also recover fine facial details better than state-of-the-art single image-based face reconstruction methods.
Related work
Photometric Stereo. The photometric stereo (PS) method [42] estimates surface normals from a set of images captured under different lighting conditions. Since the seminal work of [42], different methods have been proposed to recover surfaces in this manner [44,20]. Many such methods assume directional lights with infinite light source positions. On the other hand, some works focus on reconstruction under near point light sources, using optimization approaches that are often complex and time-consuming [46,28,32]. To achieve efficiency for practical applications with near point light sources, we only adopt optimization-based methods to construct the training dataset and then train the neutral model for lightweight photometric stereo for 3D face reconstruction. The most related work to our training data construction step is [9], which proposed an iteration pipeline to reconstruct high-quality 3D face models.
Deep Learning-Based Photometric Stereo. With the development of convolutional neural networks, various deep learning-based approaches have been proposed to solve pho-tometric stereo problems. Most of them can be categorized into two types according to their input. The first type requires images together with corresponding calibrated lighting conditions. Santo et al. [34] proposed a differentiable multilayer deep photometric stereo network (DPSN) to learn the mapping from a measurement of a pixel to the corresponding surface normal. Chen et al. [12] put forward a fully connected convolutional network to predict the normal map of a static object from an arbitrary number of images. A physics-based unsupervised neural network was proposed by Taniai et al. [39] with both surface normal map and synthesized images as output. Ikehata [22] presented an observation map to describe pixel-wise illumination information, and estimated surface normals with the observation map as input to an end-to-end convolutional network. Furthermore, Zheng et al. [47] and Li et al. [26] solved the sparse photometric stereo problem based on the observation map. This type of work assumes lighting directions as prior and cannot handle unknown lighting directions. The second type directly estimates lighting conditions and normal maps altogether from the input images. A network named UPS-FCN was introduced in [12] to calibrate lights and predict surface normals. Later, Chen et al. [11] proposed a two-stage deep learning architecture called SDPS-Net to handle this uncalibrated problem. Both types focus on solving photometric stereo problems under directional lights which is difficult to achieve in practice, and most of these methods do not perform well with sparse inputs. In this paper, we solve the sparse uncalibrated photometric stereo problem under near-field point light sources.
Single Image-Based 3D Face Reconstruction. 3D face reconstruction from a single image has made great progress in recent years. The key to this task is to establish a correspondence map from 2D pixels to 3D points. Jackson et al. [23] proposed to directly regress a volumetric representation of the 3D mesh from a single face image with a convolutional neural network. Feng et al. [16] designed a 2D representation called UV position map to record 3D positions of a complete human face. Deng et al. [14] directly regressed a group of parameters based on 3DMM [6,7,31]. All these works can reconstruct the 3D face model from a single image but cannot recover geometry details. Recently, this issue has been addressed with a coarse-to-fine reconstruction strategy. Sela et al. [36] first constructed a coarse model based on a depth map and a dense correspondence map and then recovered details in a geometric refinement process. Richardson et al. [33] developed an end-to-end CNN framework composed of a CoarseNet and a FineNet to reconstruct detailed face models. Jiang et al. [24] designed a three-stage approach based on a bilinear face model and the shape-from-shading (SfS) method. Li et al. [27] recovered face details using SfS along with an albedo prior mask and a depth-image gradient constraint. Tran et al. [41] proposed a bump map to describe face details and use a hole filling approach to handle occlusions. Chen et al. [10] recovered high-quality face models based on a proxy estimation and a displacement map. For 3D face reconstruction from caricature images, Wu et al. [45] proposed an intrinsic deformation representation for extrapolation from normal 3D face shapes.
Most existing works approximated the human face as a Lambertian surface and simulated the environment light using the spherical harmonics (SH) basis functions, which is not suitable for the near point lighting condition due to a large area of shadows. Based on our constructed dataset, we also design a network that can reconstruct a 3D face model with rich details from a single image captured under the near point lighting condition.
Dataset Construction
In this paper, we propose a lightweight method to reconstruct high-quality 3D face models from uncalibrated sparse photometric stereo images. As there is no publicly available dataset that contains face images with near point lighting and their corresponding 3D face shapes, we construct such a dataset by ourselves. Given face images captured under different light sources, we would like to solve for the albedos and the normals of the face model such that the intensities of the resulting images under calibrated lights are consistent with the observed intensities from the input images. This problem may be ill-posed with only three input images due to the presence of shadows. Therefore, we utilize a parametric 3D face model as prior knowledge, and propose an optimization method to estimate accurate normal maps. In this section, we first introduce some related basic knowledge, and then present how we construct the real image-based dataset and synthetic dataset.
Preliminaries
Imaging Formula. We approximate the human face as a Lambertian surface and simulate the near point lighting condition using the photometric stereo. Given a point light source at position P j ∈ R 3 with illumination β j ∈ R, the imaging formula for a point i can be expressed as [46]: are the position and normal of the point, and I ij , ρ i ∈ R 3 are the intensity and albedo in the RGB color space, respectively. Given the captured images, the photometric stereo problem with near point light sources is to recover lighting positions and illuminations, the vertex position, albedo and normal of a point on the object. Parametric Face Model. 3DMM [6] is a widely used parametric model for human face geometry and albedo. We use 3DMM to build a coarse face model for further optimization. In general the parametric model represents the face geometry G ∈ R 3nv and albedo A ∈ R 3nv as where n v is the number of vertices of the face model; G ∈ R 3nv and A ∈ R 3nv are respectively the mean shape and albedo; α id ∈ R 100 , α exp ∈ R 79 and α albedo ∈ R 100 are corresponding coefficient parameters specifying an individual; B id ∈ R 3nv×100 , B exp ∈ R 3nv×79 and B albedo ∈ R 3nv×100 are principle axes extracted from some 3D face models by PCA. We use the Basel Face Model (BFM) [31] for B id and B albedo , and the FaceWarehouse [8] for B exp . Camera Model. We use the standard perspective projection to project the 3D face model to the image plane, which can be expressed as where q i ∈ R 2 is the location of vertex V i in the image plane, and R ∈ R 3×3 is the rotation matrix constructed from Euler angles pitch, yaw and roll, t ∈ R 3 is the translation vector, and Π : R 3 → R 2 is the perspective projection.
Construction of Real Dataset
Our real dataset is derived from photometric face images captured using a system consisting of three near point light sources (on the front, left and right) and a fixed camera. The dataset contains 84 subjects covering different races, genders and ages, with each subject captured under 29 different expressions. All images are captured at the resolution of 1600×1200. Similar to [9], we design an optimization-based method to reconstruct a 3D face model with rich details from a set of images captured under different near point lighting positions and illuminations. The method in [9] uses the face shape prior for lighting calibration, then estimates the [9] that chooses at least three reliable lights to update normals after handling shadows. This method can only update a part of normals due to the large area of shadows and only three input images. Thus it is not suitable for our situation. normals and recovers the depths in the image plane. Different from existing photometric stereo methods which always need more than three images, we have only three images as input and there may exist under-determined parts caused by shadows ( Fig. 2 (b)). To alleviate this problem, we utilize the parametric model to help recover the normals. From the recovered coarse shape and updated normals, we can recover the 3D face shape with fine details as shown in Fig. 2 (a). Our algorithm pipeline is shown in Fig. 3.
In order to provide a good initial 3D face shape for the following optimization, we first generate the coarse face model with three image inputs using the optimization-based inverse rendering method in [24]. Different from the problem setting in [24] which has only one input image, we have three face images that share the same shape, expression and albedo parameters but with different lighting conditions. After recovering the coarse face model, we calibrate the light positions P ∈ R 3×n and illuminations β ∈ R n using the calibration method proposed in [9]. Since the Lambertian surface model is invalid in regions under shadows, we use a simple filter to determine the available light sources L i for each triangle of the 3D face mesh by where P j ∈ R 3 is the position of the j th light source, and N f i , V f i ∈ R 3 are the normal and centroid of the i th triangle. We only use available light sources in L i for each triangle to Here the first term penalizes the deviation between the observed intensity I ij from the input images and the intensity resulting I ij evaluated with Eq. (1) using the updated albedô ρ f i and the updated normalN f i at each triangle centroid V f i , with F v representing the set of visible triangles on the initial face model. I ij is determined by projecting the centroid V f i onto the image plane and performing bilinear interpolation of its nearest pixels. The second term penalizes the deviation between the updated normalsN ∈ R 3×|Fv| on visible triangles and the corresponding normals N ∈ R 3×|Fv| on the initial face model. The last term regularizes the smoothness of the updated albedo, with Ω i denoting the set of visible triangles in the one-ring neighborhood of triangle i. We solve Eq. (6) via alternating minimization. Specifically, we optimizeN while fixingρ, and then optimizeρ while fixinĝ N. This process is iterated until convergence.
Vertex Recovery. After updating the triangle normalsN, we optimize the face shape as a height filed Z ∈ R m over the image plane to match the updated normals, where m is the number of the pixels covered by the projection of the coarse face model. We first transferN to pixel normals via the standard perspective projection. Then we compute Z via: . The process of our data augmentation methods. (a) We generate different geometries by randomly generating shape and expression parameters from 3DMM [6] and transfer albedos obtained in our real dataset. (b) We use non-rigid ICP [4] to fit the face models in Light Stage [29] with the mean shape, together with albedos in our real dataset to generate training data.
Here Z 0 ∈ R m is the initial height field obtained from the coarse face model. ∆Z ∈ R m denotes the Laplacian of the height field, and the third term in Eq. (7) is to regularize the smoothness of height field. N 0 , N ∈ R 3×m collect the pixel normals derived from the triangle normalsN and from the height field Z, respectively. Specifically, to derive the normal N p for a pixel p from the height field, we first project the pixel back into its 3D location V p by inverting the standard perspective projection. Then N p is computed as N p = e 2 × e 1 + e 3 × e 2 + e 4 × e 3 + e 1 × e 4 e 2 × e 1 + e 3 × e 2 + e 4 × e 3 + e 1 × e 4 , where e 1 , e 2 , e 3 , e 4 denote the vectors from V p to the 3D locations of p's four neighbor pixels in counter-clockwise order. This non-linear least squares problem is solved with Gauss-Newton algorithm.
Construction of Synthetic Dataset
To improve the coverage of our dataset, we further construct a synthetic dataset. We use albedos and 3D face models obtained from the Light Stage [29], a publicly available dataset containing 23 people with 15 different expressions and their corresponding high-resolution 3D models, as the ground truth. Then we render synthetic images under three random point light positions and illuminations calibrated from our real dataset using Eq. (1). Data augmentation. In order to fit the requirement of further network training we carry out a data augmentation process mainly from the following two aspects. On the one hand, we use the parametric model introduced in Sec. 3.1 to present different face geometry structures and albedos by randomly generating parameters {α id , α exp , α albedo }. We transfer the albedos obtained from our real dataset to such shape models with randomly generated shape parameters, since our initial coarse model is based on the same topology. On the other hand, to have accurate parametric models as ground truth for network training on our synthetic dataset, we register a neutral parametric model to 3D face models obtained from the Light Stage using the non-rigid ICP [4], and find closest points between these two types of models as their correspondence. We further transfer albedos in our real dataset according to this correspondence. After generating those mentioned models, we render three images for each model with point light sources calibrated in our real dataset. The process is shown in Fig. 4.
Deep Photometric Stereo for 3D Faces
The optimization-based method described in Sec. 3.2 can recover high-quality facial geometry from several face images captured under different point lighting conditions, but the procedure is time-consuming and requires at least three images as input due to the ambiguity of geometry and albedo. To alleviate these problems, we propose a CNNbased method to learn high-quality facial details from an arbitrary number of face images captured under different near point lighting conditions. Similar to the procedure in Sec. 3.2, we use a two-stage network to regress a coarse face model represented with 3DMM and a high-quality normal map respectively. With the power of CNN and our wellconstructed dataset, our method can efficiently recover highquality facial geometry even with a single image, which is not possible for optimization-based photometric stereo methods and other deep photometric stereo methods that do not utilize facial priors. Better results can be obtained with more input images. The network structure is shown in Fig. 5.
Proxy Estimation Network
At the first stage, we learn the 3DMM parameters and pose parameters directly from a single image to obtain a coarse face model as a proxy for the second stage with a ResNet-18 [19]. The set of regressed parameters is represented by χ = {α id , α exp , pitch, yaw, roll, t}. To train the proxy estimation network, we use both the real data and the synthetic data with ground truth parameters as described in Sec. 3. To enrich the data, we also synthesize 5000 images using the data augmentation strategy described in Sec. 3.3. The connection between the two modules is a rendering layer which generates a coarse normal map with the estimated proxy parameters.
We use two loss terms to evaluate the alignment of dense facial geometry and sparse facial features respectively. The first term computes the distance between the recovered geometry and the ground truth geometry as follows: where G is the geometry recovered with Eq.
(2) and G gt is the ground truth geometry. As facial landmarks convey the structural information of the human face, we design the second term to measure how close the projected 3D landmark vertices are to the corresponding landmarks in the imge: where L is the set of landmarks, q i is a detected landmark position in the input image, and V i is the corresponding vertex location in the 3D mesh. The final loss function is a combination of the two loss terms: where w lan is a tuning weight.
Normal Estimation Network
The recovered geometry at the first stage lacks facial details due to the limited representation ability of 3DMM. To recover the facial geometry with final details, we learn an accurate normal map by utilizing the appearance information from face images and the geometric information from the proxy model obtained at the first stage. Specifically, the input to our normal estimation network is several face images and the normal map rendered with parameters obtained from our proxy estimation network, and the output is a refined normal map that contains high-quality facial details. The network architecture is similar to PS-FCN [12], which consists of a shared-weight feature extractor, an aggregation layer, and a normal regression module. One notable difference is that PS-FCN requires lighting information as input, while our normal estimation network requires proxy geometry as input to utilize facial priors. The loss function for normal estimation network is: where M is the set of all pixels in the face region covered by the coarse face model, n i andn i is the estimated and ground truth normals at pixel i, respectively. With the estimated accurate normal map, we then obtain a high-quality face model using the vertex recovery method as explained in Sec. 3.2.
Implementation Details
To evaluate the proposed method, we select 77 subjects from our captured dataset and 18 subjects from the Light Stage dataset to train our networks and use the other subjects for testing, yielding 95 subjects with 2503 samples for training and 12 subjects (7 from our constructed dataset and 5 from the Light Stage dataset) with 278 samples for testing. We implement our method in PyTorch [30] and optimize the networks' parameters with Adam solver [25]. We first train the proxy estimation network for 200 epochs with a batch igure 6. Ablation studies that compare the proposed method with two approaches that exclude the proxy estimation module and data augmentation respectively. For each method, we show the estimated normal maps and the corresponding angular error maps. And we use the leftmost images for single image input.
Ablation Study
To validate the design of our architecture, we compare the proposed method with alternative strategies that exclude some components. First, we demonstrate the necessity of the proxy estimation network by conducting an experiment that excludes the proxy estimation module and estimates the normal map with only face images as input in the normal estimation network. Secondly, we show the effectiveness of data augmentation for training the proxy estimation network, with another experiment that trains the proxy estimation network without the 5000 synthesized images derived from data augmentation. The comparison results on test set for both experiments are shown in Tab. 2 and Fig. 6. We can see that excluding each component will cause a drop performance for both three image inputs and single image input.
Comparisons
Comparison with deep learning-based photometric stereo. We further compare our network with UPS-FCN [12] and SDPS-Net [11] that solve the uncalibrated photometric stereo problem. Both methods estimate normals for general objects under different directional lights, whereas we focus on the human face under different point lighting conditions. We take three images with different uncalibrated lighting conditions from the test set as input and compare the accuracy of the output normal map according to the angle between the output and the ground truth normal map. We show the results in Table. 3 and Fig. 7. It can be observed from Table. 3 that all methods perform better on the Light State test data, potentially due to noises in the real captured data. On the other hand, our method performs better than the other two methods both qualitatively and quantitatively, due to the near point lighting hypothesis and the face prior information. Comparison with 3D face reconstruction from a single image. In order to evaluate the quality of our reconstructed 3D face models, we compare our deep learning-based reconstruction method with some state-of-the-art detail-preserving reconstruction methods from a single image. Most existing methods focus on reconstruction from an "in-the-wild" image and simulate the environment lighting condition using the spherical harmonics (SH) basis functions, which performs poorly in simulating the near point lighting condition due to a large area of shadows. For a fair comparison, we take only one photometric stereo image as input to our network and one image captured in normal light as input to compared methods. The results shown in Fig. 8 demonstrate that our method can better recover facial details such as wrinkles and eyes. For quantitative evaluation, we compute a geometric error for each reconstructed model, by first applying a transformation with seven degrees of freedom (six for rigid transformation and one for scaling) to align it with the ground-truth model, and then computing its point-to-point distance to the ground-truth model. The average geometric errors of Extreme3D [41], DFDN [10] and our method on test set are 1.77, 1.54, 0.86 respectively, with four examples shown in Fig. 9. It can be seen that our method significantly outperforms other methods due to our accurate simulation of the near point lighting condition.
Conclusion
We proposed a lightweight photometric stereo algorithm combining deep learning method and face shape prior to . Input Pix2Vertex DFDN Extreme3D Ours Figure 8. Qualitative comparison between Pix2vertex [36], DFDN [10], Extreme3D [41] and our method. Other methods use the left image on the top row as input while ours uses the right image as input. Our method can reconstruct more accurate face models with fine details such as wrinkles and eyes.
Input
GT Extreme3D DFDN Ours 0mm 10mm Figure 9. Reconstructed results and geometric error maps of Ex-treme3D [41], DFDN [10] and ours. Other methods use the left image in the first column as input while ours uses the right image as input.
reconstruct 3D face models containing fine-scale details. Our two-stage neural network estimates a coarse face shape with structure and a normal map with details, followed by an optimization method to recover the final facial geometry. For the network training, we construct a real dataset across different races, genders and ages, and a data augmentation is applied to enrich the dataset. Extensive experiments demonstrated that our method outperforms state-of-the-art deep learning-based photometric stereo methods and 3D face reconstruction methods from a single image.
Single Input
Two Inputs Three Inputs Inputs GT S1 S2 S3 S1&S2 S2&S3 S3&S1 S1&S2&S3 Figure 13. Estimated normal maps. For each model we show normal maps and their corresponding error maps with different kinds of inputs. S1, S2, S3 represent the leftmost, the upper-right corner and the lower-right corner image respectively. | 2020-03-30T01:00:37.259Z | 2020-03-27T00:00:00.000 | {
"year": 2020,
"sha1": "f7c0f4c42f424cc9e0cbed1cc2bf1d84b6c3009c",
"oa_license": null,
"oa_url": "https://orca.cardiff.ac.uk/130641/1/FacePSNet.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f7c0f4c42f424cc9e0cbed1cc2bf1d84b6c3009c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
261594520 | pes2o/s2orc | v3-fos-license | CAR Protects Females from Diet-Induced Steatosis and Associated Metabolic Disorders
Non-Alcoholic Fatty Liver Disease (NAFLD) is the most common cause of chronic liver disease worldwide, affecting 70–90% of obese individuals. In humans, a lower NAFLD incidence is reported in pre-menopausal women, although the mechanisms affording this protection remain under-investigated. Here, we tested the hypothesis that the constitutive androstane nuclear receptor (CAR) plays a role in the pathogenesis of experimental NAFLD. Male and female wild-type (WT) and CAR knock-out (CAR−/−) mice were subjected to a high-fat diet (HFD) for 16 weeks. We examined the metabolic phenotype of mice through body weight follow-up, glucose tolerance tests, analysis of plasmatic metabolic markers, hepatic lipid accumulation, and hepatic transcriptome. Finally, we examined the potential impact of HFD and CAR deletion on specific brain regions, focusing on glial cells. HFD-induced weight gain and hepatic steatosis are more pronounced in WT males than females. CAR−/− females present a NASH-like hepatic transcriptomic signature suggesting a potential NAFLD to NASH transition. Transcriptomic correlation analysis highlighted a possible cross-talk between CAR and ERα receptors. The peripheral effects of CAR deletion in female mice were associated with astrogliosis in the hypothalamus. These findings prove that nuclear receptor CAR may be a potential mechanism entry-point and a therapeutic target for treating NAFLD/NASH.
Introduction
Metabolic diseases include many energy metabolism disorders, such as obesity and type II diabetes.Metabolic diseases represent a global health issue approaching epidemic proportions.Non-Alcoholic Fatty Liver Disease (NAFLD) emerges as the most common cause of chronic liver disease worldwide, affecting 30% of the general population and 70 to 90% of obese individuals [1].NAFLD is characterized by a reversible accumulation of lipid droplets in the liver, which in some cases can progress to more severe states such as Non-Alcoholic Steato-Hepatitis (NASH), cirrhosis, or hepatocellular carcinoma.NAFLD is a multifactorial pathology resulting from genetic and/or environmental factors such as lack of physical exercise and high fat and carbohydrate diets.Moreover, several studies suggest a sexual dimorphism in the prevalence of NAFLD with a higher susceptibility in men compared to pre-menopausal women [2].Despite the important prevalence of hepatic diseases, there is no approved treatment for NAFLD.Therefore, it is essential to elucidate the molecular mechanisms involved in the pathogenesis of NAFLD to seek new therapeutic targets while considering the sexual dimorphism of this disease.Nuclear receptor CAR (Constitutive Androstan Receptor), mainly expressed in the liver, has first been described for its role in exogenous detoxification processes and endogenous catabolism of steroid and thyroid hormones [3,4].However, many studies have highlighted the involvement of CAR in major hepatic metabolic pathways such as gluconeogenesis and lipogenesis, thus revealing a role in regulating energy homeostasis [5,6].Given its role at the crossroads
Oral Glucose Tolerance Tests
At 4 and 10 weeks of diet, oral glucose tolerance tests were performed on conscious mice.Mice were fasted for 6 h and fasted glycaemia was assessed with a drop of tail vein blood using an AccuCheck Performa glucometer (Roche Diagnostics, Indianapolis, IN, USA).Mice then received an oral glucose load (2 g/kg body weight) and blood glucose was assessed as described above at −15, 0, 15, 30, 45, 60, 90, and 120 min.
Liver Histological Sections and Scoring
Paraformaldehyde-fixed, paraffin-embedded liver tissue was sliced into 3-µm sections, deparaffinized, rehydrated, and stained with hematoxylin-eosin or Sirius red for histopathological analysis.Slides with two sections of a single hepatic lobe were digitized using a Mirax scanner from 3DHistech.Sections were visualized using CaseViewer software version 2.4.0.119028 and were individually scored using a NAFLD scoring system (NAS) adapted from Kleiner et al. [7].ImageJ public domain software (ImageJ Website, https://imagej.net/ij/,accessed on 5 January 2023) was used to assess the area covered by lipid droplets or Sirius red staining.A total of four variables were qualitatively assessed and ranked with a score: hepatocellular steatosis, liver inflammation, lobular fibrosis.A detailed summary of the criteria for score assignments is presented in Table S1.
Hepatic Neutral Lipids Analysis
Hepatic neutral lipid contents were determined at the end of the experiment as described previously [8].Liver samples were homogenized in methanol/5 mM EGTA (2:1, v/v); lipids were extracted with chloroform/methanol/water (2.5:2.5:2.1, v/v/v), in the presence of glyceryl trinonadecanoate, stigmasterol, and cholesteryl heptadecanoate (Sigma Saint-Quentin Fallavier, France) as internal standards.Triglycerides, free cholesterol, and cholesterol esters were analyzed by gas-liquid chromatography on a Focus Thermo Electron system.
Plasma Analysis
Blood samples were collected following a 6 h fast for insulinemia assays or in fed-state before euthanasia for aspartate transaminase (AST) and alanine transaminase (ALT) assays.Blood was collected from the sub-mandibular vein using a lancet into lithium-heparin coated tubes (BD Microtainer).Plasma was obtained by centrifugation (1500× g, 10 min, 4 • C) and stored at −80 • C. Fasted insulinemia was assayed using the ultrasensitive mouse insulin ELISA kit (Crystal Chem, Elk grove Village, IL, USA).AST and ALT plasmatic levels were assayed using a PENTRA 400 biochemical analyzer (Anexplo facility, Toulouse, France).
Microarray and qPCR Gene Expression Studies
RNA was extracted from liver samples and qPCR assays were performed as previously described [9] for gene expression analysis of Tnfα, Il1β, Pdgfr1β, Col1a1, Tgfb1, Tgfbr1, Acta2.Primers used for qPCR assays are reported in Table S2.Microarray analysis on liver samples (n = 6 per group) was performed at the GeT-TRiX facility (GénoToul, Génopole Toulouse) using Agilent Sureprint G3 Mouse GE v2 microarrays (8 × 60 K, design 074809) following the manufacturer's instructions.Microarray data were analyzed using R (R Core Team, 2018) and Bioconductor packages [10].Raw data (median signal intensity) were filtered, log2 transformed, corrected for batch effects (microarray washing bath and labelling serials) and normalized using the quantile method [11].A model was fitted using the limma lmFit function [12].Pair-wise comparisons between biological conditions were applied using specific contrasts.A correction for multiple testing was applied using the Benjamini-Hochberg procedure [13] to control the False Discovery Rate (FDR).Probes with FDR ≤ 0.05 were considered to be differentially expressed between conditions.Hierarchical clustering was applied to the samples and the differentially expressed probes using the 1-Pearson correlation coefficient as distance and Ward's criterion for agglomeration.The clustering results are illustrated as a heatmap of expression signals.The enrichment of KEGG pathways was evaluated using Enrichr (Biotools).The differentially expressed gene list of CAR−/− females was in silico compared in BaseSpace Correlation Engine (Illumina, NextBio) to liver signatures of female ERα−/− knock-out mice fed an HFD (D12492, Research Diets) for 10 weeks (GSE95283).
Brain Immunohistochemistry
Brains were fixed in PFA 4% solution and immersed in sucrose 15% for 24 h followed by sucrose 30% (n = 5 per group).Brains were then snap-frozen and stored at −80 • C. Slices (20 µm) were obtained using a cryostat, and immunohistochemistry was performed after PBS washes.Slices were added with blocking solution (PBS, triton 0.5%, horse serum 20%) at room temperature for 1 h.Primary antibodies (Table S3) were diluted in a blocking solution, and slices incubated overnight at 4 • C.After PBS washes, a secondary antibody was added in PBS for 2 h at room temperature.After PBS washes, slices were mounted using Vectashield containing DAPI.For all quantifications, 20X Z-stack images (Z = 12 to 15 planes, each of 1 µm) were analyzed using Fiji (version 2.9.0).Two slices were examined for each mouse to quantify signals in constant regions of interest (ROI), identified by DAPI maps: arcuate (AN) and paraventricular (PVN) nuclei of the hypothalamus, cortex (CTX), dentate gyrus (DG), and white matter (WM).Before analysis, all Z-stacks images were combined (Z-project, sum) using Fiji.For GFAP quantification: images were converted to RGB stack format.The signal threshold was adjusted to 200 units for each image.The area of GFAP signal was calculated, setting threshold sensitivity equal for each image.GFAP data are expressed as a percentage of ROI total pixels.For Iba1 quantification, a cell counter tool was used to calculate the total number of DAPI cells and the number of Iba1+ cells in each ROI.Data are expressed as a percentage (Iba1+/tot DAPI+) × 100.
Data Representation and Statistical Analysis
All data are presented as the mean +/− standard error of the mean (SEM).Differential effects were analyzed on GraphPad Prism (version 9.00; GraphPad Software) to evaluate the effect of CAR deletion combined with high-fat diet in male or female mice.For each sex, the WT group was compared to the CAR−/− group using two-tailed Student's t-test.
To compare histological score ranks, the Mann-Whitney test was used.A p-value < 0.05 was considered significant.
CAR Deletion Exacerbates HFD-Induced Body Weight Gain Only in Female Mice
A weekly follow-up revealed a higher body weight in CAR−/− than in WT males from week 2 until week 14 (Figure 1A).Male mice present a total weight gain of 25.7 g for WT and 26.2 g for CAR−/− (Figure 1A).In contrast, CAR−/− female mice gain 30.5 g compared to 14.5 g in WT (Figure 1B).At 16 weeks, the mean weight was 47.3 g for WT males and 49.5 g for CAR−/− males.CAR−/− females reached 49.2 g compared to 33.9 g in WT females.No difference was observed in the food and water intake of WT and CAR−/− mice (Supplementary Figure S1A-D).Next, the subcutaneous and perigonadal white adipose tissues (WAT) were harvested and weighed (Figure 1D).Subcutaneous WAT was 0.049 g in WT compared to 0.073 g in CAR−/− females.Perigonadal WAT was increased from 0.052 g in WT to 0.068 g in CAR−/− females.The adiposity index was calculated taking into account perigonadal and subcutaneous adipose tissue was greater in CAR−/− female mice compared to WT.
CAR Deletion Exacerbates HFD-Induced Hyperglycemia and Hyperinsulinemia
After 10 weeks of diet, glucose tolerance was assessed using an oral glucose tolera test (OGTT).WT and CAR−/− males presented equal glucose tolerance as indicated by equal OGTT area under the curve (AUC) (Figure 2A,C).Conversely, CAR−/− female m
CAR Deletion Exacerbates HFD-Induced Hyperglycemia and Hyperinsulinemia
After 10 weeks of diet, glucose tolerance was assessed using an oral glucose tolerance test (OGTT).WT and CAR−/− males presented equal glucose tolerance as indicated by an equal OGTT area under the curve (AUC) (Figure 2A,C).Conversely, CAR−/− female mice present decreased AUC, revealing better glucose tolerance than WT mice (Figure 2B,C).Fasted glycaemia and insulinemia levels were assessed, revealing more important hyperglycemia and hyperinsulinemia in CAR−/− mice than WT in both male and female mice (Figure 2D,E).
CAR Deletion Exacerbates HFD-Induced Hepatic Steatosis and Liver Injury
After 16 weeks of HFD, liver samples were harvested for histological sectioning matoxylin-eosin staining, and scoring of hepatocellular steatosis.Analysis of the area ered by the lipid droplets showed that the steatosis was increased two times mo CAR−/− females (↑19.12%)than in CAR−/− males (↑10.29%).WT males presented ma microvesicular hepatocellular steatosis (Figure 3A), with a mean steatosis score of CAR−/− males showed a higher mean steatosis score of 5.00, with cells presenting m micro/macrovesicular or microvesicular steatosis (Figure 3A).In WT females, a sco microvesicular steatosis was observed in all animals, which is considerably lower that of WT males (Figure 3A).CAR−/− females presents a more severe steatosis, w mean score of 4.80, comparable to that described for CAR−/− males (Figure 3A).
Hepatic neutral lipids were analyzed to support the histological observat CAR−/− males present increased levels of cholesterol esters compared to WT mice (Fi 3B).In females, both cholesterol esters and triglycerides increase in CAR−/− mice (Fi 3B).
Subsequently, we determine whether the observed steatosis presents progres markers toward a more severe form, such as NASH.The presence of foci of mononu and polymorphonuclear cells assessed inflammation scores.CAR−/− male mice prese a discrete score-1 inflammation with changes comparable in appearance and distribu to WT mice (Figure 3C).Score-1 inflammation was also observed in WT and CAR−/ males (Figure 3C).Despite no morphological differences, RNA expression of pro-infl matory cytokine Tnfα and Il1β was increased in CAR−/− mice (Figure 3C).
CAR Deletion Exacerbates HFD-Induced Hepatic Steatosis and Liver Injury
After 16 weeks of HFD, liver samples were harvested for histological sectioning, hematoxylin-eosin staining, and scoring of hepatocellular steatosis.Analysis of the area covered by the lipid droplets showed that the steatosis was increased two times more in CAR−/− females (↑19.12%)than in CAR−/− males (↑10.29%).WT males presented mainly microvesicular hepatocellular steatosis (Figure 3A), with a mean steatosis score of 3.33.CAR−/− males showed a higher mean steatosis score of 5.00, with cells presenting mixed micro/macrovesicular or microvesicular steatosis (Figure 3A).In WT females, a score 2 microvesicular steatosis was observed in all animals, which is considerably lower than that of WT males (Figure 3A).CAR−/− females presents a more severe steatosis, with a mean score of 4.80, comparable to that described for CAR−/− males (Figure 3A).
Hepatic neutral lipids were analyzed to support the histological observations.CAR−/− males present increased levels of cholesterol esters compared to WT mice (Figure 3B).In females, both cholesterol esters and triglycerides increase in CAR−/− mice (Figure 3B).
Subsequently, we determine whether the observed steatosis presents progression markers toward a more severe form, such as NASH.The presence of foci of mononuclear and polymorphonuclear cells assessed inflammation scores.CAR−/− male mice presented a discrete score-1 inflammation with changes comparable in appearance and distribution to WT mice (Figure 3C).Score-1 inflammation was also observed in WT and CAR−/− females (Figure 3C).Despite no morphological differences, RNA expression of pro-inflammatory cytokine Tnfα and Il1β was increased in CAR−/− mice (Figure 3C).To further characterize the hepatic impact of CAR deletion, plasmatic levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were assessed to evaluate liver injury.Both CAR−/− males and females, presented increased levels of ALT and AST, suggesting underlying steatohepatitis (Figure 3D).The presence of liver fibrosis was assessed by analysis of histological slices stained with Sirius red (Figure 4A).The analysis of fibrosis area coverage demonstrated no variation between CAR−/− and WT mice, irrespective of their gender (Figure 4B).Fibrosis scoring revealed a comparable score-1 peri-sinusoidal or peri-portal fibrosis in all groups and no transcriptional changes in the expression of the Pdgfr1β, Tgfb1, Tgfbr1, and Acta2 markers of fibrosis (Figure 4B).Only the expression of the marker Col1a1 was increased in females (Figure 4B).To further characterize the hepatic impact of CAR deletion, plasmatic levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were assessed to evaluate liver injury.Both CAR−/− males and females, presented increased levels of ALT and AST, suggesting underlying steatohepatitis (Figure 3D).
The presence of liver fibrosis was assessed by analysis of histological slices stained with Sirius red (Figure 4A).The analysis of fibrosis area coverage demonstrated no variation between CAR−/− and WT mice, irrespective of their gender (Figure 4B).Fibrosis scoring revealed a comparable score-1 peri-sinusoidal or peri-portal fibrosis in all groups and no transcriptional changes in the expression of the Pdgfr1β, Tgfb1, Tgfbr1, and Acta2 markers of fibrosis (Figure 4B).Only the expression of the marker Col1a1 was increased in females (Figure 4B).
Dimorphic Impact of CAR Deletion on Hepatic Transcriptome
Next, a microarray analysis was performed to assess the impact of CAR deletion on the hepatic transcriptome of HFD-fed mice.A principal component analysis (PCA) revealed an important effect of CAR deletion on the hepatic transcriptome in males and females (Figure 5A), with significant discrimination of WT and CAR−/− mice on the first principal component, representing 28.6% of the total variance (Dim1).Clustering of male and female mice is reported on the second principal component (Dim2, 22.7% of total variance) with a convergence of CAR−/− females towards the male profile.
A heat map of 3354 probes selected as differentially expressed genes was plotted (Figure 5B).Nine clusters are distinguishable; cluster 1 is down-regulated explicitly in CAR−/− males, whereas cluster 5 is specifically up-regulated in CAR−/− females.Clusters 4 and 6 are impacted by sex, whereas clusters 2, 8, and 9 are by genotype.The expression profiles of clusters 3 and 7 in CAR−/− female mice are similar to those of males.Enrichment analysis of Kegg pathways was performed for clusters 3 and 7.The 7 most significant pathways are presented in Supplementary Table S4.
Venn diagrams were plotted to compare variations between WT and CAR−/− in both sexes (Figure 5C,D).CAR−/− males and females present 1643 and 1896 up-regulated genes compared to WT.Only 810 up-regulated genes are common to CAR−/− males and females.Similarly, CAR−/− males and females present 1550 and 1788 down-regulated genes compared to WT, and only 367 down-regulated genes are common to both sexes.Thus, most up and down-regulated genes in CAR−/− mice are not common to both sexes, revealing a dimorphic impact of CAR deletion on the hepatic transcriptome.Enrichment analysis of up-regulated male genes revealed pathways such as osteoclast differentiation and natural killer cytotoxicity (Figure 5C).In CAR −/− females, genes involved in the NAFLD and thermogenesis pathways are up-regulated (Figure 5C).In males, steroid hormone biosynthesis and amino acid metabolism are negatively affected by CAR deletion (Figure 5D).Protein processing in the endoplasmic reticulum (ER) and spliceosome are the main pathways represented among down-regulated female genes (Figure 5D).The 36 genes linked to the NAFLD pathway, represented among the specifically up-regulated genes in CAR−/− females (Figure 5C), are in Figure 5E.These genes were traced in the NAFLD Kegg pathway map (hsa04932), representative of a stage-dependent progression of NAFLD, and are mainly involved in β-oxidation (Ndufa-b-s-c, Sdha-b, Cyc1, Uqcrq-c-s, Cox1-5-6-7-8) and inflammation (Jun) which are mechanisms disrupted in NASH conditions.Genes involved in simple steatosis mechanisms are specifically up-regulated genes in CAR−/− females (Gsk3, Prkab2).
Positive correlation between transcriptomic signatures of CAR−/− and ERα−/− females.The hepatic transcriptomic signature of CAR−/− males and females was compared to other hepatic profiles using Base Space Correlation Engine (Illumina).The transcriptomic signature of CAR−/− females positively correlated with ERα knock-out females fed an HFD (Figure 6A).Of up-regulated genes, 373 are common to CAR−/− females and ERα females (Figure 6B).Positive correlation between transcriptomic signatures of CAR−/− and ERα−/− females.The hepatic transcriptomic signature of CAR−/− males and females was compared to other hepatic profiles using Base Space Correlation Engine (Illumina).The transcriptomic signature of CAR−/− females positively correlated with ERα knock-out females fed an HFD (Figure 6A).Of up-regulated genes, 373 are common to CAR−/− females and ERα females (Figure 6B).Enrichment analysis revealed that these genes are involved in NAFLD, and oxidative phosphorylation is disrupted in NASH conditions (Figure 6D).On the other hand, 345 down-regulated genes are common to CAR−/− and ERα females (Figure 6C) and are primarily involved in the spliceosome pathway and linoleic acid metabolism (Figure 6E).
CAR Deletion Associates with Hypothalamic Astrogliosis in HFD-Fed Females: Initial Evidence
We have previously demonstrated that CAR deletion leads to recognition memory impairment and increased anxiety-like behavior in males [14].In this study, our aim was to investigate the response of the mouse brains to a HFD in order to understand how the combination of the CAR deletion and a HFD influences brain function.Specific brain regions were analyzed to assess the possible impact of CAR deletion and HFD on central nervous system homeostasis, initially exploring glial cells at a specific brain border.GFAP reactivity in the arcuate (AN) and paraventricular (PVN) nuclei of the hypothalamus was increased in CAR−/− females (Figure 7A,B).Enrichment analysis revealed that these genes are involved in NAFLD, and oxidative phosphorylation is disrupted in NASH conditions (Figure 6D).On the other hand, 345 down-regulated genes are common to CAR−/− and ERα females (Figure 6C) and are primarily involved in the spliceosome pathway and linoleic acid metabolism (Figure 6E).
CAR Deletion Associates with Hypothalamic Astrogliosis in HFD-Fed Females: Initial Evidence
We have previously demonstrated that CAR deletion leads to recognition memory impairment and increased anxiety-like behavior in males [14].In this study, our aim was to investigate the response of the mouse brains to a HFD in order to understand how the combination of the CAR deletion and a HFD influences brain function.Specific brain regions were analyzed to assess the possible impact of CAR deletion and HFD on central nervous system homeostasis, initially exploring glial cells at a specific brain border.GFAP reactivity in the arcuate (AN) and paraventricular (PVN) nuclei of the hypothalamus was increased in CAR−/− females (Figure 7A,B).In these specific conditions, astrogliosis is specific to the hypothalamus.Although limited to the quantification of fluorescence in the total tissue, GFAP immunoreactivity was not observed in other regions such as the Cortex (CTX), hippocampus (e.g., dentate gyrus), and the white matter (WM) (Figure 7C).Total GFAP expression was unchanged in males (Figure S2A).With the method used, tissue IBA1 microglial reactivity was unchanged across experimental conditions except for the WM in males (Figure S2B) and AN in females (Figure S2C).These results suggest regionally limited pro-inflammatory changes, indicating a possible and specific involvement of hypothalamic astrocytes in female mice.
Discussion
The central focus of this study was to investigate the gender-specific variations in the function of the nuclear receptor CAR in response to HFD, a known trigger of metabolic stress.A prevailing trend in animal models is that males tend to be more prone to developing obesity, insulin resistance, hyperglycemia, and steatosis compared to females upon exposure to dietary challenges [15].The outcomes of this study highlight the critical role of the nuclear receptor CAR in safeguarding females against the onset of metabolic disorders.
These revelations shed light on a more pronounced detrimental impact of CAR deficiency in females in response to an HFD.These effects encompass weight gain, adiposity, steatosis development, and hypothalamic astrogliosis.Conversely, the implications of CAR absence in males are predominantly linked to hyperglycemia, hyperinsulinemia, and hepatic injury.Prior research, predominantly conducted in males, has showcased that the activation of the CAR receptor through pharmacological agonists can ameliorate glucose and insulin tolerance and alleviate hepatic steatosis in animals afflicted with metabolic disorders [6,16].Our present study extends this understanding by unveiling a more robust influence of the CAR receptor in females relative to males when exposed to an HFD.
An unexpected result concerns the enhanced glucose tolerance observed in CAR−/− females compared to control even though they exhibited elevated levels of blood glucose and insulin.This improved glucose tolerance was already observed in a previous study, where we explored the role of nuclear receptor CAR under standard dietary conditions through the characterization of both WT and CAR−/− male and female mice [9].The underlying mechanisms driving this improved glucose tolerance remain complex, potentially involving intricate physiological interactions such as heightened glucose uptake or increased insulin sensitivity in distinct tissues such as the liver, muscles, or adipose tissue [17].A more targeted investigation utilizing mice with CAR inactivation in specific tissues could provide a deeper comprehension of this intriguing observation.
The intricate involvement of CAR in the development of NAFLD and NASH has been a subject of conflicting findings in existing research.Activation of CAR has been associated with both beneficial effects, such as mitigating hepatic steatosis, and adverse outcomes, such as exacerbating hepatic fibrosis and hepatocarcinogenesis, depending on the experimental context [18,19].Most of these investigations have been centered around males, leaving a gap in understanding CAR's dimorphic impact.Our study contributes to filling this gap by revealing CAR's pivotal role in safeguarding female mice against NAFLD in the context of an HFD.These findings align with broader research highlighting the inherent protection observed in pre-menopausal women, which tends to diminish following menopause.Consistent with our results, a recent meta-analysis involving 54 studies reported a 19% lower risk of NAFLD in women compared to men [20].
The intricate connection between metabolism and the central nervous system has garnered increasing attention due to its potential to influence various physiological processes.Our study takes a comprehensive approach by examining sex and region-specific adaptations within the brains of HFD-fed CAR−/− mice in comparison to their WT counterparts.This approach was guided by our earlier research, which indicated that CAR deletion is associated with notable adaptations in recognition memory and anxiety-like behavior, along with observable histological changes in glial cells [14].This endeavor is particularly relevant within the context of a HFD, known to exert substantial effects on both metabolism and neural function [21].Through this study, we specifically focus on histological brain markers that indicate modifications in astrocytes, with a particular emphasis on their response to an HFD.The significance of our findings is underscored by the identification of distinct histological changes in astrocytes within the hypothalamic paraventricular and arcuate nuclei of female CAR−/− mice exposed to a high-fat diet.In the hypothalamus, astrocytes perform various functions that can directly affect energy homeostasis, such as nutrient sensing and transport [22].In addition, HFD-induced metabolic stress induces astrogliosis, described as a protective homeostatic response that restrains food intake in response to the diet [23].The alterations observed in astrocytes suggest a potential link between CAR, neural adaptations, and the intricate regulation of metabolic processes.In essence, our study offers a novel perspective by bridging the gap between CAR's known role in metabolic regulation and its potential influence on neural mechanisms.This exploration sheds light on the complex interplay between CAR, the central nervous system, and the metabolic responses observed in females exposed to a high-fat diet.By investigating these interactions, we contribute to a more holistic understanding of how CAR exerts its effects across physiological domains, which could have broader implications for addressing metabolic disorders in a comprehensive manner.
To deepen our comprehension of the underlying mechanisms responsible for CAR's protective role, particularly in females, we executed an in-depth analysis of hepatic transcriptomes in both WT and CAR−/− mice subjected to an HFD.The comparison of transcriptomes between HFD-fed CAR−/− males and females revealed a noteworthy convergence, as indicated by both the principal component analysis (PCA) in Figure 5A and the heatmap representation in Figure 5B.Strikingly, the absence of CAR in females resulted in a transcriptomic profile more closely aligned with that of males.Notably, the impact of CAR deletion appeared to exert a more pronounced effect on females than on males, evident from the divergent numbers of up and down-regulated genes (Figure 5C,D), which align with the more severe metabolic disorders observed in CAR−/− females (Figures 1B and 2B).The phenomenon of female liver masculinization resulting from CAR expression loss has been previously documented in various studies.This effect is attributed to the alteration of the 6α/15α-OH testosterone ratio, a recognized biomarker associated with liver masculinization [24].This transition towards a more masculine liver pattern may involve the Stat5b pathway, which is acknowledged for its role in regulating liver masculinization or feminization [25] and its potential interaction with CAR [9].So, everything happens as if the invalidation of CAR causes females to lose their protection against metabolic disorders, rendering them equally susceptible as males.
Specifically, clusters 3 and 7, highlighted in the heatmap, demonstrated a distinct disruption in CAR−/− females (Figure 5B).Analysis of these clusters underscored the trend of the female transcriptome drawing closer to the male profile in the absence of CAR.Remarkably, among the array of insights, 57 genes associated with thermogenesis exhibited specific up-regulation in CAR−/− female mice (Figure 5C).This intriguing finding suggests a potential perturbation in thermogenesis that could contribute to the observed metabolic phenotype in CAR−/− females particularly the improved glucose tolerance [26].
Unraveling the mechanisms behind this potential perturbation in thermogenesis not only sheds light on the intriguing biology of CAR−/− females but also presents a promising avenue for understanding how thermogenesis and glucose tolerance are interconnected in the broader context of metabolic health Moreover, upon a more comprehensive analysis, a notable enrichment of genes associated with the NAFLD pathway was observed among the up-regulated genes, with a particularly pronounced effect in CAR−/− females (Figure 5C).These identified genes intricately align with the well-established NAFLD Kegg pathway map, effectively delineating a cascade from the initial stages of hepatic lipid accumulation (NAFLD) to the more advanced state of non-alcoholic steatohepatitis (NASH).This pathway involves multifaceted biological processes, encompassing inflammation, oxidative phosphorylation, and perturbations in lipid metabolism.This outcome is in line with the evaluation of gene expression related to fibrosis, which reveals an elevated expression of the Col1a1 gene associated with collagen synthesis, particularly evident in females (Figure 4C) even if the Sirius red staining of liver slices did not reveal changes in fibrosis levels between WT and CAR−/− mice.All this underscores the potential significance of the CAR receptor in influencing the critical metabolic NAFLD-to-NASH cascade processes.This provides an intriguing avenue for future investigations into the intricate interrelationships between CAR, hepatic fibrosis, and the dynamic shifts occurring in NAFLD progression, particularly in the context of gender disparities.The cumulative evidence underscores the potential substantive role played by the CAR receptor in steering the pivotal and intricate metabolic processes that orchestrate the progression from NAFLD to NASH.Consistent with our results, a recent meta-analysis involving 54 studies reported a 19% lower risk of NAFLD in women compared to men.However, once NAFLD is established, women share a similar risk of advancing to NASH, along with a 37% higher risk of advanced fibrosis [20].
Using the Base Space Correlation Engine analysis, we evidenced a positive correlation between HFD-fed CAR−/− mice and HFD-fed ERα−/− mice (Figure 6A).This correlation provides a potential explanation for the heightened severity of steatosis observed in CAR−/− HFD-fed female mice compared to the control group (Figure 3A,B).Intriguingly, this correlation sheds light on the co-regulation of 373 genes being up-regulated and 345 genes being down-regulated in the absence of either CAR or Erα.Remarkably, these co-regulated genes play integral roles in the previously described metabolic pathways of thermogenesis and NAFLD (Figure 6D).This insight reveals a lack of compensatory regulation between these two nuclear receptors, emphasizing their individual indispensability.This phenomenon may arise from shared response elements on the promoters of the aforementioned genes, implying a direct mechanism.Moreover, the involvement of CAR in estrogen catabolism [4], prompts the proposal of an indirect mechanism, suggesting that CAR potentially modulates ERα activity by regulating estrogen levels.This unexplored interplay between the CAR and ERα signaling pathways could imply estrogen inefficiency in the absence of CAR, similar to the absence of ERα [27].It is noteworthy that the protection against NAFLD in pre-menopausal women is attributed to estrogens, which curtail hepatic lipid accumulation and dampen liver inflammation and fibrosis [28].Supporting this notion, the silencing of ERα expression in the liver using adenoviral short hairpin RNA significantly amplifies hepatic triglyceride accumulation in HFD-fed C57BL/6 female mice [29].Similarly, the deletion of ERα in the liver abolishes the protective effects of E2 against HFD-induced steatosis [27].In our study, intriguing parallels emerge as certain functions of CAR intersect with those of ERα.The deletion of CAR in female mice accentuates the accumulation of lipids in the liver in response to a HFD diet, as illustrated in Figure 3B.This cross-talk between CAR and ERα could explain why females lose protection against metabolic disorders upon losing CAR and become as susceptible as males (Figure 5A).
Conclusions
In conclusion, this study provides valuable insights into the dimorphic pathogenesis of NAFLD, unveiling the protective role of the nuclear receptor CAR in mitigating HFD-induced metabolic disruptions, particularly in females.CAR emerges as a potential therapeutic target for addressing NAFLD/NASH, warranting further investigation.The multifaceted roles of CAR, from metabolic regulation to potential neural influences, present a comprehensive perspective on its influence on physiological processes, opening avenues for a holistic approach to managing metabolic disorders.
Supplementary Materials:
The following supporting information can be downloaded at: https: //www.mdpi.com/article/10.3390/cells12182218/s1, Figure S1: CAR deletion does not affect food and water intake of mice, Figure S2: CAR deletion induces dimorphic central adaptations, Table S1: Precise criteria used in the NAS scoring system; Table S2: Oligonucleotide primers used for qPCR assays; Table S3: Primary/secondary antibodies used for immunohistochemistry, Table S4: Kegg pathway enrichment of cluster 3 and 7 of heat map.
Figure 1 .
Figure 1.CAR deletion exacerbates HFD-induced body weight gain only in female mice.B weight was monitored during 16 weeks of HFD in WT and CAR−/− male and female (A) mice mean body weight gain was assessed (B).Following 16 weeks of diet, subcutaneous (C) perigonadal (D) white adipose tissues (WAT) were harvested and weighed (BW: averaged by gr of body weight).The adiposity index was calculated taking into account perigonadal and subc neous adipose tissue (E) Data are the mean ± SEM of n = 10 per group.Groups were compared u two-tailed Student's t-test and * p ≤ 0.05, ** p ≤ 0.01, and *** p ≤ 0.001.
Figure 1 .
Figure 1.CAR deletion exacerbates HFD-induced body weight gain only in female mice.Body weight was monitored during 16 weeks of HFD in WT and CAR−/− male and female (A) mice and mean body weight gain was assessed (B).Following 16 weeks of diet, subcutaneous (C) and perigonadal (D) white adipose tissues (WAT) were harvested and weighed (BW: averaged by grams of body weight).The adiposity index was calculated taking into account perigonadal and subcutaneous adipose tissue (E) Data are the mean ± SEM of n = 10 per group.Groups were compared using two-tailed Student's t-test and * p ≤ 0.05, ** p ≤ 0.01, and *** p ≤ 0.001.
Cells 2023 , 6 Figure 2 .
Figure 2. CAR deletion exacerbates HFD-induced hyperglycemia and hyperinsulinemia.At 10 of HFD, an oral glucose tolerance test was performed (A,B).Data are presented as the perce of initial glycaemia with the corresponding area under the curve (AUC, (C)).Glycaemia and linemia levels were assessed in the fasted state (D,E).Data are the mean ± SEM of n = 10 per g Groups were compared using two-tailed Student's t-test and ** p ≤ 0.01, and *** p ≤ 0.001.
Figure 2 .
Figure 2. CAR deletion exacerbates HFD-induced hyperglycemia and hyperinsulinemia.At week 10 of HFD, an oral glucose tolerance test was performed (A,B).Data are presented as the percentage of initial glycaemia with the corresponding area under the curve (AUC, (C)).Glycaemia and insulinemia levels were assessed in the fasted state (D,E).Data are the mean ± SEM of n = 10 per group.Groups were compared using two-tailed Student's t-test and ** p ≤ 0.01, and *** p ≤ 0.001.
Figure 3 .
Figure 3. CAR deletion exacerbates HFD-induced hepatic steatosis and injury.Following 16 weeks of HFD, histological sections of the liver were stained with hematoxylin-eosin (HE, magnification ×20), the area covered by the lipid droplets was estimated and a score ranging in severity from 1 to 3 was assigned for hepatocellular steatosis (A).Hepatic cholesterol esters and triglycerides were analyzed by gas chromatography (B).Scoring of inflammation was performed on HE slices, and gene expression of inflammation markers Tnfα and Il1β were assessed by qPCR (C).Plasmatic levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were assayed (D).Data are the mean ± SEM of n = 10 per group.Groups were compared using two-tailed Student's t-test and * p ≤ 0.05, ** p ≤ 0.01, and *** p ≤ 0.001.Scoring ranks were compared using the Mann-Whitney test.
Figure 3 .
Figure 3. CAR deletion exacerbates HFD-induced hepatic steatosis and injury.Following 16 weeks of HFD, histological sections of the liver were stained with hematoxylin-eosin (HE, magnification ×20), the area covered by the lipid droplets was estimated and a score ranging in severity from 1 to 3 was assigned for hepatocellular steatosis (A).Hepatic cholesterol esters and triglycerides were analyzed by gas chromatography (B).Scoring of inflammation was performed on HE slices, and gene expression of inflammation markers Tnfα and Il1β were assessed by qPCR (C).Plasmatic levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were assayed (D).Data are the mean ± SEM of n = 10 per group.Groups were compared using two-tailed Student's t-test and * p ≤ 0.05, ** p ≤ 0.01, and *** p ≤ 0.001.Scoring ranks were compared using the Mann-Whitney test.
Figure 4 .
Figure 4. CAR deletion does not affect liver fibrosis.Histological sections of the liver were stained with Sirius red ((A), magnification ×20).Circles indicate peri-sinusoidal fibrosis, and rectangles indicate peri-portal fibrosis.Analysis of fibrosis area coverage was assessed, lobular fibrosis was scored on Sirius red slices and gene expression of fibrosis markers Pdgfr1β, Col1a1, Tgfb1, Tgfbr1, Acta2 was assessed by qPCR (B,C).Data are the mean ± SEM of n = 10 per group.Groups were compared using two-tailed Student's t-test and * p ≤ 0.05.Scoring ranks were compared using the Mann-Whitney test.
NewFigure 5 .
Figure 5. Dimorphic impact of CAR deletion on the hepatic transcriptome.Microarray analysis of hepatic transcriptome was performed.(A) Principal component analysis (PCA) with separation of data by two dimensions.(B) Heatmap representation of gene expression for each individual; the hierarchical clustering was obtained using Ward's criterion and Pearson's correlation coefficient.Red and green, respectively, indicate values above and below the mean averaged, centered and scaled expression values (Z-score).Black shows values close to the mean.Venn diagrams with the number of up-regulated (C) or down-regulated (D) genes in CAR−/− mice.Sex-specific variations were further analyzed, and enrichment of Kegg pathways is reported with gene number and corresponding pvalue.(E) The 36 genes of NAFLD pathway which are specifically up-regulated in CAR−/− females were represented for both male and females.
Figure 6 .
Figure 6.Positive correlation between transcriptomic signatures of CAR−/− and ERα−/− females.(A) The differentially expressed gene list of CAR−/− female mice was compared in Base Space Correlation Engine (Illumina) to publicly available datasets.The transcriptomic signature of CAR−/− females positively correlates with a significant p-value of overlap (−log(p-value overlap) > −log(0.05))with the profile of ERα−/− females fed a HFD (GSE95283).Venn diagrams representing common up-regulated (B) and down-regulated genes (C) between CAR−/− and ERα−/− females.Common genes were further analyzed for enriched Kegg pathways using Enrichr (BioTools) (D,E).
Figure 7 .
Figure 7. CAR deletion associates with hypothalamic astrogliosis in HFD-fed females.(A) Histolog ical brain slices of WT and CAR−/− females were stained for GFAP expression, an astrocyte marke Examples from the arcuate (AN) and paraventricular (PVN) nuclei of the hypothalamus.The whit squares correspond to the regions for which a strong magnification is shown below.Quantificatio of GFAP fluorescence in AN and PVN (B), cortex (CTX), dentate gyrus (DG), and white matter (WM of females (C).Data are the mean ± SEM of n = 5 per group.* p ≤ 0.05, ** p ≤ 0.01 using unpaired two tailed Student's t-test.
Figure 7 .
Figure 7. CAR deletion associates with hypothalamic astrogliosis in HFD-fed females.(A) Histological brain slices of WT and CAR−/− females were stained for GFAP expression, an astrocyte marker.Examples from the arcuate (AN) and paraventricular (PVN) nuclei of the hypothalamus.The white squares correspond to the regions for which a strong magnification is shown below.Quantification of GFAP fluorescence in AN and PVN (B), cortex (CTX), dentate gyrus (DG), and white matter (WM) of females (C).Data are the mean ± SEM of n = 5 per group.* p ≤ 0.05, ** p ≤ 0.01 using unpaired two-tailed Student's t-test. | 2023-09-08T15:05:00.783Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "33fc1cd6c6b8ca64557ac9331282dfecb53c4ac1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4409/12/18/2218/pdf?version=1693982349",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "127a2bf0455ffd2fe532bef3f66ed520dc63060c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
195761419 | pes2o/s2orc | v3-fos-license | Precision Nutrition and the Microbiome Part II: Potential Opportunities and Pathways to Commercialisation
Modulation of the human gut microbiota through probiotics, prebiotics and dietary fibre are recognised strategies to improve health and prevent disease. Yet we are only beginning to understand the impact of these interventions on the gut microbiota and the physiological consequences for the human host, thus forging the way towards evidence-based scientific validation. However, in many studies a percentage of participants can be defined as ‘non-responders’ and scientists are beginning to unravel what differentiates these from ‘responders;’ and it is now clear that an individual’s baseline microbiota can influence an individual’s response. Thus, microbiome composition can potentially serve as a biomarker to predict responsiveness to interventions, diets and dietary components enabling greater opportunities for its use towards disease prevention and health promotion. In Part I of this two-part review, we reviewed the current state of the science in terms of the gut microbiota and the role of diet and dietary components in shaping it and subsequent consequences for human health. In Part II, we examine the efficacy of gut-microbiota modulating therapies at different life stages and their potential to aid in the management of undernutrition and overnutrition. Given the significance of an individual’s gut microbiota, we investigate the feasibility of microbiome testing and we discuss guidelines for evaluating the scientific validity of evidence for providing personalised microbiome-based dietary advice. Overall, this review highlights the potential value of the microbiome to prevent disease and maintain or promote health and in doing so, paves the pathway towards commercialisation.
Introduction
The gut microbiota is an integral component of the human body, and such is its contribution to human physiology that it has been deemed an organ in itself. With a genetic coding capacity that exceeds its human host by ≥100-fold [1], the gut microbiota executes essential functions that the body itself is incapable of performing. It promotes gut maturation, educates the immune system, provides protection against viral and bacterial pathogens, influences brain activities and bodily metabolism. In Part I of this two-part review [2], we provided an overview of its development from birth to old age and detailed how it impacts host health through multiple mechanisms.
Importantly, several factors influence its composition and activities, one of which is host genetics, a factor which is beyond our control, while another significant contributor to its form and function is diet, an element which we can control. Indeed, humans not only feed themselves but also feed their gut microbiota. These two factors alone (host genetics and diet) largely account for the huge variability in microbiome composition and functionality which exists among individuals. Indeed, such is the inter-individual variability that scientists still grapple with what constitutes a "healthy" microbiota. One feature of a poorly functioning microbiota that is incapable of serving its host to its full potential is low microbial diversity. Indeed, in Part I of this review we discussed the implications of low microbial diversity in terms of infection and inflammation, the latter of which is associated with several non-communicable diseases in its chronic form including cardiovascular diseases, diabetes, allergies and arthritis as examples. Improving microbial diversity can be achieved through healthy eating and consuming the recommended daily intake for fibre (25 g/day for women and 38 g/day for men [3]). In Part I of this review we discussed the role of diet in shaping the microbiome with a particular focus on the Mediterranean diet. Long-term consumption of this diet not only improved the microbial profile and actions of the gut inhabitants in obese men but also generated physiological improvements in terms of metabolism [4,5]. In terms of feeding our gut microbiota "long-term" healthy dietary patterns appear to be the key since short term dietary interventions of this nature have minimal, if any, impact on microbiota diversity levels [6,7].
Interventions involving probiotics, prebiotics, synbiotics and dietary fibre also offer opportunities to "fertilize" our microbiota. Probiotics are defined as 'live microorganisms, which when administered in adequate numbers confer a health benefit on the host' [8]. The following genera represent the most commonly used probiotics for which health claims have been demonstrated, and within these, the benefits tend to be strain specific: Bifidobacterium, Lactobacillus, Saccharomyces, Streptococcus, Enterococcus, Leuconostoc, Pediococcus, Escherichia coli and Bacillus [9]. However, the prerequisite for 'live microorganisms' is subject to some debate, given that a pasteurised derivative of a beneficial strain exhibited enhanced effects in obese and diabetic mice [10]. The prebiotic definition has been recently updated/broadened to "a substrate that is selectively utilized by host microorganisms conferring a health benefit" by the International Scientific Association for Probiotics and Prebiotics [11]. By modulating the intestinal microbiota with a high or low level of specificity and increasing the abundance of beneficial bacteria, prebiotics can improve host metabolic and physiological parameters. Synbiotics describe the combination of probiotics and prebiotics which act synergistically. Dietary fibre has been defined as "the edible part of plants or their extracts, or analogous carbohydrates, that are resistant to digestion in the human small intestine, and undergoes complete or partial fermentation in the large intestine" [12], or more simply as "any dietary component that reaches the colon without being absorbed in a healthy gut" [13].
In this review, we examine initially the consequences of different life stages or situations on the gut microbiota of humans and examine the efficacy of probiotics and prebiotics with a focus on gut microbiota modulation and/or improvement of symptom(s). We then investigate the potential of probiotics, prebiotics and dietary fibre to aid in the management of two forms of malnutrition which are prevalent in both developed and developing countries, namely, overnutrition and undernutrition, reporting changes conveyed to the gut microbiota and hence host physiology based on data from human studies. However, it is becoming increasingly clear that an individual's baseline microbiota and genetic make-up can influence the efficacy of such interventions and scientists are beginning to unravel the discrepancies which exist between human 'responders' and 'non-responders.' This is perhaps one of the core elements of precision nutrition through the microbiome whereby it can serve as a biomarker to predict responsiveness to dietary components and interventions. As an example, the gut microbiota of an individual can be used to predict postprandial glycemic responses (PPGRs) to food [14] enabling the design of a precision-tailored individualised diet that helps prevent the development of metabolic syndrome and its comorbidities, a study which is discussed in more detail in Section 5. This level of data paves the way for new opportunities in terms of interventions and microbiome testing at an individual level. Microbiome testing is currently available; thus, we discuss its feasibility at this moment in time and how it can be streamlined to generate more scientifically meaningful results. Finally, we propose guidelines for evaluating the scientific validity of evidence for providing personalised microbiome-based dietary advice.
Impact of Environment and Life Stage on Gut Microbiota and Health and Opportunities for Optimising Health through Diet, Probiotics and Prebiotics
As science continues to delineate the composition and functionality of life stage-specific gut microbiota and deviations from what is considered "normal" or "healthy," opportunities arise for dietary and therapeutic interventions which can beneficially modulate the microbiota and result in translational benefits to host physiology and overall health. In this section, we consider different life stages/situations and the impact of each on the gut microbiota including pregnancy, infancy and the elderly, especially focusing on those in long-stay care facilities, physical activity, and times of psychological stress. Dietary recommendations exist for these particular life junctures, but we also summarise a number of studies which have investigated the potential of probiotics and prebiotics to beneficially influence the gut microbiota and ultimately human health.
Pregnancy
The female body undergoes several changes during pregnancy including an increase in body fat in early pregnancy which is followed by a decrease in insulin sensitivity later on [15]. The change in insulin sensitivity has been linked to immunity changes which are proposed to induce metabolic inflammation that is normally associated with obesity [16]. However, during pregnancy these changes support the growth of the foetus and prepare the mother's body for lactation [17][18][19]. Specific nutritional recommendations exist for pregnancy, but these can differ depending on eating tradition and nutritional status of the population [20]. However, the gut microbiota of the pregnant mother has received increasing attention given that it can influence the health of both mother and child.
In a study involving 91 pregnant mothers of varying body mass index (BMI) and gestational diabetes, Koren et al. [21] reported that the gut microbiota changes dramatically from the first trimester (T1) to the third trimester (T3) even though the diets and energy intake of participants did not change between sampling times. From T1 to T3, Proteobacteria significantly increased in 69.5% of women and Actinobacteria increased in 57% of women. As women progressed from T1 to T3, the number of operational taxonomic units (OTUs) became significantly reduced and T1 samples had greater within-sample alpha phylogenetic diversity than T3 samples irrespective of pre-pregnancy BMI and health status. It has been suggested that the reduced alpha diversity in T3 may not be due to loss of species but rather lower relative abundance levels below the sequencing level of detection [22]. The over-represented OTUs in T1 mainly belonged to the Clostridiales order of the Firmicutes and included butyrate producers such as Facalibacterium and Eubacterium [21]. Members of the Enterobacteriaceae family and the Streptococcus genus were over-represented in T3 samples. It is speculated that the increase in butyrate-producing microorganisms in T1 could increase immunoregulatory T regulatory (T-reg) cells which may be involved in reducing maternal rejection of the foetal allograft [22]. Interestingly, no correlations were found between the specific OTU abundance and the use of antibiotics, probiotics, diet, previous pregnancies or health markers [21]. The results revealed that T1 microbial diversity is similar to the microbial diversity observed in non-pregnant women while T3 microbial diversity is aberrant and persists for one month postpartum. In T3 and just before transmission of the microbiota to the newborn, each mother has a "purely personal" microbiota which is suggested to have been selected at the level of each host lineage to ensure maximum development of the developing foetus and newborn [22]. Transferring T3 microbiota to germ-free mice resulted in increased adiposity and reduced insulin sensitivity compared to T1 microbiota [21]. The study indicates that the microbial changes which occur during pregnancy influence host metabolism and are beneficial for that stage in life. It is suggested that such changes are driven by the immunological and hormonal changes which occur during pregnancy [22]. However, a follow-up study conducted in 2015 investigating temporal and spatial variation of the human microbiota at four body sites (distal gut, vagina, saliva and tooth/gum) did not observe changes in the gut microbiota taxonomic composition and diversity over the course of pregnancy, reporting relative stability for all four sites [23]. The authors suggest that the differences in study findings may be due to the fact that many mothers in the Koren et al., study were in receipt of a dietary intervention between T1 and T3. Further studies investigating the gut microbiota composition and functionality before, during and post pregnancy in larger cohorts and from different demographics and geographical locations are required.
It is known that excessive weight gain in pregnancy gives way to decreased glucose tolerance and potentially gestational diabetes mellitus (GDM) [24,25]. GDM is associated with adverse pregnancy outcomes including stillbirth, fetal macrosomia, neonatal metabolic disturbances and related issues [26,27]. Furthermore, offspring of mothers with GDM are at greater risk of obesity and diabetes [28]. Medical nutritional therapy is the first-line approach but up to 50% of women fail to regain metabolic control by this means and must avail of insulin treatment or hypoglycemic drugs [29,30]. Collado et al. [31] investigated the gut microbiota during pregnancy in overweight and normal weight women and reported that Bacteroides and Staphylococcus were significantly higher in overweight women, and mother's weight and BMI before pregnancy correlated with higher levels of Bacteroides, Staphylococcus and Clostridium. In both normal weight and overweight women, bacterial counts increased from T1 to T3. In another study, overweight or obese mothers presented gut microbiota with lower alpha diversity compared to lean mothers four days after delivery [32]. Most of the taxa that differentiated the two groups were higher in the lean mothers and included Parabacteroides, Lachnospira, Faecalibacterium prausnitzii, Christensenellaceae family members, Rumincoccus and Bifidobacterium, all of which have shown consistent associations with leanness. These maternal gut microbiota characteristics were not associated with overall differences in the infant gut microbiota over the first two years of life but the authors state that the presence of specific OTUs in the maternal gut microbiota at the time of delivery increased the chances of being present in the infant gut at 4-10 days old which included some lean-associated taxa. Further research is required to determine the degree to which these maternal microbial differences influence the health of the infant over time. More recently, Crusell et al. [33] reported that the gut microbiota of pregnant women with GDM differed substantially from normoglycaemic pregnant women in T3. At phylum level, Actinobacteria was observed to be more abundant in GDM women, while at genus level Collinsella, Rothia and Desulfovibrio were more abundant. The normoglycaemic pregnant women showed enrichment of Faecalibacterium, Anaerotruncus and depletion of Clostridium (sensu stricto) and Veillonella. Regardless of metabolic status, OTU richness and Shannon index decreased from late pregnancy to postpartum, reflecting an observation of Koren et al. [21]. Christensenella OTUs were associated with higher fasting plasma glucose concentration, while OTUs assigned to Akkermansia were associated with lower insulin sensitivity. Eight months after delivery, the microbiota of women with GDM during pregnancy was still aberrant in terms of composition resembling the aberrant microbiota composition of non-pregnant individuals with type 2 diabetes. Further studies are required to determine if such microbiota disruption places these individuals at increased risk of developing type 2 diabetes. This topic has been further reviewed by Ponzo et al. [29] who also reviewed the potential of the microbiota as a therapeutic target in GDM and concluded that certain microbiota-accessible carbohydrates (MACs) could beneficially modulate the gut microbiota and hence host metabolism in GDM patients. For example, reduced abundance of Bacteriodes by the end of pregnancy was reported for women with GDM who consumed higher intakes of oligosaccharides and fibre [34]. This is of significance given that the genus is associated with overweight in pregnancy [31]. In a randomized placebo-controlled clinical trial involving 52 pregnant women in T3, consumption of a synbiotic composed of Lactobacillus sporogenes and a prebiotic mixture daily for nine weeks resulted in significantly decreased serum insulin levels and beneficially impacted other insulin actions but did not affect fasting plasma glucose levels and serum high-sensitivity C-reactive protein [35]. More recently, a synbiotic composed of fructooligosaccharide (FOS) and a mixture of probiotic lactobacilli did not influence fasting plasma glucose and insulin resistance/sensitivity indices in women with GDM but proved effective in reducing blood pressure [36].
Interventions involving probiotics alone have generated conflicting results. For example, consumption of the probiotics Lactobacillus rhamnosus GG and Bifidobacterium lactis BB12 from T1 of pregnancy in a double-blind, placebo-controlled study significantly reduced the incidence of GDM (P = 0.003) [37]. However, probiotic supplementation for four weeks (weeks 24 to 28 of gestation) in obese pregnant women did not influence maternal metabolic profile, fasting blood glucose, or pregnancy outcomes [38]. It is possible that the short-term probiotic consumption in this study did not permit the probiotic to induce beneficial changes to the gut microbiota and hence host metabolism. More recently, probiotic supplementation (L. rhamnosus GG and Bifidobacterium animalis ssp. lactis) from T2 of pregnancy to week 28 in overweight and obese women did not prevent GDM [39]. These contradictory results could be due to a number of factors including differences in probiotics and doses used, timing and duration of supplementation as well as differences in host demographics, genetics and baseline gut microbiota of each individual.
Probiotic intervention during pregnancy has been shown to be beneficial for reducing the risk of preeclampsia, a serious condition associated with hypertension and proteinuria that can result in poor pregnancy outcome and is reported to be one of the leading causes of maternal death globally [40,41]. In 2011, a study conducted in Norway reported that regular consumption of milk-based probiotics could be linked with lower risk of preeclampsia in first-time expectant women [40]. A follow-on observational cohort study involving a large sample number of women from both urban and rural regions of Norway of varying ages and socioeconomic status reported that probiotic milk intake in late pregnancy was significantly associated with lower risk of preeclampsia [42]. In the same study, probiotic milk intake during early pregnancy (but not before or during late pregnancy) was significantly associated with lower risk of preterm delivery. However, in the case of both observations, causality could not be established.
Probiotic administration during pregnancy has also generated promising results in terms of treating bacterial vaginosis (as reviewed by Sohn and Underwood, [43]), and infectious mastitis [44,45] and the positive effects of probiotic consumption during pregnancy rendered to the offspring, including prevention of atopic dermatitis [46], eczema and rhinoconjunctivitis [47], have been confirmed in meta-analysis (17 studies, 4,755 children) and a large population-based cohort study (40,614 children), respectively.
Given such promising results, the impact of maternal probiotic supplementation on breast milk composition and the infant microbiome is an important area of research. Breast milk has its own microbiota dominated by members of the staphylococcal and streptococcal genera, but also harbors lactic acid bacteria, bifidobacteria and members of Propionibacterium [48]. These microbes originate from the mother's skin, gut and the infant's oral mucosa [49]. The transfer of maternal gut microbiota to breast milk is proposed to occur through an entero-mammary route via dendritic cells and macrophages which selectively traffic commensal microorganisms [49][50][51]. Despite this, maternal probiotic supplementation with a fermented milk containing L. rhamnosus GG, Lactobacillus acidophilus La-5 and Bif. animalis ssp. lactis Bb-12 four weeks before the expected due date until three months after birth while breastfeeding resulted in the presence of these bacteria in breast milk of only a small subgroup of women and, thus, breastfeeding by probiotic supplemented women is unlikely to be a source of these probiotics in infants [49]. However, a previous study using the same strains reported that probiotic supplementation of pregnant women from 36 weeks of gestation to three months postnatally during breastfeeding reduced the cumulative incidence of atopic dermatitis by almost 40% among offspring at two years of age [52]. Interestingly, a higher prevalence of L. rhamnosus GG was found in stool samples of these infants up to three months of age [53]. Simpson et al. [49] suggest that since breastfeeding does not appear responsible for ongoing transfer of L. rhamnosus GG to infants, early transfer of L. rhamnosus GG may be sufficient to ensure stable colonization in the infant or alternatively children are receiving continued transfer of L. rhamnosus GG from their mother via a different route. However, infants from mothers who had consumed the probiotic milk and who did not develop atopic dermatitis during the two years follow-up had reduced T helper (Th) 22 cells at three months of age which may help explain the preventative effects of maternal probiotic supplementation on atopic dermatitis [54]. Consumption of a multistrain probiotic product (VSL#3) by women during late pregnancy and lactation resulted in a significant increase in both lactobacilli and bifidobacteria in colostrum and mature milk in women who underwent vaginal delivery compared with the placebo group, however, analsysis of the bacterial strains and species revealed that the probiotic microorganisms did not pass from the maternal gut to the mammary gland [55]. No significant differences in bifidobacteria and Lactobacillus numbers were observed in colostrum and mature milk from mothers who underwent caesarian section from either the probiotic or placebo groups. The authors suggest that a systemic effect may be responsible for the probiotic-dependent modulation of breask milk microbiota in vaginally delivering women.
Interestingly, Kuitunen et al. [56] reported that probiotic supplementation of mothers from week 36 of gestation until delivery altered the immunologic composition of breast milk by significantly increasing IL-10 and significantly decreasing casein IgA antibodies, however, no strong and consistent associations were observed between breast milk antibodies and cytokines and allergy development in children up to the age of five. Baldassarre et al. [57] also reported that high-dose probiotic supplementation during late pregnancy and lactation influenced breast milk cytokine patterns, significantly increasing IL-6 levels in colostrum and IL-10 and TGF-β1 levels in mature breast milk. Furthermore, sIgA levels were higher in newborns whose mothers consumed the high-dose probiotic. A recent study reported that infants born to mothers with depressive symptoms had lower levels of faecal sIgA which could presdispose such infants to higher risk for allergic disease [58]. Thus, probiotic supplementation to mothers during pregnancy could circumvent such low IgA levels in newborns. In contrast, Quin et al. [59] reported that maternal probiotic administration during breastfeeding (from birth to introduction of solid food) did not alter breast milk immune markers. In the same study infants whose mothers were self-administering probiotics also received probiotics directly which resulted in an increase in infant faecal sIgA levels. However, the probiotic group had higher incidences of mucosal-associated illnesses as toddlers. As a consequence the authors caution against probiotic supplementation during infancy until rigorous controlled follow-up studies on their safety and efficacy have been performed although the study itself has a number of limitations including the fact that varying brands and doses of probiotics were consumed by participants.
Studies investigating the impact of prebiotics and synbiotics on breast milk composition and subsequently the infant microbiome are limited. However, Kubota et al. [60] reported that FOS intake (4 g, twice daily) by pregnant and lactating women increased levels of the cytokine IL-27 in breast milk. The consequence of this phenomenon for the onset of allergic disorders in children requires further investigation. A synbiotic consisting of different probiotic strains and FOS administered to lactating mothers for 30 days significantly increased breast milk IgA and TGF-β2 levels and the incidence of diarrhoea in infants whose mother's were consuming synbiotic was significantly decreased [61]. Synbiotic supplementation to lactating mothers for 30 days was also reported to positively impact mineral levels in breast milk (zinc, copper, iron, magnesium and calcium) which were shown to decrease significantly in the placebo group and the synbiotic also positively impacted infant growth (weight for age Z score and height for age Z score) [62]. Selenium (Se) is an essential trace elemnent for infants and is found in breast milk although its levels can vary depending on the mother's geographical location due to differences in soil content and hence its accumulation in cereals which are eaten by humans and animals [63]. Taghipour et al. [64] investigated if synbiotic supplementation consisting of FOS and different probiotic strains could increase breast milk Se levels. However, 30 days of synbiotic consumption had no impact on Se levels in breast milk.
Further studies are warranted to fully understand the impact of probiotic/prebiotic/synbiotic supplementation on breast milk composition at the microbiological, immunological and bioactive molecule levels, and to determine the consequence of these changes for both mother and infant in the long term.
Infants
The infant gut microbiota plays an essential role in establishing the gut mucosal barrier, education of the immune system and in preventing enteric pathogen infection [65]. In Part I of this review, we described the development of the infant gut microbiota from birth onwards and while several factors have been shown to influence its composition (host genetics, gestational age, birth mode, feeding regime, antibiotic exposure), the gut microbiota of full-term, vaginally-delivered, exclusively breast-fed infants is generally recognised as representing the healthy microbiota [66,67]. Indeed, owing to its complex mixtures of bioactive components, which change in concentration, structure and function over lactation, human milk is considered the "gold standard" for early life nutrition [68].
In the case of preterm infants, bacterial exposure occurs earlier than normal and antibiotics are frequently administered. Very preterm infants (<32 weeks) and extremely preterm infants (<28 weeks) are at significant risk of sepsis, necrotizing enterocolitis (NEC), feeding intolerance and mortality [69,70]. The preterm infant microbiota has been shown to be lacking in the health-promoting Bifidobacterium species and as a consequence of antibiotic administration can be dominated by Enterobacteriaceae, Enterococcus and Staphylococcus [71]. It is also characterised by a lack of microbial diversity [72] and has an increased abundance of Proteobacteria [67]. In a study investigating the distortions in intestinal microbiota development and late onset sepsis in preterm infants, Mai et al. [73] reported that distortions rather than enrichment of potential pathogens were associated with late-onset sepsis. Likewise, no specific pathogen has been identified as responsible for NEC but inappropriate colonisation of the preterm gut has been deemed the causative factor [74]. Preterm infants with NEC have been reported to harbour increased relative abundances of Proteobacteria and decreased relative abundance of Firmicutes and Bacteroidetes prior to the onset of NEC [75,76].
In a recent article, Athalye-Jape and Patole [69] reported that over 25 systematic reviews and meta-analyses of randomized controlled trials involving~12,000 participants revealed that probiotics significantly reduce the risk of all-cause mortality, NEC ≥ Stage II, late onset sepsis and feeding intolerance in preterm infants and suggest providing probiotics as a standard prophylaxis for preterm infants. In order to gain widespread acceptance, Aceti et al. [77] have pointed out ongoing gaps in the literature and potential directions for future research in relation to probiotic use in preterm infants which include an understanding of the impact of feeding (formula, mother's milk, donor's milk) on the relationship between probiotic supplementation and clinical outcome, efficacy of multi-strain probiotics versus single-strain probiotics, safety issues and long-term consequences for such a vulnerable population. However, given the evidence to date it could be argued that it "may be unethical not to treat" with probiotics to reduce the risk of NEC in preterm infants.
Prebiotics have also proven efficacious for preventing adverse health outcomes in preterm infants. A meta-analysis involving 18 randomized controlled trials consisting of 1322 participants revealed that those in receipt of prebiotics showed significant decreases in incidence of mortality, sepsis, hospital stay duration and time to full enteral feeding; however, there were no differences between control and intervention groups in relation to the morbidity rate of NEC and feeding intolerance [78]. A small number of studies have investigated the efficacy of synbiotics in relation to NEC in preterm infants [79]. In a study involving 400 very low birth weight infants, the rate of NEC was reduced by 2% in the group receiving the probiotic Bif. lactis, but was reduced by 4% in the group receiving Bif. lactis plus the prebiotic inulin compared to a rate of 12% in the prebiotic group and 18% in the control group [80]. The prebiotic FOS in combination with a probiotic mixture consisting of L. acidophilus, Bifidobacterium longum, Bifidobacterium bifidum, and Streptococcus thermophilus significantly reduced the incidence of NEC in preterm infants fed breast milk (2 incidences out of 100) compared to the control group who received breast milk alone (10 incidences out of 100) [81]. In the same study the incidences of Stage II and Stage III (severe) NEC were nil in the test group compared to 5 and 2 cases in the control group, respectively. The incidence of sepsis was also significantly lower in the test group. Likewise, Nandhini et al. [82] reported a 50% reduction in the incidence of NEC of all stages in preterm infants in receipt of a synbiotic consisting of a mix of bifidobacteria and lactobacilli and FOS, however, the severity of NEC, sepsis and mortality were not influenced by synbiotic administration. Despite the apparent success of synbiotics in this small number of studies, a drawback of synbiotics is the difficulty predicting selectivity and specificity and the subsequent mechanisms of action; thus, future studies should focus on unravelling how each component in the mixture, and the mixture as a whole, exerts its (cooperative) effects [79].
Caesarean section has been shown to influence the development and composition of the gut microbiota. In a study involving 192 breast-fed infants, Hill et al. [67] reported that the gut microbiota of the full-term caesarean section infant has a significantly increased faecal abundance of Firmicutes and significantly lower abundance of Actinobacteria compared to the full-term, vaginally delivered infant after the first week of life. A decreased abundance of bifidobacteria has also been reported for six week old infants born by caesarean section [83]. However, the latter study also revealed that this disturbance could be partially restored by exclusive breastfeeding. Likewise, Hill et al. [67] reported that breastfeeding had a beneficial impact on the gut microbiota of infants delivered by caesarean section. With this in mind, it is not surprising that probiotic supplementation to expectant mothers and their infants (for three months) born by caesarean section or receiving antibiotics "benefited" only breast-fed infants in terms of increasing bifidobacteria and reducing Proteobacteria and Clostridia [84].
Probiotic-supplemented infant formula has been on the market in Europe and Asia for over two decades [85]. Such formulae have been shown to result in infant faecal microbiota profiles closer to breast-fed infants [86]. A systematic review of randomized controlled trials up to September 2016 concluded that probiotic-supplemented formulae do not raise safety concerns for healthy infants with regard to growth and adverse effects, however, while some beneficial effects are possible (reduction in number of episodes of gastrointestinal infection, diarrhoea and respiratory symptoms, lower frequency of colic or irritability and better growth) the review concluded there was a lack of robust clinical evidence to recommend their routine use albeit this could be due to the small amount of data on specific probiotic strain(s) and their outcomes rather than an authentic lack of an effect [87]. With this in mind, a meta-analysis conducted in 2018 investigated the efficacy of a single probiotic strain, namely Lactobacillus reuteri DSM17398 to treat infant colic [88]. Four double-blind trials with 345 colic infants were included. The study concluded that the probiotic strain in question is effective for treating colic but only in breast-fed infants. With regard to formula-fed infants, the intervention effects were insignificant, however, the authors state that there were insufficient data to make conclusions and thus there is a critical need for more rigorous randomized controlled trials with this strain in formula-fed infants suffering from colic.
The most common prebiotics used in infant formulae include a 9:1 mixture of short chain galactooligosaccharides (GOS) and long chain FOS [89]. A systematic review of 41 randomized controlled clinical trials concluded that feeding prebiotic-supplemented infant formulae to healthy infants is safe in terms of adverse effects and growth [89]. The primary beneficial effect was stool softening but no robust evidence exists to recommend prebiotic-supplemented formulae. As in the case of probiotics, the lack of sufficient data on specific prebiotics was possibly responsible for this conclusion.
A systematic review involving three randomized controlled clinical trials (n = 475) on the efficacy of synbiotic-supplemented formulae in 2012 concluded that while synbiotics increased stool frequency they had no impact on stool consistency, colic, spitting up/regurgitation, crying, vomiting or restlessness [90]. However, a recent study showed that amino acid-based formula supplemented with Bifidobacterium breve M-16V and FOS over 26 weeks was capable of significantly increasing faecal percentages of bifidobacteria and reducing the Eubacterium/Clostridium coccoides group in infants with non-IgE-mediated cow's milk allergy (n = 35) [91]. Interestingly, reported ear infections and use of dermatological medication were also significantly lower in the synbiotic group. A synbiotic starter formula containing Bif. lactis and FOS fed to 280 infants of age 0.89 months over a three-month period significantly reduced infantile crying and colic, functional constipation and daily regurgitation compared to the reported median prevalence for a similar age according to the literature [92]. Feeding a synbiotic-supplemented formula to infants who had been completely weaned from breast milk to infant formula at 28 days of age until 12 months of age resulted in a significant reduction in the cumulative incidence of lower respiratory tract infections compared to the prebiotic group but as the confidence interval of the estimate was wide, the authors suggest uncertainty with regards to this result [93]. The synbiotic in this case consisted of FOS, GOS and Lactobacillus paracasei ssp. paracasei F19. Feeding caesarean-born infants formula supplemented with Bif. breve M-16V and FOS and GOS from birth until week 16 generated a bifidogenic effect that lasted until week 8, thus emulating the gut physiological environment of vaginally-delivered infants, and reduced Enterobacteriaceae until week 12 [94].
These studies suggest that probiotics, prebiotics and synbiotics have a beneficial role to play in infant nutrition, and particularly in vulnerable infants including preterm and those born by caesarean section or for those for who breast milk is not an option. However, in order to incite greater confidence in both the medical profession and the public in general there is a need for large cohort, possibly multi-centre randomized controlled trials that focus on specific prebiotics, probiotics and synbiotics which assess their impacts and modes of action on the gut microbiota, infant health and wellness and the long-term outcomes for these parameters.
Elderly in Nursing Homes
In Part I of this review we discussed the elderly (> 65 years) microbiota which is generally characterised by a reduction in microbial diversity, a decrease in species associated with short chain fatty acid (SCFA) production, especially butyrate, an increase in opportunistic pathogens [95,96] and even greater inter-individual variation than observed in adults [97]. The gut microbiota of those in long-stay residential care facilities is significantly less diverse than individuals of the same age group who reside within the community and the increased frailty observed in long-stay care residents correlates with loss of community-associated microbiota [98]. In the same study, the distinct microbiota groups identified as a result of residence location also overlapped with diet where individuals in long stay care facilities tended to consume high fat, low fibre diets versus the low fat, high fibre diet of community dwellers. Furthermore, scientists are hypothesizing that the gut microbiota may influence sarcopenia through a gut-muscle axis, a syndrome which affects older individuals (recently reviewed by Ticinesi et al. [99,100]). Sarcopenia is described as depletion of muscle mass and reduction of muscle performance which both result from anabolic resistance or boosted protein catabolism [101]. It is distinct from frailty although the two may overlap [102]. To date, there have been no studies in humans investigating the microbiome of sarcopenic individuals; however, Siddharth et al. [103] identified a distinct faecal microbiota composition associated with age-related muscle wasting in rats which revealed a reduction in several taxa reported to have pro-anabolic and anti-inflammatory properties. Interestingly, the SCFA butyrate was shown to have beneficial effects on muscle mass in ageing mice, partially or wholly protecting them from muscle atrophy [104], and the human commensal L. reuteri inhibited muscle wasting in mice [105].
Osteosarcopenic obesity describes an impairment in muscle, bone and adipose tissue which occurs in elderly individuals in conjunction with an altered gut microbiota, especially in those in long-term care facilities [106]. The increased adiposity associated with osteosarcopenic obesity can manifest as overt clinical overweight/obesity, redistribution of fat around visceral organs or the infiltration of fat into muscle and bone tissues, thus impairing their function [106]. It is more prevalent in older women than older men and women with osteosarcopenic obesity have decreased strength, balance and mobility compared to those with obesity, osteoporotic obesity and sarcopenic obesity alone [107]. The gut microbiota has been shown to regulate bone mass in mice [108] and the probiotic L. reuteri was reported to protect menopausal ovariectomized mice from bone loss [109].
In Part I of this review, we discussed the obese gut microbiota and the link between the gut microbiota and energy storage in the body [2]. Given the reported links between the gut microbiota, muscle, bone, and adipose tissues, such studies suggest that the gut microbiota could be a therapeutic target in the treatment of sarcopenia and osteosarcopenic obesity and aid in the prevention of associated outcomes such as increased risk for falls, fractures, long-term frailty and immobility [106]. This is an exciting area in microbiome research and may have profound implications for the ageing process. The gut-muscle axis is further discussed in Section 2.4 (Physical Activity).
While the nutritional needs of the elderly do not vary significantly from younger adults with similar caloric expenditure and anthropometric and physiological features, elderly individuals are at greater risk of malnutrition [110,111] owing to a number of factors outlined in Part I of this review [2]. Indeed, it has been reported that approximately 30% of individuals over 50 years of age do not consume the RDA for protein [110,112]. Other nutrients which fall short in this demographic include fibre, iron, vitamins D, B 6 and B 12 and folic acid [110,113].
Salazar et al. [110] suggest that nutritional strategies for the elderly should not just focus on nutritional deficiencies but also consider the intestinal microbiota and immune function. With this is mind, the following have been considered relevant targets for interventions in this age group: (1) reduced microbial diversity, (2) low-levels of butyrate-producing bacteria, (3) imbalanced proportions and reduced levels of SCFA, (4) increased incidence of Clostridium difficile infection, (5) higher levels of lactate, (6) methane, and (7) branched chain fatty acids (valeric, isovaleric, isobutyric and caproic acids) [95,97,110,111,[114][115][116][117][118]. For the purpose of this review, we have focused on the impact of interventions in this age group involving fibre, prebiotics, probiotics and synbiotics.
Bahgurst et al. [119] investigated the long-term (12-month) effects of moderate fibre supplementation (an increase in fibre intake of~70%) in a nursing home population, of mean age 83 years, with an emphasis on bowel function, body weight and mineral status. As well as improving bowel function, the fibre supplementation improved nutrient density of the diet without increasing body weight. In a more recent study, potato intake in 32 institutionalised elderly subjects (aged between 76 and 95 years) was directly associated with faecal SCFA concentrations, and apple intake was directly associated with propionate concentration [120]. In the same study, cellulose intake was associated with acetate and butyrate concentrations. While the sample size was low, the approach provides an opportunity to generate improved diets with an emphasis on increasing specific or total SCFAs. Probiotic consumption in the elderly cohort has been shown to improve certain immune parameters as well as beneficially modulating the intestinal microbiota. The immuno-stimulating probiotic Bif. lactis HN019 enhanced immunity in elderly subjects aged 68 to 84 years following consumption of either 5 × 10 10 microorganisms/day or 5 × 10 9 microorganisms/day for three weeks [121]. Daily consumption of a probiotic mixture composed of Lactobacillus gasseri KS-13, Bif. bifidum G9-1 and Bif. longum MM-2 for three weeks by elderly participants (70 ± 1 year) increased IL-10 concentrations compared to the placebo [122]. In addition, 48% of participants in the probiotic group had increased faecal bifidobacteria compared to 30% in the placebo which was significantly different (P < 0.05). Moreover, 55% of participants in the probiotic group had increased lactic acid bacteria and 52% had decreased E. coli compared to 43% and 27% in the placebo group, respectively, representing significant differences (P < 0.05). Bacterial groups matching the butyrate producer F. prausnitzii were also more abundant in stool samples from the probiotic group. The overall changes resembled those observed in healthy younger populations. Gao et al. [123] reported a similar finding in relation to F. prausnitzii levels following long-term probiotic consumption by an elderly cohort.
While consumption of a probiotic cheese containing L. rhamnosus HN001 and L. acidophilus NCFM by an elderly population increased the numbers of said probiotics in faeces, there was no effect on faecal immune markers [124]. However, the probiotic cheese was associated with a trend towards lower C. difficile counts, an effect which was statistically significant in the subpopulation that were found to harbor C. difficile at the beginning of the study. Likewise, consumption of one probiotic-containing biscuit (Bif. longum Bar33 and Lactobacillus helveticus Bar13) per day for one month was found to revert the age-related increase in the following opportunistic pathogens, C. difficile, Clostridium cluster XI, Clostridium perfringens, Enterococcus faecium, and the enteropathogenic genus Campylobacter in elderly volunteers [125]. Consumption of a fermented oat drink containing Bif. longum 46 and Bif. longum 2C by elderly nursing home residents for six months significantly increased faecal bifidobacteria levels [126]. In an attempt to understand how probiotic consumption in the elderly promotes health, Eloe-Fadrosh et al. [127] reported the impact of a single probiotic strain (L. rhamnosus GG) on the structure and functional dynamics of the gut microbiota in healthy elderly individuals following consumption of 10 10 colony forming units (cfu) twice daily for 28 days. The probiotic modulated the gut microbiota transcriptome. In particular, Bifidobacterium genes involved in flagellar motility, chemotaxis and adhesion were increased following probiotic consumption, and gene expression in the butyrate producers Ruminococcus and Eubacterium was also increased. This suggests that this single probiotic strain has the potential to promote anti-inflammatory pathways.
Prebiotic supplementation in the elderly has generated promising results in terms of beneficial alterations to the gut microbiota and also frailty syndrome. Daily consumption of 8 g of short chain FOS for four weeks by healthy elderly individuals led to increases in faecal bifidobacteria counts [128]. Daily doses of GOS at 5.5 g for four weeks in an elderly group resulted in significant increases in bifidobacteria and bacteroides and immune alterations which included lower IL-1β levels and higher C-reactive protein, IL-10, IL-8 and natural killer cell activity [129]. Most recently, prebiotic supplementation which involved a mix of prebiotics at 20 g/day for 26 weeks to frail elderly subjects did not induce global changes in gut microbiota alpha and beta diversity but the abundance of certain bacterial taxa increased including Ruminococcaceae and the levels of the chemokine CXCL11 were significantly reduced [130]. This particular chemokine is produced in response to microbial antigens [131]; although the authors state that the health/clinical benefits are not clear. Buiges et al. [132] investigated the impact of prebiotic supplementation on frailty syndrome in elderly individuals in a randomized, double-blind clinical trial. In this case, the prebiotic in question, Darmocare Pre ® which is a mix of inulin and FOS did not significantly modify the overall rate of frailty but did significantly improve two frailty criteria, exhaustion and handgrip, following 13 weeks of daily consumption. The authors suggest that therapeutics aimed at the gut microbiota-muscle-brain axis should be considered for the treatment of frailty syndrome. More recently, the same prebiotic was tested in nursing home residents and of the 28 participants in the intervention group, 25 revealed reduced frailty index levels where the moderately/severe frail participants showed the greatest reduction [133].
In a prospective, double-blind, placebo-controlled, randomized single centre study involving 40 healthy elderly subjects (aged between 60-80 years), intake of a synbiotic combination of soluble corn fibre with L. rhamnosus GG for three weeks tended to promote innate immunity in elderly women and 70-to 80-year-old volunteers (male and female) by increasing natural killer cell activity [134]. Interestingly, the pilus-deficient version of L. rhamnosus GG, termed L. rhamnosus GG-PB12, with the soluble corn fibre increased natural killer cell activity in older volunteers compared to soluble corn fibre alone. The combination of L. rhamnosus GG-PB12 with the corn fibre also decreased C-reactive protein, an indicator of inflammation in the body. Total cholesterol and LDL-cholesterol was also reduced in individuals who had presented with elevated levels following intake of L. rhamnosus GG with the soluble corn fibre. The genus Parabacteroides was significantly increased as a result of either strain with the corn fibre. Soluble corn fibre alone and soluble corn fibre with L. rhamnosus GG increased levels of Ruminococcaceae incertae sedis. Decreases in the levels of Ruminococcaceae and Parabacteroides have been pinpointed as the main microbial shifts associated with ageing in mice [135,136]. Slight reductions were observed in Oscillospira (positively associated with leanness and health [137]) and the sulphate-reducing Desulfovibrio following L. rhamnosus GG with soluble corn fibre consumption, whereas only Desulfovibrio decreased following intake of L. rhamnosus GG PB12 with corn fibre.
These studies indicate that dietary interventions involving fibre, prebiotics and probiotics in the elderly, and especially those in residential care, can induce beneficial changes to the gut microbiota with potential to improve immune function and gut homeostasis. The gut microbiota of healthy younger adults is considered a suitable reference for the elderly microbiota assuming the younger population shares the same geographical location, historical past and social habits/lifestyle etc. [111,138]. Thus, further studies are required for this cohort to find interventions which can improve the relevant targets of the intestinal microbiota and immune function and generate meaningful physiological changes which translate to improved general health and well-being (e.g., frailty reduction, improved mobility, reduced risk of fracture and falls, improved sleep and overall energy levels etc).
Physical Activity
The impact of exercise on the gut microbiota has only begun to be studied in recent years. In a first study of its kind, Clarke et al. [139] reported increased microbial diversity in a professional rugby team of a preseason camp compared to age-matched and BMI-matched controls. The gut microbiota differences observed in these athletes correlated with protein consumption and creatine kinase, a marker of extreme exercise. In fact, protein accounted for 22% of the total energy intake of athletes compared to 16% in the low BMI control group and 15% in the high BMI control group. A follow-on study investigating the metabolic activity of the gut microbiota of these athletes revealed several differences compared to the control groups [140]. Pathways involved in amino acid biosynthesis, carbohydrate metabolism and antibiotic biosynthesis were increased in athletes. SCFA levels were also increased in the athletic group. Of note, athletes also excreted higher levels of the uremic toxin, trimethylamine-N-oxide (TMAO), which has been discussed in Part 1 of this review [2] as it has been proposed as a risk factor for cardiovascular disease in humans. However, the authors state that the implications of this result are limited and require further study. As expected, the athletes consumed more calories and macronutrients than the control groups. Fibre intake was also higher in the athletic group compared to the high BMI control group.
In order to better understand the impact of exercise on the gut microbiota, Estaki et al. [141] analysed the microbiota of healthy individuals with varying levels of fitness and reported that cardiorespiratory fitness correlated with increased microbial diversity in healthy humans. Six weeks of endurance exercise by overweight women was reported to alter the gut metagenome with an increase in the health-promoting Akkermansia and a decrease in Proteobacteria [142]. Notably, diets did not change during the six weeks control period before the exercise intervention or during the six-week exercise period. Despite the changes to the gut microbiota, systemic metabolites and body composition were not greatly affected. Likewise, five-weeks of endurance exercise by elderly men was found to significantly decrease the relative abundance of C. difficile and significantly increase Oscillospira which correlated with beneficial changes in several cardiometabolic risk factors [143]. Changes in food intake did not differ between control and exercise periods. Allen et al. [144] reported that six weeks of endurance exercise increased faecal SCFA concentrations in lean but not obese participants and the metabolic changes were associated with changes in bacterial taxa and genes capable of producing SCFAs. Interestingly, the exercise-induced changes were reversed when exercise ceased. While these studies demonstrate the beneficial impacts of exercise on the gut microbiota, exercising to the point of exhaustion may induce detrimental changes [99]. For example, intense military training undertaken by soldiers for four days resulted in increased intestinal permeability and changes in microbiota composition which included increased alpha diversity and an increase in the abundance levels of potentially pathogenic taxa (e.g., Staphylococcus, Peptostreptococcus, Peptoniphilus, Acidaminococcus, and Fusobacterium) at the expense of several taxa thought to protect against pathogen invasion (e.g., Bacteroides, Faecalibacterium, Collinsella, and Roseburia) [145]. Thus, as suggested by Ticinesi et al. [99], the impact of exercise on the gut microbiota may depend on the intensity and duration, however other confounders should also be considered including diet, nutrient intake and body composition parameters, a topic that requires further investigation.
We have already mentioned the gut-muscle axis hypothesis in Section 2.3 (Elderly in Nursing Homes) and indeed it has been proposed that the gut-microbiota axis may be two-way with exercise influencing the microbiota and the microbiota influencing muscle [99], the latter of which was observed in the case of muscle-wasting in rats [103]. Ticinesi et al. [99] provided a list of hypothesised pathways linking gut microbiota modulation to muscle function and include (1) bioavailability of dietary proteins and specific amino acids, (2) vitamin synthesis such as folate, B 12 and riboflavin, (3) biotransformation of nutrients such as polyphenols and ellagitannins, (4) intestinal mucosa permeability, (5) bile acid biotransformation, (6) SCFA synthesis. In the case of intestinal dysbiosis, changes in these pathways may have negative consequences for skeletal muscle function. The interaction between the gut microbiota and the immune system is also another factor in the gut-muscle axis hypothesis [99] given the purported links between inflammation and age-related muscle wastage [146]. Further studies in this field are clearly warranted to understand the complex relationships between all these factors. Ultimately, this should help in the design of strategic exercise programmes, diets and probiotic/prebiotic interventions which are optimised for life stage ensuring a healthy gut microbiota for optimal skeletal-muscle function and host health.
Nowadays probiotic supplementation is common practice for many athletes involved in different sports and is generally taken to reduce incidence of infection, especially upper respiratory tract infections and gastrointestinal problems. Upper respiratory illness is reported to account for 35%-65% of illness presentations to sports medicine clinics [147]. These infections are generally caused by common respiratory viruses, allergic responses to aeroallergens and exercise-related trauma to respiratory epithelial membrane integrity [147]. Gastrointestinal disorders in athletes can occur during or after intense physical activity and include bloating, abdominal pain, diarrhoea, and blood in the stool and may be caused by inadequate blood supply to the digestive tract during exercise [148,149]. Gastroesophageal reflux disease (GERD) can also be exacerbated by intense exercise [150].
Interestingly, probiotic supplementation in the form of Lactobacillus casei Shirota to men and women (n = 32) involved in endurance-based physical activities for four months of the winter significantly reduced the incidence of upper respiratory tract infections compared to the placebo group and the proportion of placebo subjects who experienced one or more weeks with upper respiratory tract infection symptoms was 36% higher than those taking the probiotic [151]. Salivary IgA was also significantly higher in the probiotic group, an effect which was not evident at baseline. In a later study with the same probiotic strain, five months of supplementation to university athletes and game players (n = 243) had no impact on upper respiratory tract infection symptoms which the authors state could be attributable to the low incidence of such symptoms during the study [152]. The probiotic was associated with plasma cytomegalovirus and Epstein Barr virus antibody titres which could be interpreted as an improvement in immune status. Consumption of heat-killed Lactococcus lactis JCM 805, also known as LC-Plasma, for 13 days was shown to relieve the morbidity and symptoms of upper respiratory tract infections in male athletes performing high-intensity exercise [153]. This was achieved by activation of plasmacytoid dendritic cells (pDC) which are known to play a significant role in viral infection. Furthermore, the bacterial strain decreased fatigue accumulation during consecutive high intensity exercise. A later study in mice showed that LC-Plasma-activation of pDC in turn attenuates the concentration of fatigue controlled cytokine TGF-β and muscle degenerative genes [154]. Consumption of a probiotic powder containing L. rhamnosus GG and Bif. animalis ssp. lactis BB12 reduced the duration and severity of upper respiratory tract infections in college students and fewer school days were missed [155].
Administration of Lactobacillus fermentum (PCC ® ) to male (n = 64) and female (n = 35) competitive cyclists for 11 weeks generated mixed results in terms of gastrointestinal illness and lower respiratory illness symptoms [156]. Males experienced a reduction in the severity of gastrointestinal illness which became more pronounced as training load increased. The load of lower respiratory illness symptoms was also reduced in males compared with the placebo but actually increased in females on the probiotic. Probiotic numbers increased 7.7-fold more in males compared to an unclear 2.2-fold increase in females. Thus it was concluded that L. fermentum could be a useful nutritional adjunct for exercising males. Consumption of Lactobacillus salivarius for four months in the spring by both men and women (n = 66 in total) participating in endurance-based physical activities had no impact on incidence of upper respiratory tract infections or mucosal immune markers [157]. While probiotic supplementation for one month did not have any effect on severity of upper respiratory tract infections or gastrointestinal episodes in 30 elite rugby union players, it did significantly reduce the number of participants experiencing such symptoms and tended to reduce the number of illness days compared to placebo [158]. Consumption of a multispecies probiotic for three months in the winter by trained athletes (n = 33) reduced the incidence of upper respiratory tract infections compared to the placebo and reduced exercise-induced tryptophan-degradation rates [159]. Despite this, probiotic supplementation did not improve athlete performance. The probiotic L. helveticus Lafti L10 significantly reduced the duration of upper respiratory tract infection episodes in 39 elite athletes during 14 weeks of supplementation in the winter but did not influence severity of symptoms or incidence [160]. A follow-on study indicated that the probiotic modulated mucosal and humoral immunity in elite athletes [161]. The probiotic was also shown to exert certain antioxidant potential in elite athletes following three months of supplementation but further research is warranted to confirm this effect [162]. Interestingly, based on the results of a meta-analysis of randomized controlled trials comparing probiotics with placebo to prevent acute upper respiratory tract infections in children, adults and older people (n = 3720), Hao et al. [163] concluded that probiotics were better than placebo for reducing the incidence of such episodes, the duration of episodes as well as cold-related school absence and antibiotic use. A recent systematic review of the effects of probiotic supplementation on physically active individuals (n = 1680, athletes and non-athletes) concluded that positive effects were reported for several outcomes including respiratory tract infection, markers of immunity and gastrointestinal symptoms; however, the study failed to identify standardised supplementation protocols owing to the distinct protocols employed across the studies, as well as different measured outcomes and small sample size [164].
In terms of performance, supplementation with certain probiotics has been shown to have a beneficial effect by presumably influencing host and nutrient metabolism. For example, taking Lactobacillus plantarum TWK10 for six weeks resulted in significantly higher endurance performance and glucose content in a maximal running treadmill test in eight adults compared to the placebo group (n = 8) such that the authors suggest it could have potential as an aerobic exercise supplement [165]. L. plantarum PS128 was reported to have beneficial effects on high-intensity, exercise-induced oxidative stress, inflammation and performance in a study involving triathletes [166].
Very few studies have investigated the impact of prebiotics on athletes. A multi-strain probiotic/ prebiotic antioxidant intervention for 12 weeks in recreational athletes prior to a long-distance triathlon was shown to reduce plasma endotoxin unit levels and maintain intestinal permeability [167]. Gastrointestinal symptoms such as cramping, diarrhoea, nausea and abdominal pain etc. were also significantly lower in the test group compared with the placebo during the intervention.
In efforts to generate probiotic and prebiotic supplements for physically active individuals, the type and intensity of exercise performed should be taken into account considering that exercise exerts its own effects on the gut microbiota and may even influence the efficacy of the intervention, although this has yet to be investigated. Furthermore, marketing of such supplements should make clear the intended beneficial effects which range from improved immunity against particular illnesses to improved performance. In this regard, double-blind, randomized controlled, multi-centre trials involving larger cohorts of participants with standardised supplementation protocols are required.
In terms of diet, athletes tend to consume more protein than the average population and an early research review in 1984 examining the importance of protein for athletes concluded that athletic individuals should consume 1.8 to 2.0 g of protein/kg of body weight/day which is approximately twice that recommended for sedentary individuals [168]. However, the studies reviewed in Part I of this review [2] in relation to the impact of protein on the gut microbiota clearly showed that dietary source is a critical factor with animal-and plant-derived protein sources generating heterogeneous responses in terms of gut microbiota composition and functionality. Further research is warranted to fully comprehend the consequences of these alterations but Blachier et al. [169] concluded that some caution should be exercised around high protein diets given their effects on the gut microbiota following a review of the topic. In this regard, probiotic supplements geared at the sports industry should be investigated with respect to typical dietary extremes undertaken by athletic individuals.
Stress
In a recent report by a UK charity called the Mental Health Foundation, stress is defined as the "body's response to pressures from a situation or life event" [170]. According to this report almost 74% of people surveyed from a total of 4169 adults felt stressed to the point of being overwhelmed or unable to cope at some point of 2018. Stress can be caused by a variety of events and life situations. Some common stress triggers include workplace related stress, exam stress, and illness. Exam stress can be a major issue for students, negatively affecting sleep patterns and academic performance [171,172]. With regard to work-related stress, a report compiled by the UK Health and Safety Executive stated that over half a million people suffer from work-induced stress, depression or anxiety, resulting in a loss of 15.4 million working days over 2017 and 2018 [173]. Workplace stress is also the major source of stress for adults in the USA [174]. Implementation of work-place policies and procedures is critical in tackling these issues. However, we now know that quality of diet, specific dietary components and supplements can aid in the treatment or prevention of depression, anxiety and stress symptoms [175]. Opie et al. [176] compiled a number of dietary recommendations for the prevention of depression based on current available evidence which included increased consumption of fruits, vegetables, wholegrain cereals, legumes, nuts and seeds, high consumption of omega-3 polyunsaturated fatty acids, limited intake of processed foods and replacement of unhealthy foods with nutritious wholesome foods. In addition, the study recommended following traditional dietary patterns such as the Japanese, Norweigan or Mediterranean diet, the latter of which has been discussed in detail in Part I of this review in terms of its beneficial impacts on the gut microbiota and host health [2]. Thus, the impact of diet on the gut microbiota undoubtedly influences our emotional state. Even in adults without diagnosed mood disorder, gut microbes have been shown to be connected to mood (depression, anxiety and stress) and these relationships differ by sex and are influenced by dietary fibre intake [177]. It is now known that the gut microbiota communicates with the brain along the brain-gut-microbiota axis as evidenced from preclinical and some clinical studies [178]. In Part I of this review [2], we discussed the ability of the gut microbiota to produce neurochemicals including gamma amino butyric acid (GABA), a major inhibitory neurotransmitter in the brain [179], as well as its involvement in serotonin biosynthesis [180] and tryptophan metabolism [181].
The impact of psychological stress on the gut microbiota has been reviewed recently by Karl et al. [182]. To date, most studies have focused on rodent models, many of which have demonstrated a reduction in Lactobacillus following exposure to stress [183][184][185][186][187]. Interestingly, exam stress in humans has been shown to reduce gut lactic acid bacteria [188] and Taylor et al. [177] reported an inverse relationship between Anxiety scale scores and Bifidobacterium in females, while in males an inverse relationship was observed between depression-scale scores and Lactobacillus. Thus probiotic and prebiotic interventions have the potential to impact the gut-brain axis with beneficial consequences for mood and stress behaviours.
Chronic fatigue syndrome is characterised by persistent and relapsing tiredness and 97% of patients report neurological disturbances resulting in a variety of emotional symptoms of which anxiety and depression are the most common [189,190]. In a pilot study involving 39 chronic fatigue syndrome patients intake of L. casei strain Shirota for two months resulted in a significant decrease in anxiety symptoms compared with the control group (P = 0.01) [189]. Lactobacillus and bifidobacteria counts were also significantly increased as a result of probiotic administration. A probiotic mix consisting of L. helveticus R0052 and Bif. longum R0175 was found to relieve psychological distress significantly in healthy human volunteers (n = 55) participating in the clinical trial as measured by the Hopkins Symptom Checklist, the Hospital Anxiety and Depression Scale, the Coping Checklist and urinary free cortisol [191]. Black Depression Inventory scores were reduced in volunteers (n = 20) with major depressive disorder following eight weeks of supplementation with a probiotic mixture consisting of L. acidophilus, L. casei and Bif. bifidum [192]. Several metabolic parameters were also improved including serum insulin levels and homeostasis model assessment of insulin resistance. Interestingly, probiotic administration has also proven beneficial in the case of postpartum symptoms of depression. In this case, 423 women participated in the trial at 14-16 weeks of gestation and consumed L. rhamnosus HN001 daily until six months postpartum [193]. Mothers in the probiotic group reported significantly lower depression scores and anxiety scores compared to mothers in the placebo group in the postpartum period.
In terms of exam stress, consumption of fermented milk containing L. casei Shirota for eight weeks by healthy medical students (n = 24) until the day before examination resulted in significantly reduced salivary cortisol levels and plasma tryptophan levels compared with the placebo group (n = 23) and two weeks after the examination the probiotic group had significantly higher faecal serotonin levels [194]. Furthermore, during the pre-examination period at 5-6 weeks, the rate of subjects experiencing common abdominal and cold symptoms and total number of days experiencing such symptoms was significantly lower in the probiotic group. In rats exposed to water avoidance stress (WAS), the same strain significantly suppressed WAS-induced increases in plasma corticosterone and significantly reduced the number of corticotropin releasing factor-expressing cells in the paraventricular nucleus [195]. In the same study, intragastric administration of the strain, in a dose-dependent manner, stimulated gastric vagal afferent activity.
Modulation of the gut microbiota with prebiotics has also generated promising results in terms of emotional symptoms. For example, consumption of the prebiotic trans-GOS for 12 weeks at 7 g/day (but not 3.5 g/day) significantly improved anxiety scores in individuals suffering from irritable bowel syndrome (IBS) compared with the placebo group [196]. Faecal bifidobacteria were significantly increased in the prebiotic group at 3.5 g/day (P < 0.005) and 7 g/day (P < 0.001). Intake of Bimuno ® -GOS for three weeks significantly reduced salivary cortisol awakening response in healthy volunteers [197]. In the same study, this particular prebiotic resulted in decreased attentional vigilance to negative versus positive information in a dot-probe task. Consumption of short-chain FOS at 5 g/day for 4 weeks significantly increased faecal bifidobacteria in IBS patients and significantly reduced anxiety scores [198].
While these studies highlight the benefits of particular probiotics, prebiotics and their combinations (summarized in Table 1), the beneficial effects rarely impacted every subject in a test group, albeit they impacted enough to generate statistical significance in most cases. One possible reason is the quality and quantity of an individual's baseline microbiota. In the following sections, this becomes very apparent whereby studies have begun to disentangle the disparities between responders and non-responders in terms of gut microbiota composition and behaviour, particularly in response to fibre. Moving forward, it is possible that future interventions will have to be individually-tailored following a comprehensive analysis of an individual's gut microbiome through microbiome testing, the feasibility of which is discussed in Section 6.
Modifying the Microbiota as A Target for Preventing Over/Undernutrition-Potential of Probiotics, Prebiotics and Dietary Fibre
Undernutrition and overnutrition represent forms of malnutrition which manifest due to imbalances in energy and/or nutrient intake [199]. Symptoms of undernutrition include wasting (low weight-for-height), stunting (low height-for-age) and underweight (low weight-for-age) [199]. Overnutrition results from overfeeding, defined as the supply of energy containing nutrients in excess of requirements resulting in fat storage and other undesirable outcomes as discussed in Part I of this review [2]. Overweight and obesity can coexist with undernutrition, a phenomenon described as the "double burden" of malnutrition by the WHO. Thirteen percent of the world's population aged 18 years and over are obese [200]. According to the WHO, 462 million adults are underweight and around 45% of deaths among children under five years of age are linked to undernutrition [199]. Given the link between the gut microbiota and energy regulation in the body, probiotics, prebiotics or fibre may provide effective dietary strategies to restore energy homeostasis through strategic manipulation of the gut microbiota.
Probiotics
Several clinical trials have investigated the impact of probiotics on overnutrition in humans. The studies are discussed in this section and are summarized in Table 2. The bacterium Bif. breve B-3 was used in a randomized, double-blind, placebo-controlled trial involving adult volunteers with BMI ranging from 24 to 30 kg/m 2 [201]. According to the WHO, BMI values from 25.0 to 29.9 represent a pre-obesity nutritional status, while a BMI of 30 falls into class I obesity [205]. In the trial, participants received either placebo (n = 25) or a B-3 capsule (n = 19) (approximately 5 × 10 10 cfu/day) for 12 weeks [201]. Consumption of the B-3 capsule significantly lowered fat mass by week 12. Improvements in some blood parameters related to liver function and inflammation were observed and significant correlations could be made between these and the changed fat mass indicating that Bif. breve B-3 has the potential to improve metabolic disorders. Since some of the participants in this trial were receiving medication for diabetes, hypertension or hyperlipidemia, another randomized, double-blind, placebo-controlled trial was recently performed with B-3 involving 80 pre-obese adults (25 ≤ BMI < 30) without any disorders [202]. While fat area significantly increased in the placebo group at weeks 4 and 8, no changes were observed in the B-3 group. Indeed, body fat mass and percent body fat were significantly lower in the B-3 group at weeks 8 and 12. The probiotic strain slightly decreased triglyceride levels and improved HDL cholesterol from baseline suggesting potential for the strain to reduce body fat in healthy, pre-obese individuals. In overweight and obese adults, six months consumption of Bif. animalis ssp. lactis 420 (10 10 cfu/day) was shown to control body fat mass and reduce weight circumference and food intake [203]. Interestingly, circulating zonulin, a potential marker of intestinal permeability, remained consistently lower in the probiotic group, and changes in zonulin significantly correlated with changes in body fat mass. In addition, changes in high-sensitivity C-reactive protein resembled those of zonulin. Thus, the authors speculate that the probiotic strain exerted its control on body fat mass via circulating zonulin levels and hence gut permeability and by attenuating low-grade inflammation.
Certain probiotic strains have been shown to enhance weight gain to such an extent that they have gained popularity as alternatives to antibiotic growth promoters in animal feed where they are often referred to as direct fed microbials (DFMs) [206]. The mechanisms responsible for this effect include promotion of a favourable gut microbiota, enhanced digestion and absorption of nutrients, altered gene expression in pathogenic microorganisms, and the various mechanistic actions associated with colonisation resistance including immunomodulation [206]. A comparative meta-analysis on the effects of Lactobacillus species on weight gain in humans and animals involving 17 randomized clinical trials in humans, 51 studies on farm animals and 14 experimental models concluded that different Lactobacillus species exert different effects on weight change and these effects are host-specific, however, L. acidophilus administration results in significant weight gain in humans and animals [207]. A more recent systematic review assessing the potential of probiotic diets to significantly influence weight change in obese and non-obese individuals revealed that the effects were species and strain-specific [204]. For example, while L. gasseri BNR17 reduced weight gain, L. gasseri L66-5 promoted it. A systematic review on the effects of probiotics on child growth involving 12 studies, 10 of which were randomized controlled trials, revealed that probiotics have the potential to improve child growth in children in developing countries and in under-nourished children [208].
Kwashiorkar is a form of severe acute malnutrition (SAM) resulting from inadequate nutrient intake coupled with additional environmental insults [209]. By studying Malawian twin pairs during the first three years of life, of which half of the twin pairs remained well-nourished, 43% became discordant and 7% manifested concordance for acute malnutrition, Smith et al. [209] revealed the gut microbiota as a causative factor since the kwashiorkor microbiome with Malawian diet induced marked weight loss when transplanted to mice. Million et al. [210] reported a dramatic depletion of obligate anaerobes in SAM. Indeed, while Enterococcus faecalis, E. coli and Staphylococcus aureus were consistently enriched in cases of SAM, several species of the following families were consistently depleted: Bacteroidaceae, Eubacteriaceae, Lachnospiraceae, and Ruminococcaceae, along with dramatic depletion of Methanobrevibacter smithii. Overall, total bacterial number was decreased and faecal redox potential increased. Such microbes have been termed the healthy mature anaerobic gut microbiota (HMAGM) [211]. Indeed, the first step in gut microbiota alterations associated with SAM is early depletion of the pathogen inhibitor Bif. longum, followed later on by absence of the HMAGM resulting in deficient energy harvest, immune protection and vitamin biosynthesis which are associated with malabsorption, systemic pathogen invasion and diarrhoea [211]. In this regard, Alou et al. [212] used a combination of culturomics and metagenomics to analyse the stool samples of healthy children and kwashiorkor patients to identify potential probiotics to treat SAM. This resulted in the identification of 12 species in healthy children which were absent in kwashiorkor patients. These 12 potential probiotics represent an array of possible functions including antibacterial potential, polysaccharide fermentation, butyrate production, antioxidant potential or simply common members of the gut microbiota from healthy humans and healthy breast-fed infants. The authors propose that this cocktail of probiotics offers a defined, reproducible, safe and convenient alternative to faecal transplantations for the treatment of SAM in children.
Furthermore, it is important to mention that probiotic-mediated beneficial effects may not require live cells. Indeed, Plovier et al. [10] reported that a purified membrane protein from Akkermansia muciniphila, or the pasteurised bacterium, improved metabolism in diabetic and obese mice. Indeed, pasteurisation enhanced its capacity to reduce dyslipidemia, fat mass development and insulin resistance. This finding suggests that the beneficial effects of difficult-to-cultivate microorganisms may still be harnessed for therapeutic use by using dead or injured cells.
Prebiotics
Parnell and Reimer [213] investigated the impact of daily oligofructose supplementation (21 g/day) in healthy adults with BMI > 25 for 12 weeks in a double-blind, placebo-controlled trial. Compared to the control group, which experienced an increase in weight gain (0.45 ± 0.31 kg), the prebiotic group experienced a 1.03 ± 0.43 kg loss in body weight. Glucose regulation was also improved in the prebiotic group who self-reported a reduction in caloric intake. The authors suggest that the suppression in ghrelin expression and enhanced peptide YY (PYY) expression observed in the prebiotic group partly contributes to the reduction in energy intake. In overweight/obese children, aged 7-12 years, daily consumption of 8 g oligofructose-enriched inulin for 16 weeks significantly reduced body weight z-score (reduced by 3.1%), percent body fat (2.4% reduction), and percent trunk fat (3.8% reduction) compared to children who received the placebo who experienced a slight increase in all 3 parameters [214]. The prebiotic group also showed a significant decrease in IL-6 levels from baseline (15% lower), while the placebo group showed an increase (by 25%). Serum triglycerides were also significantly reduced (by 19%) in the prebiotic group. Gut microbiota analysis revealed significant increases in Bifidobacterium species and decreases in Bacteroides vulgatus in the prebiotic group. Levels of primary bile acids increased in the placebo group but remained unchanged in the prebiotic group over the 16-week period. However, twelve weeks of oligofructose consumption at the same quantity in obese and overweight children, aged 7-11 years (8 g prebiotic/day) and aged 12-18 years (15 g/day) in a double-blind placebo-controlled trial had no impact on body weight and body fat [215].
Consumption of inulin-type fructans by obese women at 16 g/day for 3 months led to gut microbiota changes which included an increase in Bifidobacterium and F. prausnitzii, both of which negatively correlated with serum lipopolysaccharides [216] The prebiotic also decreased Bacteroides intestinalis, Bac. vulgatus and Propionibacterium which was associated with a slight decrease in fat mass and with phosphatidylcholine and plasma lactate levels. The authors suggest that the modest changes in host metabolism indicate a role for inulin-type fructans to support dietary advice with regards obesity and related metabolic disorders. In a later randomized, double-blind, parallel, placebo controlled trial, obese women consuming the same prebiotic at the same concentration for three months had significantly lower total SCFAs, acetate and propionate (that positively correlated with BMI), as well as significantly lower fasting insulinemia and homeostasis model assessment (indicator of insulin resistance) compared to the placebo group [217]. The following species were significantly increased in the prebiotic group at the end of the three months, Bifidobacterium adolescentis, Bifidobacterium pseudocatenulatum and Bif. longum, the latter of which negatively correlated with serum lipopolysaccharide and endotoxin.
Synbiotics have shown some promise towards improving growth outcomes in healthy and malnourished children, although there is a paucity of clinical trials in this area. For example, Malawian children, aged 5 to 168 months, suffering from SAM who received ready-to-use-therapeutic-food (RUTF) with a synbiotic for approximately 33 days (median) in a double-blind efficacy randomized controlled trial showed a trend towards reduced outpatient mortality when compared to those who received RUTF alone (P = 0.06) [218]. Despite this, the study showed no differences between both groups in terms of nutritional cure, weight gain, time to cure, and prevalence of clinical symptoms including respiratory issues, fever and diarrhoea. One year consumption of a probiotic-(Bif. lactis, 1.9 × 10 10 cfu/day) and prebiotic-fortified milk by Indian preschool healthy and stunted children resulted in increased weight gain (0.13 kg/year, P = 0.02) and reduced risk of being anemic and iron deficient (P = 0.01) compared to children receiving control milk [219]. A synbiotic consisting of Bif. longum, L. rhamnosus and inulin and FOS fed to healthy 12-month-old toddlers in milk for one year significantly improved weight gain compared to those receiving control milk (difference of 0.93 g/day) [220]. The weight gain resulted in a change in z-score weight-for-age closer to the WHO Child Growth Standard. Fecal lactobacilli and enterococcal counts were also significantly increased in the synbiotic group between 12 and 16 months. A six-month synbiotic supplementation to children with failure to thrive, a common problem in children in underdeveloped countries, resulted in a significant increase in weight gain compared to control children [221]. Indeed, by the end of the six-month trial, the mean weight of the control group was 11.760 ± 0.17 kg which increased from 10.75 ± 0.16 kg initially, while the mean weight of the synbiotic group was 12.280 ± 0.190 kg, increasing from 10.25 ± 0.2 kg initially. More clinical trials investigating the impact of combinations of probiotics and prebiotics on undernutrition are required. An emphasis on gut microbiota changes should help to delineate mode of action and identify the most suitable formulations for specific conditions. The studies discussed in this section are summarized in Table 3.
Fibre
In order to fully appreciate the impact of fibre on the gut microbiota it is important to be aware of the different types and their properties and in this respect several classification systems have been proposed. That proposed by Ha et al. [13] classifies fibres into those that are "microbially degradable" and those that are "microbially undegradable." Combining microbial degradability with the other main properties of fibre, including viscosity and water solubility, Bozzetto et al. [222] presented four main groups based on the concepts of McRorie et al. [223]: (1) non-viscous, insoluble, non-fermentable fibre, e.g., bran, cellulose, hemicelluloses, lignin; (2) non-viscous, soluble, fermentable fibre, e.g., inulin, dextrin, oligosaccharides, resistant starch; (3) viscous, soluble, fermentable fibre, e.g., pectin, β-glucan, guar gum and glucomannan; (4) viscous, soluble, non-fermentable fibre, e.g., psyllium, methylcellulose. Different fibres can exert different effects on the gut microbiota and hence have different physiological consequences for the host (Table 4). In humans, increased fibre intake has been shown to improve certain metabolic parameters associated with obesity and its co-morbidities, such as serum cholesterol levels, particularly in conjunction with energy-controlled dietary regimes. For example, in overweight and obese adults (BMI = 25 to 45), daily consumption of two portions of whole-grain ready-to-eat oat cereal (3 g/day oat β-glucan) as part of a reduced energy dietary programme (~500 kcal/day deficit) with regular physical activity for 12 weeks proved more effective than an energy-matched low fibre diet for reducing LDL cholesterol levels (P = 0.005), total cholesterol (P = 0.038), and non-HDL cholesterol (P = 0.046) [224]. While weight loss did not differ between groups, there was a significant difference in waist circumference as a result of eating the high fibre diet, resulting in a loss of~3.3 cm versus only~1.9 cm on the low fibre diet (P = 0.012). Daily consumption of whole grain wheat bread by Japanese subjects (BMI ≥ 23) for 12 weeks resulted in a significant reduction in visceral fat area (−4 cm 2 ) which was not observed in subjects consuming refined white bread [225]. Similarly, whole grain wheat consumption in conjunction with an energy restricted diet for 12 weeks by post-menopausal women resulted in a greater reduction in body fat percentage (−3.0%) compared to consumption of refined wheat (−2.1%) [226]. While serum total and LDL cholesterol increased by~5% in the refined wheat group (P < 0.01), they did not change in the whole wheat group. Body weight decreased significantly for both groups but did not differ between groups. Consumption of the recommended intake levels of dietary fibre and fat in obese and overweight (BMI = 30.7) pregnant women positively associated with gut microbiota richness whereas high fat with low fibre and low carbohydrate consumption associated with significantly lower gut microbiota richness [227]. The richer gut microbiota correlated with lower maternal inflammatory status. In another study involving overweight and obese pregnant women, low fibre intake was found to increase the genus Collinsella in the gut microbiota, which is positively associated with circulating insulin [228]. The low fibre diet was also associated with a gut microbiota favouring lactate fermentation, whereas the high fibre diet was associated with SCFA producing bacteria.
In a study investigating the existence of a correlation between body weight change over time and gut microbiome composition involving 1632 healthy females from TwinsUK, (national register of adult twins for studying age-related complex traits and disease), Menni et al. [229] found that gut microbiota diversity was negatively associated with long-term weight gain, but positively associated with fibre intake independent of calorie intake or other confounders.
These studies indicate that dietary fibre has a role to play in the control of obesity and its related comorbidities. Further research is needed in order to fully comprehend the impact of the different dietary fibres on the gut microbiota and to delineate the subsequent consequences for host metabolic health. However, the human microbiota is characterised by extensive inter-individual variation, with genetics a significant contributing factor, and it is now becoming clear that the composition of an individual's microbiota will determine how it responds to dietary components, in particular fibre. In this regard, understanding what 'responding' and 'non-responding' microbiota look like is essential as well as how to convert a 'non-responder' into a 'responder.'
The Microbiota Can Be Used as a Biomarker to Predict Responsiveness to Specific Dietary Consitituents, For Example, Fibre
We know that diet can directly or indirectly influence the gut microbiota, and studies have shown that this response can be rapid with changes observed within 1-3 days [7,230,231] when the modifications are "large" [232] such as the all-animal or all-plant products diet [7], or large increases/decreases in fibre [230,231]. However, inter-individual variance is often much greater than the variance introduced as a result of diet [233]. Indeed, an individual's baseline microbiota and health status at the beginning of an intervention influences the extent of potential changes to the microbiota and subsequently the host, and studies are showing that baseline microbiota consisting of responders and non-responders to dietary interventions as well as effectors of host responses (both of which may be the same microorganism or a consortia of microorganisms), is generally linked to habitual dietary trends [6,230,234]. In this regard, an individual's baseline microbiota harbors predictive potential with regard to the effect of dietary constituents on the host and this has been proven particularly in case the of fibre.
Salonen et al. [233] reported that high microbiota diversity before dietary intervention with resistant starch or non-starch polysaccharides associated to low dietary responsiveness of the microbiota. Similarly, obese individuals with low microbial gene richness (low microbiota diversity) in their initial faecal microbiota showed a greater microbiota response in terms of gene richness to a weight loss diet compared to obese individuals represented by high microbial gene richness in their initial microbiota [234]. However, individuals with high gene richness showed a more marked improvement in systemic inflammation and adipose tissue following the intervention suggesting that gene richness could provide a predictive tool towards intervention efficacy in relation to inflammatory variables. Tap et al. [235] also reported that low OTU microbiota richness was associated with a greater microbiota change over time following a large increase in dietary fibre (40 g/day) in healthy adults for 6 weeks whereas high OTU microbiota richness at baseline proved more stable upon high dietary fibre intervention and was associated with high proportions of Prevotella and Coprococcus species and a higher Prevotella:Bacteroides ratio.
Indeed, a number of studies have reported associations between the abundances or lack of abundance of specific species and the responsiveness of the microbiota and the host to dietary intervention. Two overweight men who failed to ferment significant amounts of resistant starch during a 10-week intervention involving a total of 14 participants showed very low numbers of R-ruminococci (relatives of Ruminococcus bromii) and were also non-methanogenic [231]. The gut microbiota of healthy subjects exhibiting improved glucose tolerance following three days of consumption of barley kernel-based bread was enriched with Prevotella copri and after the intervention exhibited a higher Prevotella:Bacteroides ratio compared to non-responders [236]. However, in a follow-on study, the researchers failed to stratify metabolic responders and non-responders based on Prevotella and Bacteroides abundance at baseline, but those with the highest Prevotella:Bacteroides ratio at the beginning of the study displayed improved appetite sensations (less hunger and less desire to eat), reduced insulin responses and reduced inflammatory markers compared to those with the lowest Prevotella:Bacteroides ratio independent of the intervention suggesting that the higher Prevotella:Bacteroides ratio is favourable [237]. Oligotyping of 16s rRNA gene sequencing data, which permits resolution to species level and below, enabled De Filippis et al. [238] to identify distinctive correlation patterns between Prevotella and Bacteroides oligotypes with dietary components and metabolome using faecal samples from omnivore and non-omnivore subjects. The authors concluded that an indiscriminate association between a whole genus and a specific dietary pattern may result in an oversimplified vision of correlations between gut microbiota and diet, failing to take diversity within a genus or even a species into account. Based on three independent cohorts of obese adults from Finland [239], Belgium [216] and Britain [231] involved in different dietary interventions (fibre/prebiotics/weight loss diet) to improve metabolic health, Korpela et al. [240] reported that baseline microbiota of non-responders (in terms of gut microbiota changes) was characterised by average abundances of two Firmicutes species, Eubacterium ruminantium and Clostridium felsineum, which were present at very low or very high baseline abundances in responders. Furthermore, the presence of high levels of Clostridium sphenoides, a common gut inhabitant and Firmicutes member, in the faecal microbiota of obese individuals before dietary invention was associated with a decrease in cholesterol following intervention while obese individuals with abnormally low abundance of this species did not benefit in terms of cholesterol levels. Interestingly, C. sphenoides abundance was not associated with absolute levels of cholesterol and so may not be directly involved in cholesterol metabolism. Dietary fibres were shown to promote a select group of SCFA-producing bacteria in patients with type 2 diabetes [241]. However, when present at greater abundance and diversity, the authors reported an improvement in haemoglobin A1c levels (glycosylated haemoglobin), partly due to increased glucagon-like peptide (GLP)-1 production, and diminution of producers of metabolically detrimental compounds. In a randomized controlled trial investigating the impact of increased intake of whole grains versus fruits and vegetables on the gut microbiota in obese and overweight individuals, Kopf et al. [242] reported that both treatments induced individualised changes but that baseline levels of Clostridiales correlated with the magnitude of change in lipopolysaccharide binding protein which is indicative of change in inflammatory state.
The influence of long-term dietary habits, in particular habitual fibre intake, on gut microbiota responsiveness to specific interventions is now becoming apparent. In a randomized, double-blind, placebo-controlled, cross-over study, Healey et al. [243] classified participants as either high or low dietary fibre consumers prior to three weeks of daily supplementation with an inulin-type fructan prebiotic. The high dietary fibre group revealed significant increases in the relative abundances of Bifidobacterium and Faecalibacterium along with significant reductions in Coprococcus, Dorea and Ruminococcus (Lachnospiraceae family). The gut microbiota of the low dietary fibre group was less responsive showing only an increase in Bifidobacterium. Based on an in vitro approach, Brahma et al. [244] investigated the impact of donor dietary pattern on the fermentation properties of whole grains and brans. Although the samples were taken from donors with similar energy intakes, they differed in terms of their intakes of several beneficial nutrients. Samples from G1 subjects were representative of the superior diet while samples from G2 subjects represented the inferior diet. The G1 microbiota showed higher diversity and greater abundances of beneficial microbes including Faecalibacterium and was better equipped to metabolise the complex carbohydrates than the microbiota from G2 subjects, resulting in greater butyrate production, while the microbiota of G2 subjects produced more acetate and propionate. In another study, Griffin et al. [245] reported that Americans consuming unrestricted diets had less diverse faecal microbiota (termed AMER) compared to the microbiota of individuals consuming calorie-restricted plant-rich diets (termed CRON) and the AMER microbiota lacked many bacterial lineages representative of CRON. Interestingly, transplanting AMER microbiota into gnotobiotic mice and feeding them the CRON diet resulted in community configurations but which were weaker than their CRON counterparts. Placing the AMER communities into a model meta-community composed of several CRON communities resulted in the dispersal of microorganisms between the coprophagic animals which enhanced the reconfiguration of the AMER microbiota in response to the CRON diet and resulted in changes in host metabolic features, all driven by an influx of CRON dietary practice-associated taxa. This artificial metacommunity model provides an opportunity to mine multiple human microbiota for microbial reporters of responses to diet as well as effectors of host response. However, Sonnenburg et al. [246] showed that while microbiota changes in mice resulting from a low MAC diet could be reversed within a single generation by reintroducing MACs, the progressive loss in diversity over multiple generations consuming the low MAC diet could not be reversed with the reintroduction of dietary MACs alone. Importantly, restoration of the microbiota to its original state required the reintroduction of lost taxa along with dietary MACs.
These studies indicate that the microbiota has the potential to serve as an effective biomarker to predict responsiveness to specific dietary constituents with most if not all studies to date focusing on fibre/complex carbohydrates. The responsiveness of the gut microbiota (including responders and effectors of host responses) appears to be largely dependent on baseline microbiota diversity and the specific microbes present or absent at baseline, the latter of which can have a profound influence on the poorly diverse microbiota. Indeed, a highly diverse microbiota as a result of long-term, healthy dietary practices involving adequate fibre consumption remains stable in the face of fibre intervention, is rich in both responders and effectors and is capable of reaping the metabolic benefits for the host. A gut microbiota with low diversity can benefit from dietary intervention but only if the specific responder and effector microbes are actually present even at low abundances. Indeed, Healey et al. [243] showed that lower baseline bifidobacteria concentrations in subjects correlated with a more pronounced bifidogenic response following prebiotic intervention. But poor dietary practices and insufficient dietary fibre intake over a long-term period may actually result in the extinction of beneficial microbial lineages. In this case, dietary intervention with fibre/prebiotics will fail to mitigate a beneficial outcome for the host and will presumably require the addition of specific taxa along with their corresponding MACs in the form of synbiotics. However, despite the presence of resistant-starch degrading microorganisms at low abundances in a subset of healthy young adults, the consumption of resistant starch failed to increase their abundances [247]. Such a phenomenon may be due to the presence of antagonistic microorganisms to the resistant starch-degrading microbes which the authors suggest could require targeted removal prior to intervention and could include the presence of bacteriophages. Another form of dietary fibre may be more suited to the particular microbiota in these individuals, or the synbiotic approach may be required. Clearly, more studies are required to determine gut microbiota responses to specific dietary components, i.e. targeted microbiota dietary intervention, using a top-down approach of gut microbiota analysis from diversity levels to species and even strains, to their gene content and functionality (metabolome, transcriptome, proteome), alongside host clinical and genetic data, for input into machine learning algorithms designed to identify correlations, which subsequently can be investigated for causal evidence, in order to accurately predict individualised responses for maximized health (Figure 1). Indeed, machine learning models for predicting disease from metagenomic datasets have already been developed [248]. Thus, already the potential of the gut microbiota as a biomarker of responsiveness to diet is tangible with opportunity for precision microbiomics beginning to emerge. However, 'causal evidence,' is a critical factor in this workflow and in Section 7 we provide guidelines for evaluating the scientific validity of evidence for providing personalised microbiome-based dietary advice.
Opportunities for Precision Microbiomics
Understanding how the microbiome responds to dietary constituents and the subsequent clinical consequences for the host can be used in the design of precision-tailored diets which ensure maximal nutritional/functional outcome for the host. However, to date only a handful of studies are available to provide specific examples of precision microbiomics in nutrition. For example, composition and functional alterations observed in the faecal metagenome of 145 European women with type 2 diabetes were integrated into a mathematical model that enabled accurate prediction of type 2 diabetes based on metagenomic profiles [249]. The model was capable of identifying women with a diabetes-like metabolism among a group with impaired glucose tolerance. However, the model failed to work on a Chinese cohort revealing that discriminant metagenomic markers for type 2 diabetes differ between Chinese and European cohorts and should be age-and geography-specific. Another example is the direct modulation of colonic microbiota with short-chain GOS to metabolize lactose in lactose intolerant individuals [250]. In this case, GOS failed to elicit a bifidogenic response in three out of 30 participants; however, an increase in bifidobacteria was associated with a decrease in pain and cramping revealing its significance in terms of symptoms. Cho et al. [251] reported that high TMAO producers amongst healthy male adults (≥ 20% increase in urinary TMAO in response to beef and eggs) had significantly more Firmicutes than Bacteroidetes and significantly less microbiota diversity. While the results are based on a short-term feeding study, longer-term feeding trials involving larger cohorts coupled with microbiome data could enable accurate prediction of a high TMAO-producing microbiota and subsequent strategies to alter it. Maintaining normal blood glucose levels is critical for the prevention and control of metabolic syndrome [252] but blood glucose levels are rising at an increased rate as evidenced by the prevalence of prediabetes and impaired glucose tolerance in the general population [253]. Food choices that induce normal PPGRs are critical for controlling blood glucose levels which are in essence controlled by dietary intake. However, until recently, no method existed to predict PPGRs to food. Over the period of a week Zeevi et al. [14] continuously monitored PPGRs in a cohort of 800 healthy and prediabetic individuals in Israel in response to identical meals and noted high variation. They also measured physical activity, anthropometrics, blood parameters, gut microbiota composition and function, as well as self-reported lifestyle behaviours. This multidimensional data was integrated into a machine learning algorithm that was capable of accurately predicting personalised PPGRs and was further validated in an independent cohort of 100 people. Interestingly, the highly variable PPGRs in individuals associated with multiple person-specific microbiome and clinical factors, and tailored diets based on predictions from a machine learning algorithm not only significantly improved PPGRs but also resulted in consistent alterations to the gut microbiota. This study was recently replicated in a different population (from the USA) [254]. Based on a randomized cross-over trial involving 20 healthy subjects comparing the effects of consuming either traditionally made sourdough leavened whole-grain bread or industrially made white bread for one week each, Korem et al. [255] found that the glycemic response varied significantly in response to the different bread types and the type of bread which induced the higher glycemic response in each person could be predicted using microbiome data recorded just prior to the intervention. However, the exact mechanisms involved in the gut microbiota and glycemic control remain to be elucidated.
Non-calorific artificial sweeteners (NAS) were developed to provide sweet taste to foods without the high-energy content of calorie-rich sugars. However, Suez et al. [256] reported that long-term consumption of commonly used NAS in humans significantly and positively correlated with several metabolic syndrome-related clinical parameters including measures of central adiposity, higher fasting blood glucose, higher haemoglobin A1c%, and higher measures of impaired glucose tolerance. Moreover, statistically significant positive correlations were found between multiple taxonomic entities and long term NAS consumption including the Enterobacteriaceae family, the Deltaproteobacteria class and the Actinobacteria phylum. In order to determine if the relationship between blood glucose control and NAS consumption was causal, Suez et al. [256] followed seven healthy volunteers (who did not normally consume NAS in any form) who consumed the FDAs maximum acceptable daily intake of saccharin (5 mg/kg body weight) for 5 days. Four of the seven individuals developed significantly poorer glycemic responses 5-7 days after NAS consumption. Interestingly, the microbiome of NAS responders was distinct from NAS non-responders both before and after NAS consumption and the microbiome of NAS non-responders featured minimal changes after the NAS intervention in contrast to the pronounced compositional changes observed in NAS responders. Transferring day 7 stools from NAS responders into normal germ-free mice induced significant glucose intolerance compared to mice transplanted with day 1 stool (before intervention) from the same NAS responders. Similarly, day 7 stools from non-NAS responders induced normal glucose tolerance in mice. Furthermore, germ-free mice transplanted with responders' day 7 stool replicated some of the dysbiosis observed in humans including a 20-fold increase in Bacteroides fragilis (order Bacteroides) and Weissella cibaria (order Lactobacillales) and a 10-fold decrease in Candidatus Arthromitus (order Clostridiales) [256]; this over-representation of Bacteroides and underrepresentation of Clostridiales has been previously associated with type 2 diabetes in humans [249,257]. Thus, humans exhibit a personalised response to NAS as a result of their microbiota composition and functionality which as the authors state strongly suggests that other nutritional responses may be driven by "personalised functional differences in the microbiome," and the resulting opportunity for "personalised nutrition" may lead to "personalised medical outcome." Wang et al. [258] recently described the bacteriostatic effect of non-nutritive sweeteners such as sucralose and stevia in mice.
Commercialisation of Microbiome Testing
Gut microbiome testing is currently commercially available and takes advantage of the reduced costs associated with next-generation sequencing technologies. Table 5 provides a non-exhaustive list of these companies. While several companies provide doctor-ordered stool tests (e.g., the SmartGut test provided by Ubiome and Genova Diagnostics, USA, etc.) the majority of the companies listed in Table 5 provide direct-to-consumer tests.
The sequencing method used to analyse the gut microbiome has a significant impact on cost, given that companies providing 16s rRNA gene sequencing are generally cheaper (approximately $100/test) than those that use whole genome sequencing and metatranscriptomics (approximately $350 to $400/test). However, the latter two also provide information regarding the metabolic potential of the gut microbiome, providing insights into microbiota-derived metabolites related to health and disease.
Regulation of commercial microbiome testing in specific markets remains unclear and the need for a clear global regulatory direction and guideline is required to advance testing and thus its impact on human health. Furthermore, some commercial laboratories will often modify/optimise their sequencing methods which can lead to inconsistencies when comparing results from different companies. Of course, this has potential risks with regards to interpretation and transferability of results, highlighting the need to develop a set of guidelines to assure consistency in the way different laboratories operate. The Microbiome Quality Control project (MBQC) has been set up to govern such guidelines (https://www.mbqc.org/).
Many companies provide easy-to-understand, detailed reports regarding gut microbiota diversity, microbial members including beneficial and pathogenic microorganisms which influence health and disease, a comparison of the individual's gut microbiome to other participants, and personalised dietary, supplemental and lifestyle advice. Importantly, such tests are not diagnostic given the current level of evidence that is available regarding the gut microbiome and it is essential that consumers availing of these tests are aware of this fact and seek medical advice if experiencing symptoms of any kind rather than self-diagnosing and self-healing via the provided advice which at most can only serve as a personalised guideline. Indeed, many medical professionals and microbiome experts remain dubious about the utility of these direct-to-consumer tests due to the lack of concrete evidence linking particular microbiota signatures to specific host phenotypes including disease, disease risk and potential treatment responses. Over-extrapolation of results on the side of the service provider and over-interpretation of results on the side of the consumer are also risk factors. Indeed, over-interpretation of results on the side of the consumer may lead to unnecessary anxiety and the adaptation of dietary alterations as well as intake of supplements which may do more harm than good or have no effect at all. In addition, we have already mentioned that whole genome sequencing is more informative than 16s rRNA gene sequencing and this is something the consumer should be made aware of. For example, subgroups A, B and C of F. prausnitzii are not discriminated from each other with 16S rRNA gene sequencing, but are identified by metagenomic and metatranscriptomic sequencing. Most commercially available 16S rRNA-based tests report on the relative abundance of F. prausnitzii with no differentiation between F. prausnitzii subgroups A, B and C. This may lead to a misleading interpretation of results as it has recently been found that different subgroups produce butyrate at different levels and have been linked to different disease states. For example, F. prausnitzii A produces comparatively lower levels of butyrate and at high levels has been linked to colon cancer, appendicitis and inflammatory conditions. Similarly, F. prausnitzii B also produces comparatively lower levels of butyrate and at high levels has been linked to atopic dermatitis. Conversely, F. prausnitzii C has been shown to produce the highest level of butyrate of all the three subgroups and also produces an anti-inflammatory protein called MAM. As a result, higher levels of F. prausnitzii C are thought to be anti-inflammatory whereas low levels have been linked to Crohn's disease, ulcerative colitis, colon cancer, type II diabetes and chronic fatigue syndrome [259]. This highlights the importance of understanding the relative abundance of F. prausnitzii subgroups A, B, and C when drawing conclusions about butyrate production and association with disease.
Furthermore, in terms of time, the tests can take anything from two to eight weeks before the consumer receives the results such that gut microbiota changes may have taken place within this time frame depending on the consumer's circumstances (e.g., dietary changes, medical treatment, antibiotic administration, etc) and hence the results may prove meaningless by the time of receipt, a factor the consumer must be made aware of. Indeed, regular microbiome testing would prove more effective but, at this moment in time, may prove cost-prohibitive for most consumers. Despite this, some companies offer discounts for regular microbiome testing and the results of such tests will provide essential data regarding the impact of personalised nutritional advice (assuming the consumer follows it) on the gut microbiome along with its long-term effects. While the analysis is capable of providing an insight into the necessary dietary recommendations to achieve a 'healthy' gut microbiome, one must question the utility of this information at present given that we have yet to define a universal 'healthy' gut microbiome, which may not be possible given the suspected level of specificity that can be associated with an individual's 'healthy' gut. Indeed, a food questionnaire would generate sufficient information to provide personalised dietary recommendations which should subsequently improve the status of the gut microbiome. However, in its favour, gut microbiome testing represents a useful tool in its present state to increase awareness of the gut microbiome and its influence on overall health and the more tests that are performed, the greater the opportunity to move towards precision microbiomics by advancing our current knowledge base.
In relation to future testing, it is also important to consider the importance of host genetics/gene expression, and considering how host genetics can impact the gut microbiome, and be used as proxy for providing personalised dietary advice. For example, numerous genetic variations have been linked to influencing a range of microbiota [260,261] as well as beta-diversity [260]. However, other factors such as diet may mask the effect of genetics on the microbiome making it difficult to predict changes in phenotype without assessing an individual's diet and including this in the interpretation. An example whereby the assessment of host genetics in a commercial setting shows utility in providing personalised microbiome-based recommendations is the association between FUT2 genotype/secretor status and the expression of fucosyllated glycans on host cell surfaces and in secretions [262]. Common FUT2 polymorphisms have been shown to influence the expression of fucosyltransferase2, an important enzyme associated with the production of the dominant human milk oligosaccharide, 2'-fucosyllactose (2'FL), and other fucosyllated oligosaccharides. Lactating mothers who possess the inactive form of the FUT2 polymorphism (approximately 20% of the Caucasian population) do not contain 2'-FL in their breast milk. The absence of this gene (non-secretors) has been associated with delayed establishment of Bifidobacterium spp. in the infant gut and increased risk of diabetes, alcohol-induced pancreatitis and Crohns disease. Interestingly, non-secretor status has also been associated with resistance to infectious disease such as norovirus and rotavirus infection and Helicobacter pylori colonization. As such, genotyping for FUT2 secretor status allows for the identification of infants and adults that can benefit from treatment with probiotics, prebiotics and other dietary components. Hence, future commercial tests may offer genetic testing as a way to help consumers, such as lactating mothers in the case of FUT2, make better choices and optimise health outcomes.
Guidelines for Evaluating the Scientific Validity of Evidence for Providing Personalised Microbiome-Based Dietary Advice
As noted in the previous section, there are risks associated with the rapid commercialisation of microbiome testing including inconsistencies in results between laboratories as well as over-extrapolation of results on the side of the service provider and over-interpretation of results on the side of the consumer. To mitigate such risks, guidelines for evaluating the scientific validity of evidence for providing personalised microbiome-based dietary advice need to be developed. A similar set of guidelines has been proposed for genotype-based dietary advice [263] providing a useful template from which to start.
The guidelines proposed by Grimaldi et al. [263] provide a framework for assessing the strength of the evidence and scientific validity of gene(s) × diet interactions which help determine the 'actionability' of the interaction. Such guidelines can be modified and applied to precision nutrition and the microbiome. These guidelines use the ACCE model (Analytical and Clinical Validity, Clinical Utility and Ethics) as the starting point according to which a medical genetic test should fulfil requirements regarding: i.
Analytical validity-a measure of the accuracy of the genotyping. ii. Scientific validity-concerns the strength of the evidence linking a genetic variant with a specific outcome. iii. Clinical utility-the measure of the likelihood that the recommended advice or therapy will lead to a beneficial outcome beyond the current state of the art. iv. Ethical, legal and social implications that may arise in the context of using the test.
For conducting an assessment according to precision nutrition and the microbiome, analytical validity should be relatively straightforward as projects such as MBQC have been set up to assure consistency by applying standard operating procedures and best practices in how laboratories operate in the microbiome testing field. Similarly, the requirements for scientific validity could also be fulfilled, whereby scientific validity in the context of precision nutrition and the microbiome refers to the strength of the evidence for an interaction between a specific microbiome biomarker or a microbial enterotype and a dietary component or a specific health outcome, disease or risk factors for disease.
The requirements for clinical utility, on the other hand, may be harder to fulfil as it has strict criteria in the medical sense, demanding strong evidence that a given therapy 'will lead to an improved health outcome' [264,265]. A caveat is that defining an 'improved health outcome' due to microbiome-based advice in a generally healthy person is very hard. Additionally, we are still unsure as to what constitutes a 'healthy' microbiome, making it even harder to define an 'improved health outcome'. With regard to the ethical, legal and social implications, as with personalised nutrition, existing rules must be developed for microbiome testing to ensure that the fundamental rights of the consumer are protected and legislation should be put in place to identify direct-to-consumer tests which provide non-scientifically validated information and advice [264,265]. Ethical, legal and social implications of the human microbiome have been discussed elsewhere [266,267]. Therefore, as per the framework proposed by Grimaldi et al. [263], a guideline that evaluates the scientific validity and evidence for providing personalised microbiome-based dietary advice should focus primarily on the assessment of scientific validity, an essential requirement before any nutrition advice should be given.
Proposed Framework for Scientific Evidence Assessment
The scientific validity assessment criteria for microbiota-based dietary advice within an adapted framework would include (i) study design and quality, (ii) biological mechanism and plausibility and (iii) the probability term (Table 6). Whilst the assessment of study design and quality as well as biological mechanisms and plausibility are commonly used to assess the value of scientific evidence, the use of a 'probability term' is not as common. The probability term is the overall judgement of the evidence provided and is based on the European Food Safety Authority (EFSA) guidance document expressing uncertainty in scientific assessment [268] to help describe the likelihood of an outcome where firm conclusions are difficult to draw. A probability term makes it possible for an 'evidentiary conclusion based on many papers, each of which may be relatively weak, to be graded as 'moderate' [probable] or even 'strong' [convincing], if there are multiple small case reports or studies that are all supportive with no contradictory studies". For more detail on the definition and use of each probability term, study design and quality, biological plausibility as well as examples of the type criteria that can be used to assess gene/microbiome x diet interactions, please refer to Grimaldi et al. [263]. Table 6. Proposed framework for scientific evidence assessment of microbiota-based dietary advice.
(i) Study Design and Quality Considerations: Type of microbe/diet/outcome interaction
•
A relatively "simple" interaction with a single strain of bacteria, measuring the outcome (e.g., glucose response) over a number of weeks can give more confidence of "cause and effect." • A "complex" study may involve a prebiotic + several strains administered over several weeks or months to assess the weight management response and is likely to have higher inter-individual variation and therefore may be harder to establish cause/effect: is it the overall diet, or the microbes, or both, having an improved effect?
The type of interaction also determines the confidence and the numbers of times a study should be repeated in order to have a level of confidence. However, there are pros and cons, and all types of studies are required. A simple or "direct" interaction gives confidence but the overall health benefit (e.g., short-term glucose) will be limited. A "complex" interaction is harder to give confidence but comes with a better overall health benefit (e.g., long-term weight management).
Levels of Interaction
• A 'direct' interaction could be administration of a bacterial strain affecting glucose response. • An intermediate interaction: specific prebiotics, fibre etc. with any type of response thus harder to determine if it is the nutrients or microbe growth, or both. • An indirect interaction is the case where a mechanistic interaction between the microbe variant and the dietary component on a health biomarker, including disease, is affected to some extent but is also influenced by many other possibly unknown processes, and it may take years for symptoms to manifest. This type of interaction may not be fully explained physiologically or may be only demonstrated statistically. (ii) Biological Mechanism and Plausibility Considerations: Biological plausibility is a judgement based on the collected evidence of a microbe x diet interaction on a phenotype. An example of high biological plausibility could be a single microbial strain known to have benefits regarding saturated fat metabolism that leads to lower triglycerides and cholesterol. In this respect, Neville and colleagues recently proposed a variant of Koch's postulates to provide a framework to establish causation in the case of a single strain in human microbiota research [269]. On the other hand, a vegan diet high in fibre affects the gut flora and over time the symptoms of metabolic syndrome improve-this type of interaction may not be fully explained physiologically or may only be demonstrated statistically. (iii) Probability Term Considerations: Assessing the validity of a putative microbe × diet interaction is generally complex, and as knowledge deepens, assessment of its validity will develop. The fundamental requirement of a nutrition test (genetic, metabolites, microbiota), as with any health-related test, is that the results should indicate clearly a diet-related recommendation that is beneficial in relation to a concrete aspect of health or performance. Any such advice should fulfil all requirements set out in the framework described here. Inevitably, any assessment of nutrition can only be semi-quantitative at best. We consider that the approach of this framework has the benefit of creating a formal and generic model for the assessment of such evidence and will guide more focused debates on specific points, which may be judged in different ways. Moreover, the framework and associated resources will allow stakeholders such as dietitians, nutritionists and genetic counsellors to improve their knowledge of the microbiome and, at the same time, will provide a valuable resource to assess the various tests that are offered. This framework may also encourage a greater standardisation of research protocols, supporting other initiatives, as well as the reporting of novel and replicated microbiome-environment interactions in other populations.
Conclusions
The field of gut microbiota research boasts thousands of studies, the majority of which have been published in recent years. Many of these are observational, documenting differences between healthy and diseased states allowing for correlations to be made between diversity scores, specific taxa, disease, disease risk and health status, and have been essential in our understanding of the significance of the gut microbiota to overall health and disease. Interventions have highlighted the significance of inter-individual variation in terms of intervention efficacy and this is most apparent for fibre, presumably due to the ability to measure the expected outcome i.e. modulation of gut microbiota composition, increases in SCFAs. Thus, understanding why and how an intervention has failed at the individual level is as critical as understanding why and how it has succeeded. With this in mind, it seems the field is on the brink of being propelled towards precision microbiomics, where inter-individual variation is being embraced and correlation studies are beginning to be supported by causal evidence through thorough experimental validation. This will allow for the design of strategic interventions and ultimately evidence-based dietary advice at the individual level. Criteria ensuring scientific validity for microbiota-based dietary advice are, thus, critical. Such data will not only serve nutrition counselling but will also prove valuable in the field of medicine for the clinical/therapeutic management of individuals. Furthermore, other members of the microbiota including the phageome, virome and mycobiome, are likely to contribute to human health as much as the bacterial component and should be included in analysis in order to gain comprehensive insight. Indeed, precision nutrition through the microbiome offers individuals huge potential to manage disease risk through diet and microbiome-modulating interventions and thus improve both quality and longevity of life. | 2019-07-02T13:47:52.192Z | 2019-06-27T00:00:00.000 | {
"year": 2019,
"sha1": "370df2746deb6637a0af510c6f2b8bc7b8451712",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/nutrients/nutrients-11-01468/article_deploy/nutrients-11-01468-v2.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "370df2746deb6637a0af510c6f2b8bc7b8451712",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212428345 | pes2o/s2orc | v3-fos-license | The Shelf life Prediction of Sorghum Modified Flour Crackers Using Critical Moisture Approach
The objectives of this research were to produce crackers based on sorghum modified flour and to predict its shelf life using a critical moisture approach. The research was conducted in two stages. The first stage was to determine the best crackers of nine formulations. The second stage was to predict the shelf life of the best crackers formulation by using a critical moisture content approach. The best crackers were produced from a mixture of 50% sorghum modified flour, 50% wheat flour and 25% margarine. The products had a puffing ratio of 38,04 %, and hardness of 28.86 N. The moisture sorption isotherm curve of sorghum modified flour crackers could be described using a Hasley model. Using the critical moisture approach, sorghum modified flour crackers packaged in a metalized plastic and stored at 30°C and stored at 84% relative humidity, the product shelf life would be 207 days.
Introduction
The needs of the Indonesian population for flour have increased from years, due to a large number of processed food products that use flour as a basic ingredient, including products such as noodles, bread, cakes, etc. This is supported by [1] which states that Indonesia is the second-largest country to consume noodles after China. This condition is one of the causes of increased demand for flour. Increased utilization of local potential can be used as an opportunity to overcome the demand for flour substitute flour. One of the local potentials that can be raised is sorghum. Sorghum is a cereal crop substitute for rice and corn, however, its use for food products is still very limited. Sorghum flour has a fairly good nutrient content, and this is supported by [2] that sorghum seeds contain carbohydrates 84.16% fat 0.355% and protein 3.58%. Sorghum also has anti-nutritional properties, namely tannin, and phytate. The content of anti-nutritional substances can be reduced by modification using fermentation [3], so as to improve digestibility. Fermentation can also increase the crystallinity of starch granules, thereby increasing rehydration power. Sorghum flour fermentation can be done by using lactic acid bacteria that are considered safe for human consumption, one of which is Lactobacillus plantarum.
Modified Sorghum Flour (MOSOF) can be applied to food products. [4], stated that the use of 25% eggs and partially substituted wheat flour with MOSOF, apparently can produce noodles with good physical and chemical quality. Besides noodle products, MOSOF can also be applied to crackers. Crackers are included in biscuits type, which in their manufacture can be done with/without the fermentation process and there is a lamination process so that it is flat and if broken the appearance looks layered [5]. Crackers do not use eggs in the manufacturing process, so other ingredients such as margarine are needed to improve their texture. One of the cracker's quality parameters is moisture content because it is related to shelf life. The method of determining shelf life with the critical water content approach was chosen because it was considered suitable for products that were easily damaged due to water absorption. This study was designed to determine the shelf life of the best MOSOF formula crackers with the critical water content approach.
Materials
The raw materials used in this study were early matured sorghum seeds, which were obtained from Madurese farmers and Lactobacillus plantarum FNCC 0027 bacteria from the Center for Food and Nutrition Studies (PSPG) of Universitas Gadjah Mada, Yogyakarta. The ingredients for making crackers in this study are moderate protein flour (blue triangle brand), margarine, yeast, sugar, skim milk salt and sodium bicarbonate. The chemicals used for the analysis were concentrated H2SO4, boric acid, BCG-MR indicators, HCl, aquades, K2SO4, NaOH, MRS broth.
Research Stages
This research consists of 3 stages, the preparation of materials, manufacture of crackers and physical analysis of chemical crackers. The explanation is as follows: Stage 1. Material preparation and chemical analysis At this stage, there are several treatments including: a. Making Sorghum Flour Starter (Ariyanti, 2016) Preparation of sorghum flour starter was initiated by inoculation of the rejuvenated starter Lactobacillus plantarum FNCC 0027, to 5 ml of sterile MRS broth and then incubated for 24 hours using a temperature of 37°C. Followed by transfer to 100 ml Erlenmeyer which had previously been filled with 5 g of sorghum flour and 15 ml of distilled water aseptically. Then the incubation temperature is 37°C for 24 hours.
b. Manufacture of Modified Sorghum Flour (MOSOF) and chemical analysis
The modification of sorghum flour was carried out by fermentation using the modified Ariyanti (2016) method, beginning with flour making by washing sorghum seeds using distilled water, and continued by soaking using 0.2% Na2HPO4 for 2 hours at 300C. Sorghum seeds are then washed again using distilled water, then dried at 650C for 3 hours. Dry sorghum seeds are then crushed and poured into a dish mill, then sieved using a size of 60 mesh, to obtain sorghum flour. The sorghum flour that is formed is then fermented, soaked using distilled water using a ratio of 1: 3 (w / v) and inoculated Lactobacillus plantarum FNCC 0027 by 10% for 3 days. The next stage is washing with distilled water, filtering and drying for 2 hours using a temperature of 650C. After that, the flour is refined and sieved using 80 mesh size.
Stage 2. Crackers Manufacturing and Analysis
There are 3 treatments, namely manufacturing crackers, and physical-chemical analysis a. Making Crackers Making these crackers using the Ariyanti method (2016). As for the formulas and explanations regarding making crackers, each can be seen in Table 1 and Figure 1. b. Determination of the best treatment The best treatment was chosen based on the results of the organoleptic test using the hedonic method. This test was conducted by 25 semi-trained panelists, with the test attributes including color, aroma, taste and texture/crispness.
Stage 3. Physical analysis of chemical crackers
This stage is given to the best treatment of organoleptic test results. The analysis conducted is crispness and hardness with a texture analyzer, development ratio, water content [6], ash content [6], fat content [6], protein content [6], carbohydrate by difference (Anonymous, 2005), shelf life [8].
The steps taken in calculating the shelf life (main research) are: 1. Determination of the initial moisture content (Moisture Initial / Mi (Mo)). The determination of initial water content in this study refers to [9], 2. Determination of parameters and critical water content [10]. The parameters used in this study are texture, color, aroma, and taste. From these four parameters then a ranking test was conducted by 25 panelists. The lowest rank was chosen as a critical parameter. Furthermore, the determination of critical water content begins with storage of the product at room temperature for 5 days, with daily water content testing and hedonic testing, on a scale of 1-7. Products with a value of 3 are considered as critical moisture content (panelists rejected). 3. Determination of equilibrium water content (%db) and sorption isothermic curves. The equilibrium water content is obtained by storing the product in 5 different types of saturated salts, in the humidity chamber. The five types of saturated salt can be seen in Table 2. I II III IV V VI VII VIII IX Wheat Flour 40 40 40 50 50 50 60 60 60 MOSOF 60 60 60 50 50 50 40 40 40 Margarine 15 25 35 15 25 35 15 25 35 Sugar 1 1 1 1 1 1 BaCl2 90 60 20 Source: [11] Equilibrium water content (%db) and Aw obtained, henceforth are used to create sorption isothermic curves. Aw is obtained by dividing the RH of each humidity chamber by 100.
4. Determination of sorption isothermic models. Determination of sorption isotherm model by entering the value of equilibrium water content (Me) (%db) and Aw into the sorption isothermal equation formula, and then evaluated using MRD (Mean Relative Deviation). MRD value <5 is determined as the chosen equation because it is considered capable of representing actual conditions. The MRD calculation formula is as follows: Mi = experiment water content; MPI = calculated the moisture content; N = amount of data Equations used in this study are Chen Clayton, Henderson, Hasley, Oswin and Caurie. The selected isothermic sorption model is then used to find the value of slope (b), by entering the calculation results of equilibrium (Me) and Aw. The linear form of the isothermic equation model [12], can be seen in Table 3. ts = estimated shelf lifetime (days) Me = product equilibrium moisture content (g H2O / g solids) Mi = initial product moisture content (g H2O / g solids) B = slope of the isothermic sorption curve Mc = critical water content (g H2O / g solids) k / x = permeability of packaged water vapor (g / m3.day.mmHg) A = surface area of packaging (m2) Ws = product dry weight in the package (g solids) Po = saturated vapor pressure (mmHg)
Experimental design
The experimental design used in crackers products is a single factor Randomized Complete Design (CRD), with 2 replications analysis. The data obtained were then analyzed by ANOVA and DMRT 95% using SPSS 19.
RESULTS AND DISCUSSION Preliminary Research Making Modified Sorghum Flour
In this study, sorghum flour was analyzed using fermentation (MOSOF), then the results were compared with the literature. The average chemical composition of MOSOF could be seen in Table 4.
From Table 1 it can be seen that the modification of sorghum flour using fermentation can increase the water content and starch, while the ash, protein, fat and crude fiber content (% wb) have decreased. This decrease is thought to be caused by the number of water-soluble components lost during immersion. In addition, fermentation can also increase starch content, especially the amylose content, so that it also has an impact on reducing non-starch content. The constituent components of starch are amylose and amylopectin. Meanwhile, the increase in water content in MOSOF is thought to be caused by amylose. [4] stated that amylose has a role in the binding of water and the formation of a strong gel that affects the increase in water content of MOSOF. Fermentation can cause the cutting of branching bonds (amylopectin), this is a result of the metabolic results of lactic acid bacteria that is able to produce pullulanase that plays a role in cutting amylopectin, thus causing an increase in amylose ratio. This is linear with the statement of [15], which states that pullulanase (pullulan 6glucanohydolase, EC 3.2.1.41) has a role as a debranching enzyme, which is to cut off amylopectin and other polysaccharides.
MOSOF Crackers Products
MOSOF is then used to make crackers. From the formula contained in Table 1, the results obtained that the 5th formula with the formula of wheat flour: MOSOF: margarine = 50:50:25, was chosen as the best treatment because it has the highest organoleptic testing value, with the value of taste, color, aroma, and texture. each is 6.30; 5,74; 5.36 and 6.24. The results of the chemical analysis (%wb) of this formula, which includes water, protein, fat, and carbohydrates by differences, are 4.46 each; 7.20; 23.28 and 66.86. Meanwhile, the results of physical analysis including fracture (N) and developmental strength (%) were 28.86 and 38.04 respectively. This formula is then used for the main research, which is testing shelf life with the critical water content method
Main Research
The main research was carried out by calculating the best shelf life estimation of formula crackers by the critical water content method, which was then further calculated using the Labuza equation.
Initial Water Content
The initial moisture content of the MOSOF crackers was 0.0564 g H2O / g solid. This result is not too far from the results of the initial moisture content test of corn crackers which is 0.0467 g H2O / g solid [10], so it can be categorized accordingly.
Critical Water Content Based on Organoleptic Tests
The lowest ranking value is the texture parameter, so the texture parameter is selected as a critical parameter. These texture parameters are then given hedonic testing to determine the critical water content. The results of hedonic testing conducted by 25 panelists, and carried out for 13 days showed that the longer the testing time, the level of preference has decreased. This result is also supported by the increasing levels of water crackers with increasing testing duration. From the hedonic test results, it can be determined that the critical water content value is 9.26. The selection of critical water content is based on hedonic test results with a value of 3. [10], states that a score of 3 in organoleptic testing is considered a product rejected by panelists, therefore hedonic test results with a score of 3 are considered products that have critical water content. The graph of the relationship between the hedonic test (panelist acceptance level) and the length of time the MOSOF crackers are stored can be seen in Figure 2.
Critical Water Content Based on Organoleptic Tests
The lowest ranking value is the texture parameter, so the texture parameter is selected as a critical parameter. These texture parameters are then given hedonic testing to determine the critical water content. The results of hedonic testing conducted by 25 panelists, and carried out for 13 days showed that the longer the testing time, the level of preference has decreased. This result is also supported by the increasing levels of water crackers with increasing testing duration. From the hedonic test results, it can be determined that the critical water content value is 9.26. The selection of critical water content is based on hedonic test results with a value of 3. [10] states that a score of 3 in organoleptic testing is considered a product rejected by panelists, therefore hedonic test results with a score of 3 are considered products that have critical water content. The graph of the relationship between the hedonic test (panelist acceptance level) and the length of time the MOSOF crackers are stored can be seen in Figure 2.
Equilibrium Water Content and Sorption Isothermic Curves
The equilibrium water content is obtained by storing MOSOF crackers in 4 different types of saturated salt solutions, namely MgCl2, NaCl, KCl and BaCl2. The results of the equilibrium moisture calculation are then used to make the sorption isothermic curve. The results show that the isothermal curve of sorption crackers stored in 4 kinds of saturated salt is in the form of the sigmoid. The MOSOF crackers isothermic curve can be seen in Figure 3.
The Sorption Isothermic Model
Equations used to determine the isothermic model of sorption are Chen Clayton, Hasley, Henderson, Caurie, and Oswin. The calculation results of these equations are then corrected using MRD, with a value <5 considered to be able to represent the actual conditions. From the test results obtained that the sorption isothermal equation model that is able to describe the real condition is the Hasley equation, and this is also supported from the MRD results that show results <5. The selected isothermic sorption model is then used to find the slope (b) value. The slope value obtained in this study was 0.303 (value b). The graph of the isothermic sorption model (Hasley) can be seen in Figure 4.
Estimated shelf life
The estimated shelf life is calculated using the Labuza calculation formula. The packaging permeability (k / x) used in this study is 0.0136 g / m 2 .day.mmHg, packaging area (A) 0.0396 m 2 , dry weight (Ws) 10 grams, the saturated vapor pressure at 30° C, (Po) 31.8240 mmHg.
Conclusion
The shelf life of MOSOF crackers packaged in metalized plastic packaging, stored at 84% RH, with storage having a saturated vapor pressure of 30°C has a shelf life of 207 days. Linear (Series1) | 2020-01-30T09:04:05.994Z | 2019-12-03T00:00:00.000 | {
"year": 2019,
"sha1": "6cdceacf2846e884f94c0024fff7e986918a9526",
"oa_license": null,
"oa_url": "http://journal.upgris.ac.id/index.php/ijatf/article/download/4933/2806",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "e3b7019378798d4934d3f71a4506f9f732f3a22d",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
265331643 | pes2o/s2orc | v3-fos-license | The Association between Levels of Trust in the Healthcare System and Influenza Vaccine Hesitancy among College Students in Israel
Influenza is a contagious respiratory disease caused by the influenza virus. Vaccination proves an effective approach to preventing influenza and minimizing the risk of experiencing associated complications. However, the influenza vaccine coverage rate among Israeli college students is low due to a sense of complacency, lack of knowledge, and vaccine hesitancy. The current study examined the relationship between the level of trust in the healthcare system and influenza vaccine hesitancy among college students in Israel. This cross-sectional study was conducted via an online questionnaire in April–May 2023. In total, 610 students were surveyed, of whom 57% had been vaccinated against influenza in the past; however, only 12% were vaccinated this year. Negative, significant, and moderate relationships were found between the level of trust in the healthcare system and influenza vaccine hesitancy. Students who had been vaccinated in the past had a higher level of trust in the healthcare system and a lower level of vaccination hesitancy. The linear regression model revealed that the variables of being a woman, not Jewish, vaccinated, and trusting the Ministry of Health, family doctor, and health professionals were associated with a decrease in vaccine hesitancy. These findings are in line with previous research in the field. Based on the present results, it may be advisable to develop intervention programs aimed at increasing confidence in the healthcare system and vaccinations by providing knowledge and addressing students’ concerns regarding vaccination.
Introduction
Vaccine hesitancy refers to delay in accepting or outright refusal of vaccines, even when vaccination services are readily available [1].This issue has been recognized by the World Health Organization [2] as a major health concern, and vaccine hesitancy is listed among the top ten threats to public health.Influenza infections result in approximately 3-5 million cases of severe illness and 290,000-650,000 respiratory-related deaths worldwide each year [3,4].Influenza vaccination is one of the most efficient approaches to reducing the health, societal, and economic impacts of influenza [5,6].The Israeli Ministry of Health recommends receiving an influenza vaccine for every individual above six months old, with an emphasis on the children and the elderly population.The vaccines are provided free of charge at clinics distributed in every neighborhood in Israel, ensuring very high accessibility to vaccines.Despite the seriousness of this illness and the availability of safe vaccines, influenza vaccination rates continue to be low.This presents a global challenge and adds to the burden that this disease imposes on healthcare systems around the world [7].The healthcare system plays an essential role in encouraging vaccine uptake for influenza.Influenza vaccination is crucial for the general population, including student populations in close contact in classrooms and other social settings.Studies have reported low seasonal influenza vaccination rates among students, with coverage ranging from 12% to 30% [8].If the student population is not vaccinated against influenza, the global population will not meet the World Health Organization (WHO) aim for approximately 75% coverage of influenza vaccination.While global healthcare systems face the need to address vaccine hesitancy among the general public, particular emphasis needs to be placed on university students in this regard.Influenza symptoms may persist for multiple weeks, impacting students' class attendance, academic achievements, social engagements, and productivity [9].Additionally, influenza transmission rates within university settings can be notably elevated due to the concentration of dozens of students in shared spaces [10].Moreover, an influenza outbreak on campus holds the potential to extend its spread to the broader community surrounding students, encompassing friends, family members, and high-risk population groups [11].
Previous studies have explored trust in the healthcare system and trust in healthcare providers when seeking to explain health-related behavior.These analyses have revealed a positive correlation between trust in physicians and adherence to medical recommendations, thereby leading to improved health outcomes [12].Conversely, lower levels of trust are linked to reduced utilization of preventive health screenings and lower uptake of the influenza vaccine [13][14][15].The SAGE Working Group on Vaccine Hesitancy recognized trust in the healthcare system and healthcare providers as pivotal determinants of vaccine hesitancy [1,16].Research has indicated higher levels of vaccine hesitancy regarding influenza, COVID-19, or HPV vaccines among specific demographic groups compared to the general population.These groups include healthcare workers, minority communities, and individuals with lower socioeconomic status [17][18][19].Research has underscored the significant impact that a doctor's recommendation can have on a patient's inclination to receive vaccinations [20][21][22].Conversely, individuals who opt not to get vaccinated often cite a lack of trust in these institutions as a primary reason for refusing vaccines [23].Groups with diminished trust in the public health system are approximately half as likely to receive vaccinations compared to those with elevated levels of trust [24] (Gilles et al., 2011).Moreover, healthcare professionals who themselves are hesitant about vaccinations may not adequately address their patients' vaccine concerns [25].
Trust in the public health organizations and experts who provide vaccine recommendations is a significant factor influencing individuals' decisions and beliefs regarding vaccines [23,26].The literature suggests that trust in the healthcare system is built on healthcare professionals' competence (skills and knowledge) and how the healthcare system and its actors (medical staff) work to benefit the patient through acting with integrity, maintaining individual privacy and medical confidentiality, and showing empathy and respect [27].A healthcare system based on trust contributes to creating broader social value, based on the premise that the healthcare system not only produces healthy outcomes among the public and prioritizes improving the state of health in society but, as a social institution, also establishes social norms shaping human behavior [28].In recent years, Israelis have exhibited relatively low levels of public trust in the healthcare system compared to other countries in the OECD, with only half of the Israeli public (52%) reporting that they believed that they would receive the best treatment for a severe illness [29].
Low influenza vaccination rates among students are a worldwide occurrence [30].While vaccine hesitancy has been extensively researched in the general adult population, young individuals have not been a strategic focus of vaccination encouragement and public health communication efforts from the perspective of the Israeli Health and Public Health system.In general, students are young and tend to perceive themselves as healthy with a low risk of falling ill despite the rapidity with which influenza can spread through campuses.Given these concerns regarding the reluctance of students to be vaccinated, in this study, we sought to explore their level of trust in the healthcare system and whether this trust is associated with influenza vaccine hesitancy.The findings help understand the level of trust in the healthcare system among students in Israel and its connection to influenza vaccination hesitancy, aiding in the development of intervention programs accordingly.
Research Procedure
This descriptive, cross-sectional study was undertaken with students from Ashkelon Academic College.In 2023, approximately 4200 students studied at this college in the academic track.Approval for this study was obtained from Ashkelon Academic College Ethics Committee (approval #42-2023).Data were obtained from all College departments.The study ran from 2 April 2023 to 12 May 2023, concomitant with the end of the influenza vaccination season in Israel.The survey questionnaire was programmed using Qualtrics (Qualtrics, Provo, UT, USA) and was distributed to all students via email.One reminder to fill out the questionnaire was sent via email three weeks following its initial distribution.A total of 703 students responded, with 610 completing at least 90% of the questionnaire.This represented a response rate of 87% of all respondents and 15% of the research population.On average, it took 5 ± 1.44 min to complete the questionnaire.The introductory page of the questionnaire explained the aims of this study and ensured anonymity.Completing the questionnaire indicated the students' voluntary agreement and informed consent to participate.Students could stop responding at any time, and there was no obligation to answer any specific question.
Tools
We used an online, closed, anonymous, self-completed questionnaire to collect the data for this study.A professional translator translated the questionnaire from English into Hebrew.The Hebrew-translated questionnaire was then administered to 10 students not attending Ashkelon Academic College to verify the comprehensibility of the questions.The questionnaire was revised based on their feedback.Moreover, the questionnaire underwent content validation through assessment by an expert in public health and epidemiology and an expert in infectious diseases.
The final questionnaire comprised the following components: 1. Demographic information: Gender, age, marital status, religion, department, and year of study.
2.
Vaccination history: This included questions drawn from Ryan et al. [11]: Have you ever been vaccinated against the flu?Have you been vaccinated against influenza this year?3.
Vaccine hesitancy: This included six questions from Silva et al. [31].The respondents were asked to indicate their degree of agreement with each statement in the questionnaire on a Likert scale ranging from 1 (not at all) to 5 (strongly agree) with the option to answer "don't know".The average of the answers was calculated for each participant after reversing the scales for questions 1 and 6 and dropping the "don't know" answers.A higher score was indicative of higher levels of vaccine hesitancy.Cronbach's α for reliability was 0.77.
4.
Level of trust in the healthcare system: This included three questions from Jennings et al. [32] measuring the level of trust in one's doctor, the Ministry of Health, and medical professionals.The response scale ranged from 1 (not at all) to 5 (strongly agree).The variable was constructed by calculating the mean response for each participant.The mean ranged from 1 to 5, with a higher score indicating a higher level of trust in the healthcare system.Cronbach's α for reliability was 0.82.
Data Analysis
The data were analyzed using SPSS 29.0 (IBM, Armonk, NY, USA).Relationships between the variables were examined using Pearson correlation analyses.Since the variables met the criteria of normal distribution, differences between student groups were assessed utilizing t-tests for independent samples and one-way analyses of variance (ANOVAs).To predict the extent of vaccination hesitancy, a multiple linear regression model was used.The model included variables that have been found to be associated with the dependent variable in the univariate analyses.Significance in reported p-values relied on two-sided tests and were considered significant when they fell below 0.05.
Participant Characteristics and Influenza Vaccination History
In total, 610 students participated in this study, of whom 60% were women, 53% were in relationships, and 21% had children.Most participants were Jewish (83%).Nearly half studied in the Faculty of Social Sciences (46%), 35% in Health Sciences, and 19% in Computer Science and Management.The mean age of the respondents was 27.64 ± 7.20 years.The survey population resembled the college's population in terms of gender, age, and faculties composition.More than half had been vaccinated in the past (57%; 61% when excluding participants who could not remember).Among these participants, 12% were vaccinated, 44% intended to get vaccinated, 8% were undecided, and 36% did not intend to get vaccinated.No significant differences were found between the faculties with respect to vaccination history.However, significant differences between faculties were detected regarding vaccination in the study year (χ 2 = 24.66,p < 0.001), with more students in Health Sciences having been vaccinated or intending to be vaccinated (16% and 47%, respectively) compared to Computer Science and Management students (14% and 52%, respectively) or Social Sciences students (11% and 35%, respectively).The characteristics of these participants and their influenza vaccination history are summarized in Table 1.
Level of Trust in the Healthcare System
The distribution of responses to statements that examined the level of trust in the healthcare system is presented in Table 2 after combining categories as follows: answers 1 and 2 were incorporated into the category "weakly agree", while answer 3 was classified as "moderately agree", and answers 4 and 5 were integrated into the category "strongly agree".To assess the level of trust in the healthcare system variable, the mean response for each participant was calculated, with a computed value of 3.06 (SD = 0.88).
Influenza Vaccine Hesitancy
The distribution of responses to statements that examined influenza vaccine hesitancy is presented in Table 3 after combining categories as follows: answers 1 and 2 were combined into the category "weakly agree", answer 3 remained "moderately agree", and answers 4 and 5 were integrated into the category "strongly agree".3.09 ± 1.37 1 The mean was calculated without including the "I don't know" option.* Opposite questions.The data are presented in reverse rank order.
For the purposes of constructing the influenza vaccine hesitancy variable, we calculated the mean response for each participant when excluding the "I don't know" responses and reversing the scale for questions 1 and 6, yielding a mean value of 3.11 (SD = 0.70).
Relationships between the Level of Trust in the Healthcare System and Influenza Vaccine Hesitancy
Negative, significant, and moderate relationships were found between the level of trust in the Ministry of Health, one's family doctor, health professionals, general trust in the healthcare system, and influenza vaccine hesitancy (r p = −0.45,p < 0.001; r p = −0.21,p < 0.001; r p = −0.44,p < 0.001; r p = −0.43,p < 0.001, respectively).In other words, the higher the level of trust in the healthcare system, the lower the degree of influenza vaccine hesitancy.
The Relationship between Influenza Vaccination History and the Study Variables
Significant differences were found between students who had been vaccinated in the past and students who had not been vaccinated with respect to their levels of trust in the healthcare system (t = 3.89, p < 0.001) and vaccination hesitancy (t = 6.69, p < 0.001).Specifically, students who had been vaccinated in the past exhibited a higher level of trust in the healthcare system than unvaccinated students (3.17 vs. 2.87, respectively) and a lower level of vaccination hesitancy (2.95 vs. 3.23, respectively).
Differences between Faculties
Significant differences were found between faculties in terms of level of trust in the healthcare system (F (543) = 4.46, p < 0.05).Students in the Health Sciences faculty demonstrated the highest level of trust, followed by students in Social Sciences and, finally, students in Computer Science and Management (averages of 3.22, 3.01, and 2.92, respectively).Scheffe post-hoc tests revealed that students in the Health Sciences faculty had significantly higher knowledge levels than students in the two other faculties.
Furthermore, significant differences were found between the faculties with respect to levels of influenza vaccine hesitancy (F (565) = 3.17, p < 0.05).Computer Science and Management students had the highest hesitancy level, followed by students in Social Sciences and, finally, Health Sciences (averages of 3.22, 3.10, and 3.00, respectively).Scheffe post-hoc tests revealed that students in the Faculty of Computer Science and Management exhibited significantly higher hesitancy levels than Health Science students.
Regression Model to Predict Influenza Vaccine Hesitancy
Table 4 presents the results of a linear regression model predicting influenza vaccine hesitancy.The coefficients and p-values shed light on how each variable predicts vaccine hesitancy.Being female, not Jewish, vaccinated, and trusting the Ministry of Health, the family doctor, and health professionals were all found to be associated with lower vaccine hesitancy.The best predictors of this lower vaccine hesitancy were the level of trust in the Ministry of Health, the level of trust in health professionals' recommendations, and the incidence of being vaccinated in the past.The explained variance of the model was 30% (p < 0.001).
Discussion
Our results revealed that trust in the Ministry of Health and the belief that it works for the benefit of the entire population of Israel is low (average 2.67) among the college's students, while levels of trust in the recommendations of health professionals regarding vaccines are higher but not satisfactory (average 2.98).Nevertheless, study participants were found to generally trust their family doctor's recommendations (average 3.55).Previous studies conducted in Western countries have also highlighted the disparity in trust and satisfaction levels between local health services and the national healthcare system.While trust and satisfaction rates often range from 80 to 90% at the local level, they decline to approximately 50-60% at the national level.This emphasizes the greater trust that individuals have in their local doctors compared to the national level [33][34][35].
Negative, significant, and moderate relationships were found between all the dimensions of trust in the healthcare system and influenza vaccine hesitancy.The literature indicates that public trust in healthcare professionals is crucial for the health system to function efficiently.Trust is the primary factor influencing individuals' vaccination decisions [21,36].Among other things, when making decisions, individuals must trust the information they are being provided [37].In the context of vaccinations, decision-making is associated with trust in government and public health professionals [26].In line with our findings, studies have reported a negative correlation between an individual's vaccine hesitancy and their trust in the healthcare system and healthcare workers [38][39][40].Physicians' advocacy of vaccinations is recognized as one of the most influential factors affecting public attitudes toward vaccinations [20][21][22].Conversely, hesitancy and skepticism regarding vaccinations can be linked, in part, to a diminished level of trust in physicians [23,41].
A cross-national study conducted during the COVID-19 pandemic found that when trust levels in the healthcare system and the WHO were higher, vaccine hesitancy levels were lower [42].A similar study conducted at the University of North Carolina found that as students' levels of trust in the healthcare system and other information sources rose, their hesitancy levels declined [43].A survey distributed among students from the Central University Center of Baia Mare (Romania) observed a significant correlation between high levels of trust in institutions and the intention to vaccinate [44].The link between trust in the healthcare system, attitudes towards vaccines, and vaccine hesitancy can also be explained using the health belief model [45].According to this model, in order for a change to be effected in a person's behavior or, in this case, to induce a shift from vaccine hesitancy to vaccine acceptance, the person must believe and have confidence that the action being taken can indeed benefit them, meaning that, in this case, the vaccine can help them.The more a given individual trusts the system, the more likely they are to believe that the vaccine can benefit them.
The present results indicated that students who have been previously vaccinated exhibit higher levels of trust in the healthcare system and lower levels of hesitancy compared to students who have not been vaccinated.The theory of planned behavior [46] argues that attitudes and social norms influence the behavior of a given individual.In other words, those who have already been vaccinated likely hold more positive attitudes such that they are less hesitant to vaccinate again.Additionally, it can be assumed that individuals who have been vaccinated live in an environment where social norms emphasize trust in the healthcare system and vaccines.
We also found that students from the Faculty of Health Sciences have the highest level of trust and the lowest levels of vaccine hesitancy level compared to students from other disciplines.Similar findings were also obtained in a study conducted at a university in Saudi Arabia [47] and in Japan [48].Generally, health science students learn about the healthcare system in greater depth than students from other disciplines and encounter it during their internships.This results in higher levels of trust in this system among them compared to students who come into contact with the health system only as patients.Health science students also learn more about the mechanism of vaccines, and this knowledge reduces vaccine hesitancy.
The linear regression model revealed an association between decreased vaccine hesitancy and the variables of being a woman, not Jewish, vaccinated, and trusting the Ministry of Health, family doctor, and health professionals.A study by Shon et al. [49] found that more female students were vaccinated than male students, suggesting that among students, males exhibit higher levels of vaccine hesitancy, as was found in the current study.Also consistent with the results of the current study's regression analysis are the findings of other studies indicating that previously vaccinated students exhibit less vaccine hesitancy [11,49,50].With respect to religion, the current study's findings align with those from other studies, indicating that the Arab sector in Israel has less trust in state institutions, including the healthcare system [21,51].
When delving into the association between trust and vaccine hesitancy, it is crucial to acknowledge the erosion of public trust in governments, healthcare systems, and experts on a global scale due to the influence of the COVID-19 pandemic [40].The pandemic unleashed a flood of misinformation, famously termed an "Infodemic" [52], contributing to the rise in vaccine hesitancy.Freiman [40] advocates for mitigating vaccine concerns and fostering trust among the hesitant by actively engaging and imparting knowledge [53].It is reasonable to anticipate that improving trust will streamline intricate decisions about vaccination [54].
Study Limitations
The present research effort was limited to students from a single college, potentially affecting the ability to generalize these findings to students nationwide.Furthermore, most participants had not been vaccinated against influenza in the study year, and a significant portion expressed no intention of becoming vaccinated.This suggests a potential selection bias, wherein students with greater vaccine hesitancy may have been more inclined to participate in the survey.
Conclusions
Trust in the Ministry of Health, family doctors, and public health professionals are important predictors of vaccine hesitancy.Physicians may be able to build on the trust their patients have in them to address vaccine concerns and increase vaccination rates against influenza.To persuade students to vaccinate, interventions centered on transferring professional knowledge and allaying concerns about vaccinations can be conducted on campuses in collaboration with the management of these institutions, the Ministry of Health, and doctors from nearby hospitals or clinics.It is crucial to make it clear to students that young people can also become seriously ill with influenza and that they are at high risk of infection due to overcrowding in classrooms and other social settings.Lastly, steps to build trust between various components of the healthcare system and the student population should be taken, viewing these students as ambassadors for improving vaccination rates.
Table 1 .
The characteristics and influenza vaccination history of study participants.
Table 2 .
The distribution of responses to the questionnaire focused on the level of trust in the healthcare system questionnaire.
Table 3 .
Distribution of responses to the influenza vaccine hesitancy questionnaire.
Table 4 .
Linear regression model results for predicting influenza vaccine hesitancy. | 2023-11-22T16:15:28.842Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "93b98777d904bea1da572a3e01db510f1a2a1f17",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-393X/11/11/1728/pdf?version=1700385627",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0bdaaa5d38c2536287000628908f361b024196af",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233171406 | pes2o/s2orc | v3-fos-license | On the structural formula of smectites: a review and new data on the influence of exchangeable cations
The fit of the smectite structural formula is reviewed. In addition, a group of samples, both dioctahedral and trioctahedral, are studied, demonstrating the influence of interlaminar Mg that can lead to the erroneous classification of smectite if it is not considered.
Introduction
Smectites have significant technical and industrial applications. In civil engineering, for instance, the behaviour of bentonites, which are natural rocks mainly composed of smectites, is crucial. Bentonites are used in the construction of antipollution barriers of different natures, such as highly radioactive deposits, landfills and contaminated soils. They are used in industry in diverse applications because of their absorbing and adsorbing properties (paint, paper and food industries, foundries, wastewater treatment, as additives in detergents or cat litter, or, because of their rheological properties, in drilling fluids). Thus, these applications derive from ISSN 1600-5767 their unique physicochemical properties. Because of their very small particle size and microporosity, these minerals have a large specific surface area that, together with their layer charge and cation exchange capacity (CEC), gives them the ability to react with inorganic and organic polar reagents, mainly water (by hydration and dehydration). Additionally, they have swelling and rheological properties, and high plasticity. These properties are highly dependent on the amount of layer charge and on its location (Laird, 2006;Christidis, 2008), but also on the layer dimension because it determines the edge site properties (Delavernhe et al., 2015). As an example, the thermal stability of montmorillonites depends strongly on the distribution of octahedral cations over the trans and cis positions (Drits et al., 1995;Emmerich et al., 2009). Therefore, it is essential to know the crystal chemistry of smectites to address their industrial applications.
Smectite crystals are phyllosilicates with a 2:1 structure composed by stacking several layers of one octahedral sheet between two tetrahedral ones. Smectite layers have numerous isomorphic substitutions on both the tetrahedral (mainly Al 3+ , and secondarily Fe 3+ , instead of Si 4+ ) and octahedral positions, as well as vacancies in the octahedral sheet, giving rise to a layer charge. This layer charge is compensated by cations (Mg 2+ , Ca 2+ , Na + , K + ) in the interlayer space that link adjacent layers, are hydrated to different extents and may be exchanged with cations from an external solution. Importantly, the presence of these hydrated exchangeable cations is the reason behind their CEC. The net layer charge per unit formula (p.u.f.) in the smectite group ranges between 0.2 and 0.6, or between 0.4 and 1.2 per unit cell (p.u.c.) (Newman & Brown, 1987;Guggenheim et al., 2006), although Emmerich et al. (2018) found some dioctahedral 2:1 layer silicates with a layer charge of 0.125 p.u.f. that are swellable. The weakly charged layers are held together by the electrostatic attraction of the interlayer cations. In addition to these, smectite and, in general, clay minerals absorb both anions and cations at the edges of the particles to compensate the broken bonds at the boundaries of the layers. Often, Mg 2+ is one of the exchangeable cations, especially in the case of magnesic clays.
Different types of dioctahedral smectite have been recognized depending on the composition of the octahedral and tetrahedral sheets. Schultz (1969) distinguished different types of aluminous smectites and showed that the differences in their thermal properties can be related to their chemical composition: Wyoming, Otay, Tatatila and Chambers types are between the montmorillonite and beidellite end members, in a series of dioctahedral Al-rich smectites. However, regarding dioctahedral smectites, Brigatti & Poppi (1981) affirmed that 'Chemical features do not confirm the continuity of the montmorillonite-beidellite series . . . A miscibility gap is also evident between nontronite and the other compositional ranges.' Although most natural dioctahedral smectites have compositions between them, montmorillonite and beidellite themselves are extremely rare (Christidis, 2011). Dioctahedral smectite with a high octahedral iron content, where octahedral Fe 3+ exceeds Al 3+ , is nontronite. Contrarily, if octahedral Al 3+ exceeds Fe 3+ , the smectite is named as Fe 3+ -rich beidellite or Fe 3+ -rich montmorillonite (Guggenheim et al., 2006). On the other hand, though the substitution of tetrahedral Si 4+ for Fe 3+ can be easily obtained in the laboratory, it appears to be rare or present in amounts below the detection limit of spectroscopic methods in natural samples (Finck et al., 2019). Emmerich et al. (2009) added the configuration, cis or trans, as a new structural parameter required for the classification of dioctahedral smectites.
In trioctahedral smectites, if most octahedral sites are occupied by Mg 2+ ions, the layer charge comes from the substitution of Si 4+ by Al 3+ in the tetrahedral sheet, and the mineral is saponite. Stevensite is a trioctahedral Mg-rich smectite with minor or without tetrahedral substitutions, having a deficit of cations in the octahedral sheet that leads to a low negative layer charge. Other species that have been described for the smectite group according to their crystallochemistry and structural formula are hectorite and swinefordite, which are trioctahedral smectites with Li + as the octahedral cation, volkonskoite, which is dioctahedral and Cr 3+ rich (Mackenzie, 1984;Khoury & Al-Zoubi, 2014), and rare ones such as sauconite, which is a dioctahedral Znbearing smectite (Ross, 1946;Balassone et al., 2017). Newman & Brown (1987) compiled eight structural formulae of saponite with excess octahedral charge, and affirmed that 'The net negative charge on the layers derives from Al for Si substitution in the tetrahedral sites, but this is partially compensated by substitution of trivalent cations into the octahedral sites.' Similarly, Christidis (2011) asserted that 'Saponite is different from the other smectites as part of the negative tetrahedral charge is balanced by substitution of octahedral Mg 2+ by trivalent cations, Al 3+ or Fe 3+ , i.e. the octahedral sheet often bears a positive charge. However, the tetrahedral charge due to substitution of Si 4+ by Al 3+ is much greater and outbalances any possible positive octahedral charge.' However, Wilson (2013), in the compilation of 50 structural formulae of smectites of different composition and origin from different authors, reported that none of the studied smectites showed an excess of octahedral charge, although several would have Mg 2+ as the interlayer cation.
The properties of smectites change not only with the magnitude of the layer charge but also with its distribution throughout the layer, with the exchangeable cations and with their hydration status (Gü ven, 1992;Laird, 1996Laird, , 1999Meunier, 2006). The attractive force on the interlayer cations is more site specific for tetrahedral substitutions and reduces the number of hydration layers around the cations. This is because the Al 3+ ionic substitution for Si 4+ in the tetrahedral sheet causes an under-saturated valence in the three basal oxygens surrounding the Al 3+ ions. Therefore, the negatively charged sites on the layer surface are point like. However, octahedral substitutions induce a more diffuse valence undersaturation for a large number of basal oxygens, because the charge imbalance diffuses through two more layers of ions in the structure. Therefore, the position of the cation substitution within the 2:1 structure influences the position of the negative charge on the surface of the layer (Gü ven, 1992; research papers Meunier, 2006). This homogeneity has implications for the behaviour of the hydrated cations in the interlamellar space and on the surface of the smectites. Thus, for octahedral charged smectites such as montmorillonite, the negative charge is delocalized over surface oxygens so that only weak hydrogen bonds can form with interlayer water. For tetrahedral charged smectites such as beidellite and saponite, however, the charge is more localized and stronger hydrogen bonds can form between surface oxygens and interlayer water (Farmer, 1974). These different distributions of the interlayer charge, together with the different hydration statuses, lead to physicochemical properties that depend on the smectite type.
The structural formula of smectites
The calculation of the structural formula is the only way to classify smectites according to their type and determine the amount and allocation of the charge that, together with the particle size and the cis or trans configuration, regulates most physicochemical properties. At present, however, despite the importance of having a reliable structural formula, it is nearly impossible to obtain the exact structural formula for a clay mineral, particularly for smectites. The first obstacle is to obtain a precise chemical composition avoiding the influence of impurities (e.g. SiO 2 polymorphs, feldspars, zeolites, other clay minerals, carbonates, amorphous impurities etc.), since the composition is often obtained from whole-rock analyses and impurities of these types are commonly contained within the samples. There are some published papers in which the structural formulae were fitted from the results of chemical composition obtained by inductively coupled plasma emission spectroscopy (ICP-ES) or by X-ray fluorescence, either from raw samples or from the <2 mm fraction (e.g. Nadeau & Bain, 1986;Yeniyol, 2007Yeniyol, , 2020. However, in most clayey samples, even the purest, small amounts of other minerals appear, not only in raw samples but also in the clay fraction in which there are frequently more than one phyllosilicate, and the composition of such impurities influences the calculated formula. The interference from impurities can be avoided with electron microbeam techniques (Christidis & Dunham, 1993, 1997. There are two main techniques that allow one to obtain a quantitative chemical composition of isolated particles avoiding the influence of the impurities: electron microprobe analysis (EMPA) and analytical electron microscopy in transmission electron microscopy (AEM-TEM). EMPA has been used in several studies, like those of Ramseyer & Boles (1986) and Altaner & Grim (1990), although it is not used very often because it requires a perfectly polished and even sample surface for quantitative analysis, and as clayey samples are soft they usually have an irregular surface after polishing. However, TEM analyses of individual particles can be obtained from a representative powder portion of a sample, dispersed in ethanol or acetone, and deposited on a C-coated Au or Cu grid. Dispersion of the clay, frequently by sonication, allows the individual crystals or particles to disperse and deposit parallel to the grid surface. In these analyses the particles have to be sufficiently thin to be transparent to most of the primary X-rays produced by the incident beam and, therefore, X-ray absorption and fluorescence can be neglected (Lorimer et al., 1976).
The structural fit from these techniques can be influenced by several technical limitations or by the intrinsic crystallochemical problems of smectite. Among the former, one obstacle is the impossibility of knowing the oxidation states of cations of the same elements like Mn, Ni and mainly Fe, which frequently appears as an octahedral cation as both Fe 2+ and Fe 3+ , and sometimes as tetrahedral cations (Fe 3+ ). Because the sedimentary, edaphic and weathering ambiences in which smectites normally appear are commonly associated with oxidizing conditions, Fe 3+ is ordinarily considered, but this assumption can influence the octahedral occupancy and the distribution and amount of the charge layer. Kaufhold et al. (2019) also assumed all Fe as Fe 3+ in a very detailed characterization of smectites from the Vetzia basin, and they pointed out that the tetrahedral charge values resulting from the structural formula calculation may vary depending on the Fe 2+ /Fe 3+ ratio. García-Romero et al. (2019) studied the chemical composition of a wide group of almost-pure smectites by inductively coupled plasma mass spectrometry (ICP-MS) and determined the amount of Fe 2+ by titration. They found that most samples only had Fe 3+ and Fe 2+ in a few samples with interstratified illite. On the other hand, the loss of light elements like Na and K is a significant problem; to minimize it, Nieto et al. (1996) tested the use of short counting times and compared the analyses obtained for different acquisition times ranging from 30 to 200 s, showing that shorter counting times gave improved reproducibility and normalized formula data.
If the data are obtained from EMPA or AEM at the thin edges of isolated particles, which provide data on domains having a diameter of a few nanometres, the structural formulae have to be the mean of a representative number of point analyses. This is because chemical and structural heterogeneity is typical among the individual crystals, as stated by Kö ster (1996) when he showed the structural and chemical variations in the different size fractions of the 2:1 layer minerals. Christidis & Dunham (1993) showed the wide variation in smectite composition among adjacent crystals found when different particles were analysed with electron microscopy methods, and they suggested that the average structural formulae do not provide enough indications about the variation range of the smectite population in individual samples. According to these authors, the source for this heterogeneity is related to (i) the proportion of tetrahedral charge relative to the octahedral charge, (ii) variable substitutions on octahedral positions, (iii) the relative abundances of exchangeable cations and (iv) the variation in the total layer charge.
In spite of these problems, the structural formulae of smectites obtained from microanalyses, whether from EMPA or from AEM, are probably the best approximation to the real formulae, and these methods have been used by several authors, including Ahn & Peacor (1986) . If the sample is not 100% monomineralic, the fit of the structural formula obtained by analytical electron microbeam techniques is nowadays considered the most accurate method. Probably, since they are not common techniques in clays laboratories, this is why there are relatively few articles in which the structural formulae of smectites are given and discussed, despite the tremendously rich research published in the field of smectites as Meunier (2005) pointed out.
To fit the structural formula of a phyllosilicate properly from the chemical composition it is necessary to fix one of the components. Because all tetrahedral and octahedral cations can be substituted, the number of negative charges is fixed as the sum of oxygen and hydroxyl groups (Lagaly & Weiss, 1976;Kö ster, 1977). In a second step, if the number of Si atoms is insufficient to complete the corresponding tetrahedral positions, some of the Al atoms are considered as tetrahedral. If there are still vacancies on the tetrahedral positions after using all the Al 3+ ions, some of the Fe 3+ ions are located there. The rest of the Al 3+ , Fe 3+ , Fe 2+ and Mg 2+ ions, and other elements such as Ni, Mn, Cr, Ti and Li, are allocated to octahedral positions. However, Ca 2+ , Na + and K + are considered as interlaminar cations, as is logical. In these four steps (defining the negative charge and the tetrahedral, octahedral and interlayer content) it is inevitable that there will be errors that, in the case of smectites, are not trivial.
Firstly, the assumption that all negative charge comes from oxygens and hydroxyl groups can be erroneous, because a part of the negative charge can derive from F À substituting the hydroxyl groups of the octahedral sheet. Different amounts of F À have been found in smectites, ranging from 0.02-0.45% for saponites from the Spanish Tajo Basin (Pozo et al., 2014;García-Rivas et al., 2018) to more than 5% for hectorite (Thomas et al., 1977). A small amount of F À can influence the final fit, though the main problem in having F À is that if it is not possible to fix the negative charge, then the proportion of the cations cannot be normalized with respect to any other element. Other problems are related to the presence of nonexchangeable and non-structural cations (Kaufhold et al., 2011), particle size (White & Zelazny, 1988), and the variable charges and local domains of different octahedral occupancy, as Wolters et al. (2009) pointed out.
A significant problem in fitting the structural formula of a smectite is the Mg allocation. Most smectites contain Mg 2+ to some extent, and it is well known that this can be on both octahedral and interlayer positions. For instance, Christidis (2008) reported that 'The most difficult question concerns allocation of Mg, which is assigned in octahedral sites', although there are numerous reports for exchangeable Mg. Foster (1951) affirmed that 'The presence of exchangeable magnesium in the montmorillonitic clays is more common than is generally recognized', and Christidis (2011) remembered that 'In analysis in which the smectite has not been rendered homoionic with an index cation other than Mg, allocation of Mg is usually a difficult task, because some of the Mg may be exchangeable'. Taking this into account, homoionization with a cation other than Mg 2+ was done by several authors (e.g. Singh & Gilkes, 1991;Christidis & Dunham, 1993;Cuevas et al., 2003;Christidis & Mitsis, 2006;Ferná ndez et al., 2014;Sá nchez-Roa et al., 2016;Kaufhold et al., 2019) prior to obtaining the structural formulae, to ensure that structural Mg is accounted for accurately. As mentioned before, Mg 2+ is one of the main cations on the octahedral position in trioctahedral smectites, and frequently one of the interlayer cations in smectites. However, when the structural formulae are fitted, Mg 2+ must be allocated on the octahedral position by default, unless different data are available.
When the octahedral occupancy is larger than 4 in dioctahedral smectites, some of the Mg might also be present in the interlayer, according to several authors. From this consideration, Herbert et al. 2019), among others, allocate some of the Mg atoms as interlayer cations. According to them, if the sum exceeds 4 or 6 p.u.f., respectively, for dioctahedral and trioctahedral smectites, an amount of Mg equal to the difference in the number of octahedral cations should be allocated to the interlayer. This fitting criterion has also been followed by Elert et al. (2017Elert et al. ( , 2018, even for montmorillonite treated with a mixture of dry Mg-rich lime and water up to the plastic limit. Following this rule, only an approximation to the real structural formula is obtained, because it is not possible to be sure that the number of octahedral cations is exactly 4 or 6. There has also been some research in which the structural formulae were fitted without considering the possible presence of Mg 2+ as an interlayer cation in dioctahedral smectites (e.g. Cole, 1988;Altaner & Grim, 1990;Cheshire & Gü ven, 2005;Cuadros et al., 2011;Vá zquez et al., 2014). The sum of the charges in the interlayer must balance the layer charge produced by the isomorphic substitutions on both tetrahedral and octahedral positions. In the absence of charge balance and in the presence of Mg 2+ , some of the Mg 2+ should be assigned to the interlayer, even though it is impossible to determine the amount precisely. If the amount of exchangeable Mg 2+ is high, the error could be high too. In fact, if Mg 2+ is allocated to the octahedral position by default, and a part is in fact on interlayer positions, a structural formula fitted with all Mg 2+ as octahedral cations will have a lower charge than the real sample. Consequently, not only the layer charge but also the smectite classification could be wrong.
To ensure the correct Mg 2+ positions, that is to say, its real distribution on the octahedral and interlayer positions, it is necessary to exchange the interlayer Mg 2+ and work with samples saturated with a known cation (homoionic samples). Homoionization also changes the cations adsorbed at the edges of the particles, and thus, the smaller the size of the particle, the higher the influence on the formula (Maes et al., 1979;White & Zelazny, 1988).
Taking into account the factors discussed above, in this work the structural formulae of dioctahedral and trioctahedral smectite samples are calculated in order to demonstrate the research papers 254 Emilia García-Romero et al. The structural formula of smectites importance of obtaining an accurate smectite layer charge, by assigning the interlayer cations precisely in the structural formula and, at the same time, evaluating the error when the formulae are calculated without previous homoionization of the samples. To achieve these aims, smectites have been studied in their natural form and after homoionization.
Materials
In the present work, seven smectite samples from different localities and different geological environments have been studied. They also have different chemical compositions and range from dioctahedral to trioctahedral smectites. Three samples (CAR1, CAR2 and LTBB) come from the Cabo de Gata volcanic region, located in the easternmost province of Andalusia in southern Spain. They are almost pure bentonitic deposits formed by the hydrothermal alteration of the acid volcanic rocks (vesicular dark-coloured rhyodacites, glasses and weakly coloured ignimbrites, and tuffs). CAR1 and CAR2 come from the Cortijo de Archidona deposit, and LTBB from the Los Trancos deposit; both deposits have been studied previously (Reyes et al., 1979(Reyes et al., , 1987Ferná ndez Soler, 1992;García-Romero & Huertas, 2017;García-Romero et al., 2019). The WYO sample (Wyoming, USA) comes from the Repository of the Clay Minerals Society. Three samples (ESB6, RESQ and ROS) were collected at the Tajo Basin, located in the centre of the Iberian Peninsula. They are sedimentary clays belonging to the Pink Clays Unit (Martin de Vidales et al., 1991;Pozo et al., 1992;Cuevas et al., 1993;de Santiago Buey et al., 2000;Cuevas et al., 2003;García-Rivas et al., 2018;García-Romero et al., 2019). Tajo Basin is particularly interesting because it is one of the richest basins for Mg clays in the world, with high economic value. Samples ESB6 and RESQ were collected in a quarry in proximity to the locality of Esquivias (Madrid province, Spain), and ROS at the bottom of the Magá n Hill, next to the village of Magá n (Toledo province, Spain).
Methodology
Smectite Ca saturation (homoionization with Ca 2+ ) was done to replace the natural exchangeable cations by Ca 2+ . To make the cationic change, powdered samples were immersed in a 1 M CaCl 2 solution, at room temperature, for three successive 24 h baths. Afterwards, the chloride solutions were removed, and the samples were washed with successive distilled water and centrifugation baths until chloride elimination was achieved. Chloride absence was confirmed with dilute AgNO 3 . Thus, the exchangeable cations that the smectites originally contained were replaced by Ca 2+ Previous mineralogical characterization of the samples was carried out by means of X-ray diffraction (XRD) using a Siemens D500 diffractometer with Cu K radiation and a graphite monochromator. The samples were measured as random powder specimens, and as air-dried, ethylene glycolsolvated or heated (823 K for 2 h) oriented aggregates of the clay fraction (<2 mm). Powders were scanned in the range from 2 to 65 (2) at a scan speed of 0.05 2 in 3 s, and oriented aggregates from 2 to 30 (2), to determine the mineralogical compositions.
The chemical compositions were obtained by point analysis acquired by AEM-TEM. Samples for TEM observations were prepared by depositing a drop of diluted clay suspension onto a copper grid with a holey carbon film. Individual thin grains of the minerals were scattered onto the grids with the (001) planes parallel to the grid holder. In order to ensure the reproducibility of the data, the analyses were carried out at two different laboratories: at the Centro Nacional de Microscopía Electró nica (Spain) (CNME) and at the Centro de Instrumentació n Científica, University of Granada, Spain (CIC). At the CNME two microscopes were used: a JEOL JEM 1400 microscope, with an acceleration voltage of 100 kV and 0.38 nm point-to-point resolution, and a JEOL 3000F field-emission microscope with an LaB 6 filament at an acceleration voltage of 300 kV with 0.17 nm point to-point resolution. Both microscopes incorporate an energy-dispersive X-ray spectrometer (Oxford ISIS EDX, 136 eV resolution at 5.39 keV) analyser system, and an INCA microanalysis suite (Oxford Instruments), equipped with its own software for quantitative analysis. At the CIC, a Philips CM-20 scanning tunnelling electron microscope was used, operated at 200 kV [fitted with an ultrathin window and solid-state Si(Li) detector for energy-dispersive X-ray analysis]. The atomic percentages were calculated by the Cliff-Lorimer thin-film ratio criterion because AEM data were only collected from areas that could be clearly imaged by high-resolution transmission electron microscopy (HR-TEM). This restricts analysis to the very thin edges of the samples, thus satisfying the thin-film criterion of Lorimer et al. (1976). At the CIC, the validity of the K factors employed in the calculation of concentrations from the fluorescence intensities was checked using reference mineral samples according to Cliff & Lorimer (1975). Albite, biotite, spessartine, muscovite, olivine and titanite standards were used to obtain K factors for the transformation of intensity ratios to concentration following the procedures of Cliff & Lorimer (1975). Formulae were determined from atomic concentration ratios based on the number of oxygen atoms in the ideal formula. The structural formulae of the smectites were calculated on the basis of O 20 (OH) 4 . All the Fe present was considered as Fe 3+ (owing to the limitation of the technique), but the possible existence of scarce Fe 2+ cannot be excluded.
Particle morphology and textural relationships were established using HR-TEM at the CNME. The experimental conditions were optimized to avoid structural modification using a low beam intensity (<500 counts on the CCD camera) with an exposure time of 0.8 s to acquire the image. The samples were prepared through treatments to preserve the microtexture and avoid the collapse of the smectite interlayer space. These treatments are conducted in a sequence of successive steps where a small portion of the sample is placed in agar-agar to protect it from future stains. The sample must then be hydrated and the water progressively replaced by research papers methanol; afterwards, the alcohol is replaced by Spurr resin, according to the methodology proposed by Tessier (1984) and Tessier & Pedro (1987). After polymerization of the resin, thin sections (50 nm) were cut by ultramicrotomy. This procedure minimizes dehydration during HR-TEM study and thus helps preserve the natural texture of the sample. The observations were performed using the JEOL 3000F field-emission microscope, equipped with a double-tilt sample holder (up to a maximum of AE23 ) and a CCD camera for digital recording of the images.
Results and discussion
The samples studied here are very pure and have a high proportion of smectite and small amounts of other minerals as impurities, mainly quartz, feldspars and/or calcite (Table 1, and Figs. 1 and 2). Four of the seven samples studied (CAR1, CAR2, LTBB and WYO) are rich in dioctahedral smectites, as shown by their 060 reflection at 0.149 nm (2 = 61.9 ), and the other three (ESB6, ROS and RESQ) are trioctahedral (060 reflection at 0.152 nm, 2 = 60.7 ). The 060 reflection of the ESB6 sample is wider than that of the rest (Fig. 2), indicating a mixture of di-and trioctahedral phyllosilicates. Quartz is the most frequent impurity, though it appears in very small amounts in the WYO and CAR1 samples, and as traces in ESB6, CAR2 and LTBB. ESQ6 also contains illite, kaolinite and feldspars. ROS and RESQ contain a very small amount of calcite. Three of the dioctahedral samples (CAR1, CAR2 and LTBB) have good crystallinity, as evidenced by their narrow 001 reflection and the relative intensities of the smectite reflections. At the other extreme, ROS and RESQ have high defects of staking, as can be seen by the absence of a clear 001 reflection, which rather appears as a very broad band in their XRD patterns. The smectitic nature of this sample is demonstrated by its swelling after ethylene glycol solvation (Fig. 2).
All samples were analysed both before and after their homoionization with Ca 2+ . The mean contents of the major oxides are reported in Table 2 Table 1 Main data for the studied samples, including labels, location and mineralogical compositions (from XRD data) of impurities that appear with the smectite.
The order of impurities is related to their abundance, starting with the most abundant. Minerals in parentheses are 5% in weight, and minerals indicated with an asterisk (*) are present at trace level.
Regarding the structural formulae of the dioctahedral samples, CAR1 and CAR2 come from different points of the same deposit (Cortijo de Archidona, Spain) and their chemical compositions are similar (Table 2). However, because they are natural samples they have small compositional differences that lead to a different distribution of charges. These small differences in chemical composition imply a difference in their structural formulae and classification: because its tetrahedral charge is higher than its octahedral one, CAR1 has to be classified as a low-charge beidellite, whereas CAR2 does not have tetrahedral charge and is classified as a low-charge montmorillonite. Samples (Caballero et al., 2005;García-Romero & Huertas, 2017). In both cases, the layer charge is very low, just above or below the lower smectite limit. This is especially true for CAR2, in which this parameter is À0.30 p.u.c. After fitting, the natural sample from Los Trancos (LTBB) is classified as a low-charge beidellite, with a layer charge of À0.29, below the theoretical limit for smectites (À0.4 p.u.c). In previous work, this sample was also classified as a montmorillonite (Reyes et al., 1979;García-Romero & Huertas, 2017). The formulae fitted from the mean chemical compositions of these three samples after homoionization correspond to montmorillonites. The small variations in the MgO content (<1%) in Ca smectites imply a change in the structural formulae with respect to the natural samples. The calculated layer charge increases for the non-homoionic samples in the three cases (Table 3 and Fig. 4).
The classification of samples CAR1 and LTBB changes from low-charge beidellite to montmorillonite (Fig. 5) because despite having more charge, most of it is now located on the octahedral sheet. CAR2 changes the montmorillonite subtype, according to the classifications of Schultz (1969) and Emmerich et al. (2009), because the homoionic sample has a small tetrahedral charge, while the natural sample does not. The difference in the structural formulae is not very large when comparing the numbers of tetrahedral and octahedral cations of each sample, which change by a maximum of 0.1 p.u.c. However, the structural formulae of the homoionic samples fit accurately, and better than the natural samples, because the layer charge values are in the smectite range in the three cases (À0.88, À0.73 and À0.58 for CAR1 Ca, CAR2 Ca and LTBB Ca, respectively). Cuadros et al. (1994) reported that, in a group of smectites from hydrothermal alteration of a very homogeneous volcanic tuff of acid composition from the Cabo de Gata deposits, similar to CAR1, CAR2 and LTBB, some of the samples chemically characterized as beidellite behaved as montmorillonites in the Li test (Greene-Kelly, 1953). This could be related to an erroneous structural formula deriving from the presence of Mg 2+ as the exchangeable cation, as in the case of CAR1 and LTBB. In the same way, the excess of positive octahedral charge reported by Newman & Brown (1987) and Christidis (2011) could be caused by an erroneous assignment of Mg 2+ to the octahedral layer instead of the interlayer.
The WYO sample, however, only shows some slight changes in the structural formula after homoionization due to the small amount of MgO in the natural sample, which leads to a small difference in its chemical composition after homoionization with Ca 2+ .
All trioctahedral samples were collected at the Tajo Basin (Spain) and they belong to the same unit, the Pink Clays Unit, so their chemical compositions should be similar. These clays have been studied by different authors and have been characterized as stevensite (Cuevas et al., 1993(Cuevas et al., , 2003de Santiago Buey et al., 2000;García-Rivas et al., 2018), kerolite or interstratified kerolite/stevensite (Martin de Vidales et al., 1991;Pozo et al., 1992Clauer et al., 2012), and as a fine-grained interstratification of turbostratic talc and saponite (Steudel et al., 2017). The lack of agreement on the classification of Pink Clays is easily understood by considering the chemical compositions and structural formulae obtained from our natural samples.
As seen in Table 3, samples RESQ and ROS present some problems. RESQ could be classified as a kerolite, because it does not have interlayer charge. Consequently, the classification as kerolite or interstratified kerolite/stevensite made by several authors could be correct. However, in the homoionic sample the layer charge increases and the sample can be classified as a low-charge stevensite. In the case of ROS, its layer charge is too low for a smectite (À0.19) and its octahedral and tetrahedral charges are close, so this sample should be classified as kerolite, in agreement with other authors who classified samples from the same unit as kerolite or interstratified kerolite/stevensite. However, the classification as kerolite does not agree with the properties of this clay, namely its partial swelling ability (Fig. 2) and its high specific surface area of 392 m 2 g À1 (de Santiago Buey et al., 2000). The HR-TEM photographs (Fig. 6) show the characteristic morphological features of RESQ, displaying the edges of particles composed of small subunits that form the larger particles. Both have the common sigmoidal appearance and parallel lattice planes. The subunits are thicker in their central portions, with tapered margins and curved cross sections. They have a very small particle size and numerous stacking faults HR-TEM images of sample RESQ showing the particle edge composed of small subunits that form the larger ones. They have a very small particle size and numerous stacking faults and edge dislocations. Note the characteristic smectite morphological features with their common sigmoidal appearance. The subunits are thicker in their central portions, with tapered margins and curved cross sections. and edge dislocations, as described by de Santiago Buey et al. (2000) and .
After homoionization, the formulae of trioctahedral smectites change (Table 3) and their layer charge increases (Fig. 4). RESQ and ROS change from kerolite to stevensite in the homoionic samples, although with very low charge. ESB6 Ca has 4.93 octahedral cations p.u.c. and tetrahedral charge, and it should be classified as intermediate between beidellite and saponite (Fig. 5). However, it is necessary to take into account that the 060 reflection of the ESB6 sample is wide, as has already been indicated above, which means that it is a mixture of di-and trioctahedral phyllosilicates. The sample contains discrete illite and a trioctahedral smectite (saponite) with minor proportions of dioctahedral mica layers interstratified (Fig. 7) or small clusters of illite included in the smectite particles, in agreement with Hoang-Minh et al. (2019). This explains the minor proportions of interlayer K + that remain in its structural formula after homoionization, when the point analyses on smectite particles are obtained. The structural formulae obtained after homoionization of these trioctahedral smectites are more accurate because the uncertainty in the position of Mg 2+ has been avoided. Fig. 7 shows small areas with 10 Å spacings included in the general 14 Å spacing smectite. Small 10 Å areas have been observed randomly distributed along the ESB6 sample. Those 10 Å areas commonly display different features since they have a straight and regular grid with 10 Å spacing, free of dislocations, stacking faults and edge dislocations. The presence of these 10 Å micaceous layers leads to a higher tetrahedral charge in the mean value for the particle The study of these trioctahedral smectites, mainly of the very complex samples from the Pink Clays Unit, prior to and after homoionization, shows the importance of the correct allocation of octahedral Mg 2+ . As for dioctahedral smectites, the layer charge of the exchanged trioctahedral smectites is higher. There are other cases in the literature in which the structural formulae for trioctahedral 2:1 minerals, without removing the interlayer Mg 2+ , fitted for minerals of very low charge (Yeniyol, 2007). Some of these results could be partially influenced by a lack of knowledge of the octahedral and interlayer Mg 2+ distribution.
Because fitting the structural formulae consists of distributing cations on the octahedral and tetrahedral positions, homoionization has an effect not only on the positions occupied by Mg 2+ but also on the full distribution of the cations, as has already been indicated by some authors [such as Christidis (2008), and references therein]. Because the charge of the layer must be between 0.4 and 1.2 for O 20 (OH) 4 (Guggenheim et al., 2006), an increase in the interlayer charge improves the fitted structural formula considerably, and it reveals a more accurate crystal chemistry of smectites, decreasing their chemical artefacts and, in some cases, modifying their classification (Table 3 and Fig. 4). This change is a consequence of the Mg 2+ which was wrongly assigned to the octahedral position, and our data show that Mg 2+ is more common than generally recognized in montmorillonitic clays. The changes detected are only a response to the recalculation of the relative proportions of the cations with the new formulae. Ca 2+ is the only cation expected in the interlayer position in the analysis of homoionic smectites.
Finally, Emmerich et al. (2009) concluded 'The smectite structure reveals five features that allow an unambiguous description of a sample: 1) identification as either a dioctahedral or a trioctahedral smectite; 2) layer charge; 3) charge distribution between tetrahedral and octahedral sheets; 4) cation distribution within the octahedral sheet and 5) Fe content. In addition, the nature of interlayer cations should be given as they influence certain properties of montmorillonites.' To analyse these structural parameters, the structural formula must be fitted for a sample with no interlayer Mg 2+ . Currently, the only way to do this is to perform the chemical analysis after homoionization with a different cation. This ensures that (i) the assignation of the cations to the tetrahedral and octahedral positions, and therefore the distribution of the layer's charge, is correct, and (ii) the structural parameters can be related to the physicochemical properties.
Final remarks
Homoionization with Ca 2+ produces an important difference in fitting the structural formulae, not only for trioctahedral and Mg-rich smectites, which is expected, but also for dioctahedral smectites.
For both dioctahedral and trioctahedral samples, the interlayer charge increases notably in the homoionic samples because the octahedral charge increases. Additionally, changes are observed in the tetrahedral content and charge. In the homoionic samples, the number of octahedral cations is closer to four and six in dioctahedral and trioctahedral smectites, respectively, with respect to the natural samples. Overall, a better fit of the formulae is obtained for the Ca 2+ homoionic smectites. Furthermore, the classification of the smectite type changes for several samples after homoionization, which eliminates the interlayer Mg.
Because the structural formulae obtained after homoionization of the samples are more accurate, it can be concluded that homoionization improves the structural formulae fitting for both dioctahedral and trioctahedral smectites. In this context, homoionization is strongly recommended routine to avoid mistakes, especially when the structural formulae, structural parameters and, in general, crystalchemical data must be related to the physicochemical properties of the samples for practical applications. | 2021-04-08T05:14:43.195Z | 2021-02-01T00:00:00.000 | {
"year": 2021,
"sha1": "62d89aa9fa8cb6c9e1cdbabe9f2394b61e5daa72",
"oa_license": "CCBY",
"oa_url": "https://journals.iucr.org/j/issues/2021/01/00/vb5011/vb5011.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "62d89aa9fa8cb6c9e1cdbabe9f2394b61e5daa72",
"s2fieldsofstudy": [
"Materials Science",
"Geology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
232217791 | pes2o/s2orc | v3-fos-license | Genome-wide identification, characterization, and expression analysis of the NAC transcription factor family in orchardgrass (Dactylis glomerata L.)
Background Orchardgrass (Dactylis glomerata L.) is one of the most important cool-season perennial forage grasses that is widely cultivated in the world and is highly tolerant to stressful conditions. However, little is known about the mechanisms underlying this tolerance. The NAC (NAM, ATAF1/2, and CUC2) transcription factor family is a large plant-specific gene family that actively participates in plant growth, development, and response to abiotic stress. At present, owing to the absence of genomic information, NAC genes have not been systematically studied in orchardgrass. The recent release of the complete genome sequence of orchardgrass provided a basic platform for the investigation of DgNAC proteins. Results Using the recently released orchardgrass genome database, a total of 108 NAC (DgNAC) genes were identified in the orchardgrass genome database and named based on their chromosomal location. Phylogenetic analysis showed that the DgNAC proteins were distributed in 14 subgroups based on homology with NAC proteins in Arabidopsis, including the orchardgrass-specific subgroup Dg_NAC. Gene structure analysis suggested that the number of exons varied from 1 to 15, and multitudinous DgNAC genes contained three exons. Chromosomal mapping analysis found that the DgNAC genes were unevenly distributed on seven orchardgrass chromosomes. For the gene expression analysis, the expression levels of DgNAC genes in different tissues and floral bud developmental stages were quite different. Quantitative real-time PCR analysis showed distinct expression patterns of 12 DgNAC genes in response to different abiotic stresses. The results from the RNA-seq data revealed that orchardgrass-specific NAC exhibited expression preference or specificity in diverse abiotic stress responses, and the results indicated that these genes may play an important role in the adaptation of orchardgrass under different environments. Conclusions In the current study, a comprehensive and systematic genome-wide analysis of the NAC gene family in orchardgrass was first performed. A total of 108 NAC genes were identified in orchardgrass, and the expression of NAC genes during plant growth and floral bud development and response to various abiotic stresses were investigated. These results will be helpful for further functional characteristic descriptions of DgNAC genes and the improvement of orchardgrass in breeding programs. Supplementary Information The online version contains supplementary material available at 10.1186/s12864-021-07485-6.
truncatula, loss of MtNST1 function resulted in reduced lignin content associated with reduced expression of most lignin biosynthetic genes [28].
In addition, NAC genes also play an important role in the response to abiotic stresses. In Arabidopsis thaliana, AtNAP is a negative regulator that represses AREB1 under salt stress [29]. ANAC069 recognizes the DNA sequence of C[A/G]CG[T/G], which negatively regulates tolerance to salt and osmotic stress by reducing ROS scavenging capability and proline biosynthesis [30]. In wheat (Triticum aestivum), the overexpression of TaRNAC1 enhances drought tolerance [31]. The overexpression of TaNAC69 results in enhanced dehydration tolerance and the transcript levels of stress-induced genes in wheat [32]. The overexpression of TaNAC29 increased salt tolerance by enhancing the antioxidant system to reduce H 2 O 2 accumulation and membrane damage [33]. Overexpression of OsNAC6/SNAC2 could also improve the drought, salt and cold tolerance of rice seedlings [34,35]. In rice, ONAC022 enhanced drought and salt tolerance by regulating an ABA-mediated pathway [36]. Furthermore, the NAC transcription factor JUNG-BRUNNEN 1 enhances tomato tolerance to drought stress [37]. In Arabidopsis, the heteroexpression of the Miscanthus NAC protein MINAC12 was found to result in activation of ROS scavenging enzymes to improve drought and salt tolerance [38]. A previous study illustrated that NAC genes are related to vernalization and flowering in orchardgrass by transcriptome analysis [39].
Orchardgrass (Dactylis glomerata L.) is one of the most important cool-season perennial grasses and is native to Europe and North Africa [40]. Orchardgrass is grown widely across the world due to its high biomass and nutritional quality, good shade, drought and barren tolerance, and high feed quality [41]. In addition, orchardgrass is also an important species in rocky desertification control in southwestern China. Therefore, orchardgrass has great economic and ecological value, and identification of functional genes is required to improve orchardgrass productivity. NAC genes have been widely studied in various plant species, such as Arabidopsis thaliana [13], Oryza sativa [7], Zea mays [42], Glycine max [43], Solanum tuberosum [44], Pyrus bretschneideri [45], Fagopyrum tataricum [46], and Panicum miliaceum [47]. However, the NAC gene family in orchardgrass has not been systematically studied. With the completion of Dactylis glomerata L. genome sequencing, a systematic analysis of the NAC family during orchardgrass is expected to accelerate molecular breeding in orchardgrass [48]. In this study, we identified 108 orchardgrass NAC genes and classified them into 14 subgroups, including the orchardgrass-specific subgroup Dg_NAC. Comprehensive and systematic characteristics, including gene structure, conserved motif compositions, chromosomal distribution, gene duplications and phylogenetic characteristics, and homologous relationships were further investigated. In addition, the expression of DgNAC genes during plant growth and floral bud development and the response to various abiotic stresses were analyzed. The present results will be useful for illustrating the molecular mechanisms of orchardgrass adaptability under various environmental conditions, further analysis of the functional characteristics of candidate DgNAC genes and providing valuable clues for molecular assisted breeding in orchardgrass.
Identification of the DgNAC genes in orchardgrass
Members of the NAC family were identified in the orchardgrass genome using the Hidden Markov Model (HMM) search with the HMM profile (PF02365) of the NAM domain. A total of 108 candidate gene models were matched across the whole genome and designated DgNAC001 to DgNAC108 based on their order on the chromosomes (Additional file 1). The basic information of 108 DgNAC genes was analyzed in this study, including the CDS length, protein sequence length, relative molecular weight (MW), and isoelectric point (pI) (Additional file 1). The protein sequence length of all DgNAC proteins ranged from 134 (DgNAC031) to 938 (DgNAC094) amino acids. The MW of the proteins varied from 14.70 to 181.91 kDa. The pI ranged from 4.28 (DgNAC042) to 10.25 (DgNAC012), with an average of 6.79, suggesting that most DgNAC proteins were weakly acidic.
Phylogenetic analyses and classification of DgNAC genes
To explore the evolutionary relationship of the NAC gene family in orchardgrass, an unrooted phylogenetic tree was constructed by using the amino acid sequences of DgNACs and AtNACs (Fig. 1). The results showed that 108 DgNAC genes could be divided into 14 subgroups, including an orchardgrass-specific subgroup named Dg_NAC. As shown in Fig. 1, the NAC proteins of orchardgrass were distributed in the ONAC003, ANAC063, AtNAC3, NAP, ATAF, ONAC022, TERN, TIP, ANAC011, OsNAC7, NAC1, NAC2, and NAM subgroups and orchardgrass-specific subgroup DgNAC. However, in orchardgrass, no NAC members were identified from the OsNAC8, SENU5, and ANAC001 subgroups. Among the 108 DgNAC proteins, only one DgNAC protein belonged to NAC1, the subgroups NAP, ANAC011 and NAC2 contained five DgNAC proteins each, and the orchardgrass-specific subgroup Dg_NAC included 15 DgNAC proteins, whereas the NAM subgroup contained the most DgNAC proteins (16).
Gene structure and protein motif analysis of DgNAC genes
To obtain more insights into the evolution of the NAC family in orchardgrass, the structural features of all the identified DgNAC genes were analyzed. As shown in Fig. 2b, among the DgNAC genes, 17 (approximately 15.74%) were intronless, 20 (12.96%) had one exon, nearly half (50, 46.30%) had three exons, and only 2 genes (DgNAC011 and DgNAC094, with 15 and 11 exons, respectively) had more than ten exons. Among the 15 orchardgrass-specific NAC genes, more than half (10, 66.67%) had only one exon.
To reveal the protein structural diversification of DgNAC proteins, 10 conserved motifs were identified by MEME (Fig. 2c). The amino acid sequences of each motif are listed in Additional file 2. The lengths of these conserved motifs varied from 10 to 55 amino acids. Motifs-1, − 2, − 3, and − 5 were the most conserved parts (Fig. 2c). The orchardgrass-specific NACs DgNAC068 and DgNAC078 contain one type of motif, whereas DgNAC035 contains the highest number of motifs (8 types). The motifs of DgNAC members within the same subgroups display similar patterns, indicating that the same subgroup of genes have similar functions. However, the specific biological function of most of these motifs is unclassified and remains to be further investigated.
Chromosomal locations and synteny analysis of DgNAC genes
To clarify the distribution of DgNAC genes on 7 chromosomes of orchardgrass, the MG2C program was used to map DgNAC genes on the chromosome (Fig. 3). A total of 108 DgNACs were randomly designated onto 7 chromosomes. Chromosome 2 had the highest number of DgNAC genes (20, 18.5%), and chromosome 7 harbored the lowest number (7, 6.5%). The orchardgrassspecific NAC genes are distributed on chromosomes 1, 3, 4, 5 and 6, and one-third of them are on chromosome 5. The duplication events of DgNAC genes were also examined in this study. The results showed that only 5 pairs of genes of tandem duplicates in the DgNAC gene family were identified, including DgNAC14/15, DgNAC15/16, DgNAC21/22, DgNAC31/32, and DgNAC42/43, and they were linked with the red line, (Fig. 3). The tandem duplicated genes were present on chromosomes 1, 2, and 3, and only one pair of genes was common on chromosome 3.
To further explore the evolutionary relationship of the NAC gene family in orchardgrass, five comparative syntenic maps were constructed, which consisted of a dicotyledonous plant (Arabidopsis thaliana) and five monocotyledonous plants (Oryza sativa, Brachypodium distachyon, Hordeum vulgare, Sorghum bicolor and Setaria viridis) (Fig. 4). Seventy-seven DgNAC genes showed a syntenic relationship with Brachypodium distachyon, Setaria viridis (69), Oryza sativa (69), Hordeum vulgare (68), Sorghum bicolor (64) and Arabidopsis thaliana (6) (Additional file 3). The number of homologous To further analyze the role of NAC genes in the regulation of orchardgrass flowering, we used RNA-seq data to analyze the transcript levels of all 108 DgNAC genes in different floral bud development stages. The DgNAC genes exhibited different expression profiles with floral bud development. Several DgNAC genes presented similar expression patterns from the before vernalization (BV) stage to the heading (H) stage, such as DgNAC087 and DgNAC107, with gradually increased expression levels (Fig. 6, Additional file 6). Some genes showed preferential expression during the floral bud development of orchardgrass. Among them, eleven genes in the vernalization stage, four genes (DgNAC048/049/056/ 090) in the after vernalization stage, and twenty genes in the heading stage showed high transcript abundances. These DgNAC genes may play a critical role in the different floral development stages. In addition, the special temporal expression patterns of DgNAC genes may be related to changes in environmental conditions. For example, DgNAC genes respond to low temperatures in vernalization and long days in the heading stage.
Expression patterns of DgNAC genes in response to different abiotic stress Gene expression patterns can provide crucial information for determining gene function. To investigate the role of NAC genes in orchardgrass under various abiotic stresses, 12 DgNAC members were selected for quantitative expression analysis in response to ABA, PEG, heat, and salt treatment durations (Fig. 7). Some DgNAC genes were induced/repressed by multiple treatments, such as DgNAC092 was inhibited by ABA, PEG, heat, and salt treatments, and DgNAC023 was induced by salt and ABA treatment after 3 h. In contrast, multiple DgNAC genes can be induced simultaneously by the same treatment. For instance, four DgNAC genes (DgNAC034/050/075/082) were induced by ABA treatment, and six genes (DgNAC034/050/054/061/066/084) were induced by salt treatment. Interestingly, the expression level of DgNAC034 was higher than that of other selected genes under salt and heat treatment. The expression levels of many DgNAC genes, such as DgNAC008, DgNAC023, DgNAC079 and DgNAC092, were reduced by heat treatment. Furthermore, some genes showed opposing expression patterns under different treatments; for example, DgNAC023 was induced by ABA and salt but repressed by heat treatment.
To understand the potential function of orchardgrassspecific NAC genes in resisting environmental stress, we also analyzed the transcriptional levels of DgNAC genes from the Dg_NAC subgroup. The results showed that Dg_NACs are differentially expressed under submergence and heat tolerance (Fig. 8). In the submergencetolerant cultivar 'Dianbei', DgNAC045, DgNAC094 and DgNAC085 were significantly upregulated after submergence treatment for 8 h (Fig. 8a). For drought stress treatment (18 d), the expression of DgNAC043, DgNAC010, and DgNAC095 was significantly upregulated in the roots of the tolerant variety 'Baoxing' (Fig. 8b). Under heat conditions, DgNAC062 and DgNAC077 were significantly upregulated in the heat-resistant variety 'Baoxing', while these two genes were downregulated in the heat-susceptible variety '01998' (Fig. 8c).
DgNAC gene identification and evolutionary analysis in orchardgrass
The NAC gene family is an important transcription factor in plants that plays roles in the regulation of growth, development, and stress responses [49][50][51]. Genomewide identification of NAC genes has been studied in many plant species, while little is known about this gene family in the high-quality forge D. glomerata. In this study, a total of 108 NAC genes were identified based on the D. glomerata genome database [48], which was higher than the 104 NAC genes identified in Capsicum annuum [52], 82 NAC genes identified in Cucumis melo [53], 80 NAC genes identified in Fagopyrum tataricum [46], and 96 NAC genes identified in Manihot esculenta [54] but lower than the 115 NAC genes identified in Arabidopsis thaliana [13], 151 NAC genes identified in Oryza sativa [55], 152 NAC genes identified in Zea mays [42], 152 NAC genes identified in Glycine max [43], 110 NAC genes identified in Solanum tuberosum [44], and 204 NAC genes identified in Chinese cabbage [56]. Evidence from physical and chemical parameters and gene structure and protein motifs confirms that genes originating from progenitors can gradually evolve and expand. Duplication events are important in the rapid expansion and evolution of gene families, and the size difference might be due to the more duplication events that occurred in other species after differentiation from their earliest ancestors. For example, the orchardgrass genome experienced one genome duplication event [17], while the Arabidopsis genome went through five such events [57]. A collinearity analysis demonstrated that there were 5 pairs of tandem replications without segmental duplication events (Fig. 3). Tandem replication of NAC genes has been observed in many species, such as Arabidopsis thaliana, Oryza sativa, Solanum tuberosum, and Panicum virgatum. However, the duplication event of orchardgrass increases the genome size rather than increasing many NAC gene members, which may be related to the expansion of long terminal repeat retrotransposons (LTR-RTs) [48].
The unrooted tree was constructed using NAC protein sequences from orchardgrass and Arabidopsis to explain the phylogenetic relationship. According to the sequence homology with Arabidopsis, all 108 DgNAC genes were divided into 13 subgroups [13]. The results were inconsistent with other species, such as Fagopyrum tataricum (15 subgroups) [46], Capsicum annuum (14 subgroups) [52] and Capsicum annuum (12 subgroups) [47], suggesting that NAC proteins exhibit diversity in various species. The results of conserved motif analysis of orchardgrass NAC proteins further confirm the classification of the DgNAC family. Only 8 pairs of homologous genes were found in orchardgrass and Arabidopsis by collinearity analysis, whereas more homologous gene pairs were identified in the five monocotyledons, including those of Oryza sativa, Brachypodium distachyon, and Hordeum vulgare (Fig. 4). The results indicated that the NAC genes are more homologous and conserved in monocotyledons. Generally, the expression level of a gene determines its function, while the functions of genes are related to their expression patterns [58]. Transcription factors usually play a key role in controlling the expression of tissue-specific genes [59][60][61]. In this study, the tissue-specific expression pattern showed that more than 40 DgNAC genes exhibited higher expression in roots than other detected orchardgrass tissues, such as DgNAC008/052/026/023/034/061/045. Similar results were also found in other plants, such as Fagopyrum tataricum [46], Panicum miliaceum [47] and Triticum aestivum [62]. DgNAC046, DgNAC087, and DgNAC103 exhibited higher expression than the other genes in the stems of orchardgrass, and they may play an important role in stem development. In addition, previous studies have demonstrated that the development of tissues could be promoted by overexpression of tissue-specifically expressed NAC genes, such as NAC15 from poplar, which enhanced wood formation [63], and the NAC domain transcription factor PdWND3A affected lignin biosynthesis and composition in populous [64]. In general, genes in one branch of the phylogenetic tree often have the same function and similar expression profiles. Although DgNAC021 and DgNAC022 are duplicated genes within the same subgroup, the expression pattern of DgNAC021 was different from that of DgNAC022, which might be caused by variation in gene regulation after duplication events, and the differential expression patterns of duplicated DgNAC genes indicated that they might have experienced functionalization during the evolutionary process [65,66]. The NAM subgroups may regulate cell division and leaf development [67][68][69][70][71][72], and the gene DgNAC090 is most highly expressed in the leaf followed by the root, indicating that DgNAC090 may function in leaf development and cell division through expression in both the leaf and root (Fig. 5). These results demonstrated that DgNAC genes are widely involved in the tissue development of orchardgrass. Orchardgrass is a high-quality perennial forage grass, and flowering time is a critical factor affecting forage quality and utilization. In the current study, the potential role of NAC genes in the regulation of orchardgrass flowering time was investigated by using transcriptome data. The DgNAC genes were most highly expressed in different floral bud development stages (Fig. 6). Among them, DgNAC033 had a special expression pattern during the vernalization and after vernalization stages in orchardgrass, suggesting that it has an important function in the induction of flower primordia. A previous study indicated that the CUC1 gene regulates shoot apical meristem formation in Arabidopsis [72]. After vernalization of orchardgrass, three DgNAC genes (DgNAC034/050/082) showed high expression in vegetable growth and before the heading stage, indicating that these genes may play an important role in young inflorescence development and regulation of flowering time. The overexpression of the BnNAC485 gene in Brassica napus alters flowering time [73]. In Arabidopsis, NAC050 and NAC052 are involved in transcriptional repression and flowering time control by associating with the histone demethylase JMJ14 [74]. Overall, the expression of DgNAC genes varies in different floral bud development stages, which potentially regulates orchardgrass flowering time.
Orchardgrass is a widely adapted perennial forage grown on all continents. Orchardgrass is more tolerant of shade, drought, and heat than other cool-season perennial grasses. In plants, most of the NAC genes involved in the response to abiotic stress, such as drought, salinity, and heat, have been studied. However, there are few reports of NAC genes involved in the abiotic stress response in orchardgrass. Therefore, one of the goals of this study was to obtain more insights into the expression patterns and putative functions of DgNAC genes in response to various abiotic stresses. The expression levels of 12 DgNAC genes under four stress treatments (ABA, PEG, salt, and heat) were calculated (Fig. 7). All 12 DgNAC genes were induced by these treatments; in particular, DgNAC034 and DgNAC050 were significantly upregulated after PEG treatment for 12 h, salt treatment for 6 h, and heat treatment for 3 h, and DgNAC092 was repressed by all treatments. The expression pattern of orchardgrass-specific NAC genes under submergence, drought, and heat stress showed that NAC may play an important role in orchardgrass adaptation and resistance to various environmental stresses. These results provide new insight into how the accumulation of DgNAC effectively reduces abiotic stress damage.
Conclusions
In the current study, a comprehensive and systematic genome-wide analysis of the NAC gene family in orchardgrass was first performed. A total of 108 DgNAC genes were identified and classified into 14 subgroups, including the orchardgrass-specific subgroup Dg_NAC. Comprehensive and systematic characteristics, including gene structure, conserved motif compositions, chromosomal distribution, gene duplications and phylogenetic characteristics, and homologous relationships were further investigated. In addition, the expression of DgNAC genes in various tissues, developmental stages of floral bud development, and responses to various abiotic Fig. 8 Expression profiles of orchardgrass-specific NAC genes in response to submergence stress, drought stress, and heat stress. a The heat map of DgNAC genes in submergence treatment for 0, 8, and 24 h, 'Dianbei' is submergence tolerant and 'Anba' is submergence sensitive. b the heat map of DgNAC genes in leaf and root of highly drought-resistant variety 'Baoxing' under drought treatment for 0 and 18d. c the heat map of part of orchardgrass-specific NAC genes under heat treatment for 0, 10, and 26d, 'Baoxing' is heat resistant, '01998' is heat sensitive stresses implied that DgNAC may participate in the development and stress tolerance of orchardgrass. These results are useful for revealing the adaptability of orchardgrass under various environmental stresses. This comprehensive analysis of the NAC gene in orchardgrass is a valuable resource for further studying the functional characteristics of DgNAC genes and cultivating highquality orchardgrass varieties.
Identification of NAC genes in orchardgrass
The orchardgrass genome resources were downloaded from the orchardgrass genomics database (http:// orchardgrassgenome.sicau.edu.cn/) [48]. For the identification of NAC proteins, the hidden Markov model (HMM) file of the NAM domain (PF02365) was downloaded from the Pfam database (http://pfam.sanger.ac. uk/) as the query [75]. HMMER 3.0 was used to scan the annotated protein with the NAM HMM file. The proteins acquired through the NAM HMM were aligned by ClustalW (E-value <1e-20 ) and used to rebuild an orchardgrass-specific NAM HMM file using hmmbuild in HMMER 3.0. The orchardgrass-specific NAM HMM was used to identify the NAC proteins from orchardgrass genome annotations, and the cutoff value was set to 0.01 [76]. The NAM conserved domain of all candidate genes was further confirmed by the Conserved Domains Database (CDD, http://www.ncbi.nlm.nih.gov/ cdd/) and PFAM program [75]. Finally, the physical and chemical parameters of the DgNAC proteins were predicted by ProtParam (http://web.expasy.org/protparam/), including the CDS (coding sequence) length, molecular weights (MW), and isoelectric points (PI).
Phylogenetic analysis and classification of the DgNAC gene family
The NAC protein sequences of Arabidopsis were downloaded from the Arabidopsis genome TAIR 11 (https:// www.arabidopsis.org/) [77]. All the identified DgNAC genes were assigned into different groups based on the classification of AtNACs [13]. Geneious 2020 was used to construct neighbor-joining (NJ) trees with the following parameters: Blosum62 cost matrix, Jukes-Cantor model, global alignment and bootstrap value of 1000.
Gene structure and motif analysis
The exon-intron display was constructed according to the Gene Structure Display Server (GSDS, http://gsds. gao-lab.org/) program [78] according to the available CDS and genomic information of the DgNACs. The Multiple Expectation Maximization for Motif Elicitation (MEME, http://meme-suite.org/tools/meme) program [79] was used to identify the conserved motifs in DgNAC protein sequences with parameters that maximum 10 motifs and range of motif width 6 to 200.
Chromosomal mapping and gene duplication analysis
The chromosomal positions of the DgNAC genes were acquired from the orchardgrass genome annotations. The chromosomal map of DgNAC genes was drafted by MapGene2Chrome (MG2C, http://mg2c.iask.in/mg2c_ v2.0/). DgNAC gene duplication was examined by using MCScanX software with default parameters. The Dual Synteny Plotter of TBtools (https://github.com/CJ-Chen/ TBtools) [80] was used to analyze the homology of the NAC gene between orchardgrass and the other plants (including Arabidopsis thaliana, Oryza sativa, Brachypodium distachyon, Hordeum vulgare, Sorghum bicolor and Setaria viridis).
Plant material, growth condition and stress treatments
The Dactylis glomerata cv. DONATA (Registered No. 398) seeds were provided by DLF (Beijing, China). The seeds were sown in pots (18.5 cm length, 13.5 cm width, and 5 cm deep) filled with sterilized quartz and ddH 2 O in growth chambers. The parameters of the growth chamber were set as a 22°C 14 h photoperiod and a 20°C 10 h dark period. After 1 week of germination, seedlings were irrigated with Hoagland's solution for another 60 days. Then, the seedlings were separately subjected to various stress treatments, including drought, ABA, salt, and heat. For salt, ABA, and drought treatments, the plants were subjected to 250 mmol NaCl, 100 μmol ABA, and 20% PEG 6000 (W/V) Hoagland's solution, respectively. For heat treatment, the plants were exposed to high temperature at 40°C/35°C (day/ night). Several DgNAC genes were selected to analyze the expression profile under various stresses by qRT-PCR analysis. The samples were collected at 0, 3, 6, 12 and 24 h after treatments. All materials harvested from each treatment were immediately frozen in liquid nitrogen and stored at − 80°C before RNA isolation. All experiments were conducted three times with three biological replicates for qRT-PCR analysis.
RNA isolation, cDNA synthesis, and qRT-PCR
The Hipure HP plant RNA mini kit (Magen, R4165-02) was used to extract total RNA. DNA-free RNA was used for the synthesis of cDNA by using ReverTra Ace® qPCR RT Master Mix (TOYOBO, FSQ-301) according to the manufacturer's recommendations. qRT-PCR was performed with a Bio-Rad CFX96 instrument using SYBR® green real-time PCR master Mix (TOYOBO, QPK-201). Primers used for qPCR were designed with primer 6.0, and glyceraldehyde 3-phosphate dehydrogenase (GAPD H) was selected as the reference gene (Additional file 4) [81]. The detailed methods of reaction and relative quantitative calculations have been described in a previous study [39]. The transcriptome data of various orchardgrass tissues were obtained from the orchardgrass genome database (Additional file 5) [48], and the transcriptome data of vernalization and floral bud development of orchardgrass were obtained from Feng et al. (Additional file 6) [39]. The RNA-seq data of orchardgrass-specific NAC genes (Additional file 7) under submergence, drought and heat stress were obtained from Zeng et al. [82], Ji et al. [83], and Huang et al. [84], respectively. | 2021-03-14T06:16:14.573Z | 2021-03-12T00:00:00.000 | {
"year": 2021,
"sha1": "93ece712f306c941222f05c2a1573f84feba15c1",
"oa_license": "CCBY",
"oa_url": "https://bmcgenomics.biomedcentral.com/track/pdf/10.1186/s12864-021-07485-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "794b0a545ecb2d746de12730b786cd35143f1963",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250009255 | pes2o/s2orc | v3-fos-license | Clathrin adaptor AP-1–mediated Golgi export of amyloid precursor protein is crucial for the production of neurotoxic amyloid fragments
One of the hallmarks of Alzheimer’s disease is the accumulation of toxic amyloid-β (Aβ) peptides in extracellular plaques. The direct precursor of Aβ is the carboxyl-terminal fragment β (or C99) of the amyloid precursor protein (APP). C99 is detected at elevated levels in Alzheimer’s disease brains, and its intracellular accumulation has been linked to early neurotoxicity independently of Aβ. Despite this, the causes of increased C99 levels are poorly understood. Here, we demonstrate that APP interacts with the clathrin vesicle adaptor AP-1 (adaptor protein 1), and we map the interaction sites on both proteins. Using quantitative kinetic trafficking assays, established cell lines and primary neurons, we also show that this interaction is required for the transport of APP from the trans-Golgi network to endosomes. In addition, disrupting AP-1-mediated transport of APP alters APP processing and degradation, ultimately leading to increased C99 production and Aβ release. Our results indicate that AP-1 regulates the subcellular distribution of APP, altering its processing into neurotoxic fragments.
Alzheimer's disease (AD) is characterized by the progressive loss of cognitive functions associated with learning and memory impairments. The AD brain typically presents an accumulation of intraneuronal fibrillary tangles and extracellular amyloid plaques (1,2). These amyloid plaques are mainly composed of insoluble amyloid-β (Aβ) peptides, which are fragments of the amyloid precursor protein (APP) (3). APP may undergo non-amyloidogenic processing by α-secretases, generating a carboxyl-terminal fragment (CTF) known as CTFα or C83. Non-pathogenic fragments are subsequently generated upon γsecretase cleavage. In contrast, Aβ production results from the amyloidogenic processing of APP by the β-secretase enzyme (BACE-1 [beta-site APP cleaving enzyme 1]). BACE-1 cleavage generates a longer CTF (CTFβ/C99) that is further processed by γ-secretase to produce Aβ peptides (4). The intracellular production/accumulation of C99 is, therefore, a prerequisite for Aβ biogenesis and extracellular amyloid plaque formation. A growing body of evidence has directly linked the intracellular accumulation of C99 to early neurotoxicity and cognitive dysfunction in AD onset (5,6). A recent study indicates that intracellular accumulation of C99 in human neurons is the direct cause of lysosome function impairment (7) that is frequently observed in early onset AD (8).
APP and its processing secretases are transmembrane proteins synthesized in the endoplasmic reticulum (ER) that traffic through the secretory and endocytic pathways (6). The itinerary taken by these proteins within the endomembrane systems is crucial in determining the amyloidogenic processing of APP (6). The prolonged retention of APP or β-secretase at the Golgi apparatus or in early endosomes was shown to increase the pool of C99 and Aβ species (9)(10)(11)(12). Adaptor protein (AP) complexes are components of vesicle coats and critical for protein sorting in the late secretory pathway. There are five APs described in mammals, AP-1 to AP-5, and each adaptor selects a subset of proteins in specific compartments to be delivered to target membranes (13,14). AP-1 is a clathrin adaptor complex that interacts with the cytoplasmic termini of membrane proteins and has been implicated in bidirectional protein trafficking between the trans-Golgi network (TGN) and early endosomes. AP-1 is also involved in polarized transport to the cell surface of neurons and epithelial cells (13).
The selective transport of cargo mediated by AP-1 is crucial for cell homeostasis, as mutations in AP-1 subunits result in neurological diseases (15,16).
The AP-1 complex is formed by four different subunits: two large (γ and β), one medium (μ1), and one small (σ1). The main function of this complex is to recruit transmembrane cargo and cytosolic proteins required for vesicle formation and transport (14). AP-1 is potentially the most diverse AP since three of its subunits present isoforms encoded by distinct genes. Specifically, there are two isoforms for the γ (γ1 and γ2), two for μ1 (μ1A and μ1B), and three for the σ1 (σ1A, σ1B, and σ1C) subunits (17). All these isoforms are ubiquitously present in different cell types, except for μ1B, which is specifically expressed in polarized epithelial cells (18). The μ1 subunit plays an essential role in cargo selection through a tyrosine-binding pocket that recognizes the cytosolic domains of transmembrane proteins containing a tyrosine-based YXXØ sorting signal (where X represents any amino acid and Ø a hydrophobic residue) (19). Interestingly, μ1B preferentially binds to a subset of noncanonical sorting signals in basolateral proteins, indicating that μ1A and μ1B isoforms may comprise functionally distinct AP-1 variants (19,20). AP-1 complexes containing either μ1A or μ1B are frequently termed AP-1A and AP-1B, respectively.
The cytosolic tail (CT) of APP contains sorting motifs that mediate its interaction with the μ4 subunit of AP-4 (10) and a possible interaction with the AP-1 subunit isoforms μ1A and μ1B (21). It was shown that AP-1B, a variant that is not expressed in neurons, mediates the polarized trafficking of APP in epithelial cells (21). However, the functional role of the ubiquitously expressed AP-1A variant in APP trafficking and processing remains unknown. In this study, we developed a novel approach to monitor APP trafficking and cleavage in the secretory pathway, using a dual-tagged APP construct in a retention using selective hooks (RUSH) system that is amenable to flow cytometry, imaging, and protein biochemistry assays. Using these assays, we confirmed the importance of the APP CT on its distribution in the secretory pathway and have identified the amino acid residue Y682 as part of a sorting motif regulating APP anterograde transport and processing. We show that Y682 is crucial for APP interaction with μ1A, although it is not involved in AP-4 interaction (10). Functional analysis demonstrated that AP-1A is required for the efficient Golgi exit of APP and delivery to early endosomes. Consistently, the APP Y682A mutant that does not interact with AP-1A is retained in the TGN, and it is less efficiently transported to early endosomes. Disrupting APP-AP-1 interaction slows down the production of the Aβ precursor fragment C99, suggesting that APP processing by β-secretase in the Golgi is less efficient than in endosomes. Despite this, after extended periods, Golgi retention leads to intracellular accumulation of C99 and increased amyloid-β release. Taken together, our results demonstrate that AP-1 mediates the delivery of APP from the Golgi to early endosomes, directly affecting the processing and production of amyloid-β.
Results
The cytosolic tail of APP contains sorting motifs for anterograde transport in the secretory pathway To investigate the trafficking and cleavage of APP through the secretory pathway, we used the RUSH system (22). In this system, a reporter protein fused to a streptavidin-binding peptide (SBP) can be reversibly trapped in the ER by co-expression of streptavidin fused to an ER protein or ER retention signal. Upon the addition of biotin, the reporter-SBP chimaera is released from the ER hook and follows its itinerary within the secretory pathway in a synchronized fashion. To use this system to study APP trafficking, we generated a RUSHcompetent APP reporter with dual fluorescent tags. On the lumenal N terminus is the SBP, to provide ER retention, alongside a HaloTag. On the cytosolic C terminus is an mNeonGreen tag (Fig. 1A). This construct is herein referred to as Halo-APP-mNeonGreen. The presence of the two fluorescent proteins at opposing ends of APP allows monitoring of APP cleavage in live cells. The cells used for this experiment are HeLa, which have the machinery to process APP (23) and produce the CTFs (10). Through the addition of soluble biotin, Halo-APP-mNeonGreen is exported from the ER, and its trafficking and processing can be monitored.
HeLa cells stably expressing streptavidin-KDEL and Halo-APP-mNeonGreen were stained with fluorescent Halo-JFX650 ligand, treated with biotin and imaged every hour for 4 h. Before the addition of biotin, both Halo-JFX650 and mNeonGreen were co-localized in a reticular pattern at the ER (Fig. 1B). After biotin addition, both Halo-JFX650 and mNeonGreen accumulated in the juxtanuclear region over time. As APP was synchronously trafficked through the cell, independent Halo-JFX650 and mNeonGreen puncta became visible. This indicates that APP cleavage had occurred, and the C-terminal and N-terminal APP fragments had been independently sorted into different compartments in the cell.
As APP is trafficked through the cell, it is cleaved into different fragments depending on its co-localization with various secretases in different subcellular environments. Luminal APP fragments are lost to the extracellular space (24,25), whilst the cytosolic fragments are degraded intracellularly (26). We, therefore, reasoned that the loss of both fluorescent tags could be monitored by flow cytometry using the RUSH system. HeLa cells stably expressing streptavidin-KDEL and Halo-APP-mNeonGreen were incubated in suspension at 37 C (Fig. 1C). The release of Halo-APP-mNeonGreen from the ER was induced through the addition of biotin to the cell culture media, and its trafficking was allowed to proceed for 0 to 5 h. The Halo-JF646 and mNeonGreen fluorescence of each cell was then measured by flow cytometry (Fig. 1D). As expected, with biotin addition, there is a time-dependent decrease in both mNeonGreen-tagged CTFs and Halo-tagged N-terminal fragments. The kinetics of APP trafficking observed here match those of other systems (11).
We hypothesized that the observed decrease in HaloTag fluorescence was due to secretion of the luminal APP fragment. To test this, cells expressing Halo-APP-mNeonGreen were seeded in a 6-well plate, and ER release was induced in each well for 0 to 5 h. The medium was then removed, concentrated, and the Halo-JFX650 fluorescence was visualized on a nitrocellulose membrane to assess levels of Halo-JFX650 secretion during APP trafficking and processing (Fig. 1E). Halo-JFX650 secretion was also quantified using a fluorescent plate reader assay (Fig. 1F). Using both assays, we observed a steady increase in Halo-JFX650 fluorescence in the conditioned medium over 5 h after ER export. This indicates that the Halo-tagged N terminus of APP is cleaved and secreted to the extracellular space.
Together with the secretion of the Halo-tagged N terminus, the amyloidogenic and non-amyloidogenic proteolytic pathway generates the C-terminal membrane fragments C99 and C83, respectively. γ-secretase enzyme liberates from these fragments a soluble APP intracellular C-terminal domain Figure 1. Processing of Halo-APP-mNeonGreen can be quantitatively monitored using RUSH. A, schematic of dual-tagged APP RUSH construct with major APP cleavage sites for β-, α-, and γ-secretases and caspases (Csp) highlighted. B, confocal microscopy of Halo-APP-mNeonGreen over 4 h after biotin addition to induce ER export. Blue = nuclear DAPI staining. Images taken on a Zeiss 880 microscope at 63× magnification. Main panels scale bar represent 10 μm; insets scale bar represents 2.5 μm. C, schematic representation of RUSH APP workflow. Stable Halo-APP-mNeonGreen RUSH cells are lifted and incubated at 37 C (after pre-treatment with appropriate drug). Biotin was added to cells for 0 to 5 h to induce ER export. After 5 h, fluorescent intensities were measured by flow cytometry as a readout of APP processing. D, mNeonGreen and Halo-JF646 fluorescence intensities at 0 and 5 h after ER export. E, immunodot blot of Halo-JFX650 fluorescence in culture medium over 5 h after Halo-APP-mNeonGreen is exported from the ER. F, quantification of Halo-JFX650 secretion after ER export of Halo-APP-mNeonGreen using a fluorescent plate reader assay. Halo-JFX650 fluorescence in the medium was quantified in a 96-well plate using a Clariostar plus plate reader. G and H, mNeonGreen (G) and Halo-JF646 (H) fluorescence intensities measured by FACS 0 to 5 h after ER export of Halo-APP-mNeonGreen, in cells pre-treated with 10 μg/ml BFA for 1 h or 25 μM DAPT (a γ-secretase inhibitor) for 24 h prior to inducing ER export. Fluorescence intensities are expressed as a percentage of the 0 h time point. ****p ≤ 0.0001. Pᔆ values indicate significance of the slope. P i values indicate significance of the Y-intercept. ****p ≤ 0.0001; **p ≤ 0.01. APP, amyloid precursor protein; BFA, brefeldin A; DAPI, 4 0 ,6-diamidino-2-phenylindole; ER, endoplasmic reticulum; ns, not significant; RUSH, retention using selective hooks.
(AICD) in the cytosol. C99, C83, and AICD have been shown to undergo intracellular degradation by several different pathways (26)(27)(28). To test whether these fragments are degraded by the proteasome, cells were pre-treated with MG132, a proteasome inhibitor, before the release of Halo-APP-mNeonGreen from the ER (Fig. S1A). Proteasome inhibition by MG132 prevented the loss of C-terminal mNeon-Green fluorescence, indicating that these APP fragments generated in our system are indeed degraded by the proteasome. As expected, MG132 treatment did not affect the Nterminal HaloTag-APP fluorescence. In summary, the timedependent decrease observed in mNeonGreen fluorescence is, in part, due to proteasomal degradation of the CTFs generated in amyloidogenic and non-amyloidogenic processing, whilst the decrease observed in HaloTag fluorescence during this time is due to secretion of the N-terminal fragment into the extracellular space.
To demonstrate that this assay physiologically recapitulated endogenous APP trafficking, we used the RUSH system in combination with several molecular inhibitors. To determine whether γ-secretase is involved in the processing of the cytosolic mNeonGreen fragment, cells were pre-treated with DAPT (Sigma-Aldrich), a γ-secretase inhibitor, before ER release. This caused stabilization of the CTF-mNeonGreen fluorescence, whilst the reduction in Halo-JF646 fluorescence was unaffected (Fig. 1, G and H). This is in agreement with previous evidence demonstrating that DAPT treatment precludes AICD formation and subsequent processing and degradation. Importantly, this demonstrates that the loss of mNeonGreen observed using this assay is a direct effect of APP processing.
To determine if APP processing took place after Golgi exit, we pre-treated cells with brefeldin A (BFA) for 1 h before release from the ER (Fig. 1, G and H). BFA inhibits the GTPase exchange factor resulting in Golgi tubulation and the redistribution of Golgi-localized and secretory proteins into the ER (29,30). In the presence of BFA, mNeonGreen and Halo-JF646 fluorescence were stabilized compared with control samples. To confirm this using an orthogonal approach, we incubated cells at 20 C to prevent protein export from the Golgi (31) (Fig. S1B). Blocking Golgi export by both methods prevented the decrease in mNeonGreen and Halo-JF646 fluorescence, indicating that efficient APP processing takes place after its exit from the Golgi.
In summary, the dual-tagged Halo-APP-mNeonGreen RUSH system recapitulates both the kinetics, trafficking pathways, and pharmacological dependence of endogenous APP and represents a novel assay to monitor APP trafficking and processing.
The Y682 on the APP CT prevents Golgi accumulation of APP Previous literature indicates that the short CT of APP is important for its interactions with adaptor complexes through the presence of several tyrosine-based sorting motifs such as YXXØ, a well-characterized AP-1/2 recognition site (32), and YKFEE, an AP-4 binding site (10). To elucidate functionally important APP interactors, we first identified potential adaptor-binding sites in the CT of APP ( Fig. 2A). Using an unbiased approach, we generated seven constructs with specific mutations in the CT, each designed to perturb a potentially important trafficking motif. A stable clonal cell line was generated for each mutant, and its processing was monitored after ER release using the RUSH system, as described previously. In the positive control, a mutant missing the whole CT, we see a significant decrease in the loss of both mNeonGreen and Halo-JF646, indicating a defect in trafficking and subsequent processing. The mutation at the caspase cleavage site (D664A) (33) had no detectable effect on either trafficking or processing and phenocopies the WT APP tail in this system. We see a significant decrease in the loss of both mNeonGreen and Halo-JF646 with different degrees of severity for most of the other trafficking motif mutations, aside from Y687A, which was not significant for the loss of Halo-JF646. Aside from the positive control (APPΔCT), APPY 682A had the most severe effect on APP processing (Fig. 2, B and C), indicating that this YXXØ motif is essential for the proper anterograde trafficking and processing of APP.
To further characterize the requirement of Y682 in APP trafficking, we co-expressed an APP WT-mCherry and an APP Y682A-GFP construct (Fig. 3A) in H4 neuroglioma cells and directly compared their distribution pattern in the same cell. We observed that APP/CTFs Y682A is markedly present at the juxtanuclear region and the cell surface, differently from the distribution of WT APP that was mostly found in cytosolic punctate structures (Fig. 3B). Furthermore, we characterized the distribution of APP/CTFs Y682A and APP/CTFs WT for different endogenous markers of the late secretory pathway, such as TGN46 (TGN), EEA1 and HRS (early endosomes), and CD63 (late endosomes). We observed that APP/CTFs Y682A accumulated at the juxtanuclear region and presented a significantly increased co-localization with the TGN46 marker (Figs. S2A and 3C). In addition, we found reduced amounts of APP/CTFs Y682A signal associated with the early endosomal proteins EEA1 and HRS, compared with WT APP/CTFs (Figs. S2, B and C and 3, D-E). Finally, we observed that APP/ CTFs Y682A showed a significant increase in its association with the endolysosomal protein CD63 in comparison to WT APP/CTFs (Figs. S2D and 3F). Last, we confirmed the increased co-localization of APP/CTFs Y682A with the Golgi protein GM130 at the juxtanuclear region of primary neurons (Fig. 3, G and H), in comparison to WT APP/CTFs-mCherry. Together, these results indicate that the Y682 residue on APP is part of a sorting signal for reduced Golgi/TGN residency of APP/CTFs.
APP interacts with both μ1A and μ1B subunits of AP-1 via multiple contact points The results presented previously indicate that the APP CT contains information for efficient anterograde transport and that Y682A is crucial in this process. AP-1A is known to mediate transport between the TGN and endosomes (13), and its interaction with APP has been previously shown (21).
However, the functional relevance of this interaction has not been elucidated. Initially, we sought to confirm the APP-μ1A interaction using co-immunoprecipitation assays. We co-expressed hemagglutinin (HA)-tagged μ1A together with either GFP, APP-GFP, or C99-GFP constructs (Fig. 3A) in human embryonic kidney 293 (HEK293) cells (for a high yield of exogenous protein production) and used GFP-trap beads to pulldown GFP from cell extracts. μ1A co-immunoprecipitated with APP-GFP and C99-GFP but not with GFP alone (Fig. 4A).
As the Y682 mutant displayed the strongest defect in anterograde transport (Fig. 2B), we sought to test whether this residue is involved in AP-1 interaction. We used yeast twohybrid (Y2H) interaction assays to identify the sequence requirements for the APP-μ1A interaction (Fig. S3A). In these experiments, we used the CT of TGN38, a prototypical μ1A interactor (34), as a positive control. Initially, we confirmed the interaction of the APP CT with μ1A as well as the previously reported interaction with μ4 (10) (Fig. 4B). In contrast, the APP CT did not interact with μ2 (AP-2) or μ3 (AP-3), subunits that showed a strong interaction with the TGN38 CT (Fig. 4B). We then analyzed the role of each tyrosine residue within the APP CT in its interaction with μ1A and μ1B. Y2H assays showed that APP Y653A substitution partially reduces the affinity to μ1A and μ1B (Fig. 4C). Similarly, a minor reduction in the interaction with μ1A and μ1B was observed with the APP I656A mutant and with APP bearing double Y653A/ I656A mutations (Fig. S3B), indicating that the canonical 653 YXXØ 656 motif in the APP CT is not essential for interaction with μ1A or μ1B. In contrast, we observed that the Y682A and Y687A substitutions abrogate the interaction with both μ1A and μ1B (Fig. 4C). These substitutions did not prevent the interaction with μ4 (Fig. 4C), confirming previous observations (10), although Y687 was proposed to be marginally involved with μ4 interaction (10). Interestingly, the F690 residue within the 687 YKFFE 692 motif, necessary for μ4 interaction (10), is also required for APP interaction with μ1A ( Fig. S3, C and D). Together, these results indicate diverse and conserved sequence requirements for APP interactions with μ1A or μ4. In addition, it reveals that the APP Y682A is a useful tool to study the function of AP-1 interaction while preserving the interaction with AP-4.
Our findings showed that different motifs present in the APP CT are required for the interaction with μ1A. We, therefore, sought to identify the APP-binding site in μ1A. The C-terminal domain of μ subunits is known to contain two distinct interaction sites for tyrosine-based sorting motifs located on opposite surfaces of the molecule (35-37) (Fig. 4D). The so-called Asite, also known as the tyrosine-binding pocket, typically recognizes the canonical YXXØ motifs, whereas the B-site was originally discovered in μ4 as the APP YKFFE-recognition site (10,38). To test the importance of the μ1A A-site in APP interaction, we performed Y2H assays using μ1A carrying point mutations previously reported to abolish the interaction with YXXØ signals, specifically D174A or W408S (32,39). As expected, the interaction of the TGN38 CT with μ1A was lost with mutations in both residues ( Fig. 4E) (40). Interestingly, we found that the interaction of APP with μ1A was abolished by the W408S substitution (Fig. 4E), suggesting that the A-site in μ1A is required for this interaction. Based on the structural homology to μ4, we also analyzed the importance of a putative B-site in μ1A, using μ1A mutants carrying substitutions in F238A or S266D residues (37). The μ1A F238A substitution reduced the interaction with the APP CT, whilst the S266D substitution had no effect (Fig. 4F). Similarly, the TGN38 CT interaction with μ1A requires both the A-site ( Fig. 4E) (40) and the B-site (Fig. 4F). These results demonstrate that the B-site in μ1A is required for the interaction with APP and TGN38.
To investigate the role of AP-1 in anterograde trafficking of APP, we generated CRISPR/CRISPR-associated protein 9 (Cas9) KOs for both AP1μ1A and μ1B in our stable Halo-APP-mNeonGreen HeLa RUSH system. Using the fluorescenceactivated cell sorting-based approach described previously (Fig. 1C), we observed a significant decrease in the loss of both mNeonGreen and Halo-JF646 fluorescence in μ1A KO cells (Fig. 4, G-I), a behavior that was similar to the APP Y682A mutation in parental WT cells. We validated the AP1μ1A KO in this HeLa cell line by Western blotting (Fig. 4I); however, we were unable to detect μ1B by Western blotting in the control WT HeLa cell line, supporting previous data that indicate μ1B is not expressed in HeLa cells (18). . Residue Y682 of the APP tail is essential for proper localization of APP. A, schematic representation of APP fused with either GFP or mCherry at the C terminus and its processing products. APP fragments after processing include C99, C83, and AICD-γ, all fused with GFP or mCherry. The recognition site of 6E10 antibody is indicated in full-length APP and C99. APP was expressed with a F615P substitution to reduce α-secretase cleavage. DAPT was used to inhibit γ-secretase activity and facilitate C99 fragment visualization. B, confocal microscopy images of H4 neuroglioma cells co-transfected with APP WT-mCherry and APP Y682A-GFP and immunolabeled with anti-TGN46. Green arrows indicate puncta exclusively of APP/CTFs Y682A-GFP. Red arrows highlight puncta with only APP/CTFs WT-mCherry. C-F, APP/CTFs WT-GFP and APP/CTFs Y682A-GFP co-localization with markers measured using Pearson's coefficient. Values represent mean ± SEM from at least eight different cells. G, rat cortical neurons co-transfected with APP WT-mCherry and APP Y682A-GFP and immunolabeled with anti-GM130. H, APP/CTFs WT-mCherry and APP/CTFs Y682A-GFP co-localization compared in the same transfected neuron using Pearson's coefficient. Merge channel is the combination of APP WT-mCherry, APP Y682A-GFP, and GM130 channels. Main panels scale bar represents 10 μm; insets (2×) scale bar represents 2.5 μm. **p ≤ 0.01; ***p ≤ 0.001. Statistical significance was calculated by two-tailed Student's t test in C-F and H. APP, amyloid precursor protein; TGN, trans-Golgi network.
Depletion of AP-1 increases the localization of APP in the TGN
The requirement of AP-1 in APP trafficking was confirmed in HeLa cells expressing APP-GFP (Fig. 3A). As expected, APP/CTFs-GFP was mostly present in dispersed punctate structures in control HeLa cells (Fig. 5A). In contrast, APP/ CTFs-GFP was accumulated in the perinuclear region in μ1A CRISPR/Cas9 KO HeLa cells (20), where it co-localizes with the Golgi protein GM130 (Fig. 5, B and C). This change in localization was rescued in μ1A KO cells expressing exogenous HA-tagged μ1A (Fig. 5, D and E).
To confirm the function of AP-1 in APP trafficking in different systems, we first used RNAi to knockdown the γ1 subunit of AP-1 in H4 human neuroglioma cells and analyzed the subcellular distribution of endogenous APP. We observed that whilst in control conditions, APP is mostly present in punctate structures dispersed in the cytosol (Fig. S4A), knockdown of AP-1 redistributes APP to the juxtanuclear region, where it co-localizes with the TGN marker TGN46 (Fig. S4, B-D). As an alternative approach to testing the importance of functional AP-1 in APP trafficking, we over-expressed the μ1A W408S mutant in H4 cells. This mutant is efficiently incorporated into the AP-1 complex (40) and acts as a dominant negative of μ1A-dependent AP-1 cargo transport, preventing the interaction with APP (20,40). In comparison to μ1A WT, over-expression of μ1A W408S increased the endogenous APP signal (detected with an anti-C99 antibody) in the juxtanuclear area in close association with TGN46 (Fig. 6, A, B and F). Finally, to test whether the interaction of APP with μ1A is functionally relevant in neurons, we co-expressed APP-mCherry with GFP, μ1A WT-GFP, or μ1A W408S-GFP in primary rat cortical neurons at 12 days in vitro. APP/CTFs-mCherry was mostly localized in punctate structures at the cell body in either GFP or μ1A WT-GFP-expressing neurons (Fig. 6, C, D and G). In neurons co-expressing μ1A W408S-GFP, APP/CTFs-mCherry appeared more concentrated in the juxtanuclear area and showed increased co-localization with the Golgi marker GM130 (Fig. 6, E and G). Mapping of μ1A and APP tail residues involved in AP-1-APP interaction. A, GFP-Trap immunoprecipitation from cells co-expressing either GFP, C99-GFP, or APP-GFP with μ1A-HA. B, Y2H assay of APP cytosolic tail with the medium subunit from AP-1 to AP-4. Yeast growth in medium without histidine (−HIS) indicates that there is an interaction between proteins. TGN38 tail was used as a positive control for interactions with μ1A, μ2, and μ3. Empty represents the plasmid required for yeast transformation but with no protein expressed. C, Y2H assay between APP tail with tyrosine point mutations and subunits μ1A, μ1B, and μ4. D, μ1A C terminus 3D structure (Protein Data Bank ID: 1W63). Red indicates tyrosine-binding pocket (YXXØ), blue indicates potential APP YKFFE recognition sequence (a homologous μ4-binding site), modified from Ref. (36). E, Y2H of APP tail interaction with μ1A containing twopoint mutations in the tyrosine-binding sites (D174A and W408S). F, Y2H of APP tail interaction with μ1A containing point mutations in the homologous μ4binding site (μ1A F238A and μ1A S266D). G and H, transient CRISPR KOs of μ1A, μ1B, and μ1A + μ1B in Halo-APP-mNeonGreen RUSH cells. mNeonGreen and Halo-JFX646 fluorescence levels measured every hour for 5 h after APP ER export using flow cytometry. Fluorescence intensities expressed as a percentage of the 0 h time point. ****p ≤ 0.0001; ***p ≤ 0.001; **p ≤ 0.01. Pᔆ values indicate significance of the slope. P i values indicate significance of the Y-intercept. I, Western blots to assess efficiency of the μ1A and μ1B CRISPR knockouts. Control is WT HeLa cells (where μ1B is not expressed). GAPDH was used as a loading control. AP-1, adaptor protein; APP, amyloid precursor protein; ER, endoplasmic reticulum; HA, hemagglutinin; ns, not significant; RUSH, retention using selective hooks; TGN, trans-Golgi network; Y2H, yeast two-hybrid.
Together, the results establish that AP-1 acts as a piece of essential sorting machinery in controlling the distribution of APP within the secretory pathway.
AP-1 mediates the efficient exit of APP from the Golgi and arrival at the early endosomes
To test if AP-1 mediates anterograde transport of APP from the Golgi to the early endosomes, we monitored APP trafficking in the secretory pathway using the RUSH system shown in Figure 1. In this case, APP is fused to an SBP and mCherry at the N terminus and GFP at the C terminus (therein termed mCherry-APP-GFP; Fig. S5A). We transfected WT HeLa and μ1A KO cells, stably expressing streptavidin-KDEL hook, with the RUSH mCherry-APP-GFP construct and followed both mCherry and GFP fluorescence before and after biotin addition. Before biotin treatment, mCherry/GFP fluorescence was found to co-localize in a reticular pattern in both WT and KO cells (Fig. S5, B and G, 0 min). Upon biotin addition, mCherry/GFP begin to accumulate in the juxtanuclear region in both cell populations (Fig. S5, C, D, H and I, 15-30 min). Intense juxtanuclear localization was followed by the display of mCherry/ GFP punctate structures, which were more evident in WT cells compared with μ1A KO cells (Fig. S5, 60-120 min). Interestingly, μ1A KO cells presented a higher number of discrete puncta containing mCherry alone compared with WT cells (Fig. S5, E, F, J and K, red arrows), suggesting that in μ1A KO cells, N-terminal fragments may leave the Golgi more efficiently than CTFs. In addition, this observation strongly indicates that the pool of the juxtanuclear accumulation could correspond to CTFs rather than full-length APP.
The APP-RUSH results indicate a delay in anterograde transport of APP in cells lacking AP-1. To confirm this observation, we repeated these experiments and stained cells with either Golgi (GM130) or early endosome (HRS) markers by immunofluorescence. In these experiments, we monitored the GFP-containing molecules only, since they represent either full-length APP or APP CTFs. These findings show that whilst APP reaches the Golgi with similar efficiency in both WT and μ1A KO cells, the lack of AP-1 causes a clear delay in APP/ APP-CTFs Golgi export (Fig. 7, A, B, D and E). This phenotype is accompanied by more efficient delivery of APP to early endosomes in WT compared with μ1A KO cells (Fig. 7, H, I, K and L). We also analyzed the anterograde transport of APP mutant Y682A and its co-distribution with either Golgi or early endosome markers using the RUSH system. The distribution pattern of APP Y682A in control cells was similar to that of WT APP in μ1A KO cells (Fig. 7, C, F, J and M). Together, these results show that either the absence of μ1A or the disruption of APP-μ1A interaction delays APP/CTF Golgi export and its delivery to early endosomes (Fig. 7, G and N).
AP-1-mediated transport of APP affects the production and intracellular accumulation of C99 APP processing depends on its association with at least three secretase proteins in the endomembrane system (6). If βsecretase initiates the process, APP undergoes amyloidogenic processing, which requires γ-secretase activity to release the Aβ peptides (Fig. 3A). On the other hand, if APP processing is initiated by α-secretase, APP is directed to a non-amyloidogenic pathway and, after γ-secretase processing, a short peptide called P3 is released (Fig. 3A). To investigate how AP-1mediated trafficking of APP influences processing, we monitored APP cleavage by Western blot using the RUSH system. WT HeLa and μ1A KO cells constitutively expressing streptavidin-KDEL were transfected with a RUSH mCherry-APP-F615P/D664A-GFP construct. These mutations in the APP CT inhibit α-secretase and caspase cleavage, favoring the visualization of the C99 fragment (41)(42)(43)(44). Moreover, the experiments were performed in the presence of a γ-secretase inhibitor, DAPT, to avoid further processing of C99 and allow its detection (10,28,44,45) (Fig. 8A). At 20 h posttransfection, cells were incubated with soluble biotin for 0 to 18 h, lysed, and analyzed by Western blotting with a regionspecific antibody (clone 6E10, indicated in Fig. 3A) to monitor C99 levels (Fig. 8, A and B).
The kinetics of C99 fragment generation were slower in the μ1A KO compared with WT cells, suggesting that β-secretase cleavage is less efficient in the Golgi than in endosomes. Despite this, 18 h after ER release, C99 accumulated intracellularly at higher levels in μ1A KO compared with WT cells (Fig. 8, A and B). In addition, we analyzed the processing of APP at steady state in WT HeLa and μ1A KO cells, with and without DAPT treatment, using APP-F615P/D664A-GFP. Compared with WT cells, AP-1-defective cells showed increased levels of both APP and C99 (Fig. 8, C and D) with an increased ratio of this pathogenic fragment (Fig. 8E), as observed for APP in the RUSH system at 18 h after ER release (Fig. 8, A and B). This indicates that APP transport mediated by AP-1 controls the processing of APP and intracellular levels of C99. Finally, to test whether the steady-state build-up of C99 changes amyloid-β production, we monitored the levels of amyloid-β secretion in the culture media of WT HeLa and μ1A KO cells expressing APP-GFP, using commercially available ELISA assays. The results show that AP-1-defective cells release higher levels of both Aβ-40 and Aβ-42 fragments compared with WT HeLa cells (Fig. 8, F and G).
Our findings suggest that AP-1 is essential to mediate the efficient transport of APP and C99 from the Golgi to endosomes, a crucial step in lysosomal-mediated intracellular clearance of pathogenic C99, the direct precursor of amyloid-β (Fig. 8H).
Discussion
Deciphering the molecular mechanism regulating APP trafficking is of great interest as its amyloidogenic processing is a major causative factor of AD. Evidence shows that C99 fragments accumulate in the hippocampus of AD mouse models during early pathological stages (46), and it is thought to contribute to synaptic plasticity impairments (25,47,48). It has been suggested that C99, rather than Aβ plaques, is responsible for neuronal death in AD (49). Here, we showed that the clathrin adaptor AP-1A is an essential factor controlling the distribution of APP and C99 within the secretory pathway. Our data show that AP-1A binds the APP/ C99 CT at the TGN and mediates transport from the TGN to the endolysosomal system. Disrupting this AP-1-mediated transport route causes the intracellular build-up of C99, most likely because of impaired Golgi exit and endosomal delivery, leading to the reduced clearance of C99 (Fig. 8H).
APP is known to interact with both AP-1A and the epithelial cell-specific variant AP-1B (21). However, knowledge about the functional relevance of these interactions was restricted to AP-1B, which was implicated in the delivery of APP to the basolateral domain of polarized epithelial cells (21). To understand the functional role of AP-1A-mediating APP transport through the secretory pathway, we used synchronized trafficking assays. We show that the APP tail contains information required for its efficient anterograde transport and mapped Y682 as a crucial residue mediating this process ( Figs. 1 and 2). We also show that Y682 is essential for the interaction with both AP-1A and AP-1B (Figs. 4 and S3). This tyrosine residue is part of the 682 YENPTY 687 domain, a wellcharacterized sorting signal in APP (6). The specific analysis of APP Y682 was important because this tyrosine residue is not required for APP interaction with the μ4 subunit of AP-4 (10). Therefore, this mutant enabled us to investigate APP transport dependent on AP-1 interaction, without disturbing APP interaction with AP-4. However, mutations in 687 YKFFE 691 abolished the interaction of APP with μ1A, μ1B, and μ4 (Fig. S3), indicating that AP-1 and AP-4 bind to common residues within this APP region.
We further demonstrated that APP and TGN38 interactions with AP-1 involve two structurally separate domains in μ1A, termed A-site and B-site (Fig. 4). Consistently, previous work showed the participation of two recognition sites in μ1A for interaction with the CT of the viral glycoprotein NiV-F (37). The distance between the two binding sites in μ1A is 30 Å, and an unstructured peptide composed of 14 amino acids is approximately 45 Å long (37). Considering that the APP CT has 47 amino acid residues and the TGN38 CT has 33 residues, simultaneous interaction with both binding sites in μ1A should be physically possible. Previous work reported a third domain in μ1A composed of basic amino acids that work together with the tyrosine-binding pocket to regulate major histocompatibility complex I molecules and Nef interaction (19,50), also suggesting that stable μ1A interaction with AP-1 cargo relies on multiple interaction points.
A central finding from our study is that APP interaction with AP-1A is required for its efficient Golgi export, which was demonstrated using several complementary approaches. Initially, we used a quantitative approach to show that μ1A expression is essential for APP anterograde trafficking and processing (Fig. 4). Consistently, γ1 depletion by RNAi in H4 neuroglioma cells, a condition that also compromises the expression of μ1A (51, 52), redistributes endogenous APP from the cell periphery to the juxtanuclear region and increases its co-localization with a TGN marker (Fig. S4). A similar phenotype was also observed in HeLa μ1A KO cells expressing APP-GFP (Fig. 5). Importantly, redistribution of APP to the Golgi in the KO cells was reversed by expression of exogenous μ1A, confirming the specificity of the μ1A KO phenotype (Fig. 5).
Supporting our findings that AP-1A activity is required for normal APP trafficking, the over-expression of a μ1A mutant that does not bind APP (μ1A W408S), but remains capable of forming an AP-1 complex (20, 40), increased the association of APP with TGN46 in H4 cells (Fig. 6). The expression of this dominant-negative version of μ1A has been used in several studies to identify AP-1 cargo proteins and study their transport mediated by AP-1 (20,37,40). Consistent with the results in H4 cells, expression of μ1A W408S in primary cultured neurons also increased the co-localization of APP with a Golgi marker in the cell body (Fig. 6).
Evidence for the role of AP-1 in APP trafficking was further strengthened by comparing the subcellular distribution of WT APP and the APP Y682A mutant that is unable to bind AP-1 while preserving AP-4-binding capacity. In both primary neurons and H4 cells, the amount of APP Y682A mutant retained in the Golgi/TGN is higher compared with WT APP (Fig. 3). Moreover, APP Y682A is enriched at the cell surface, displays less association with early endosomal markers, but shows an increased association with late endosomal markers (Figs. 3 and S2). While these observations are consistent with previous reports (24,(53)(54)(55)(56), none of these previous studies have correlated the function of AP-1 with changes in APP localization. A possible interpretation of these findings is that AP-1 mediates an efficient direct route of APP transport from Golgi to early endosomes, but in the absence of AP-1 interaction, APP may follow the constitutive secretory pathway, or an AP-4 mediated route (10), to the cell surface and/or late endosomes. Figure 8. AP-1 accelerates processing of APP to C99 and contributes to C99 clearance. A, Western blots of APP processing in WT and μ1A KO cells expressing streptavidin-KDEL, transfected with mCherry-APP-F615P/D664A-GFP, and incubated with 1 μM DAPT for 16 h. ER export was induced for 0 to 18 h. Membrane probed using 6E10 antibody specific to amyloid-β (as indicated for Fig. 3A). B, quantification of C99 levels in (A). C, Western blot of WT HeLa and μ1A KO cells transfected with APP-GFP F615P/D664A and incubated with or without 1 μM DAPT. Membranes probed using an anti-amyloid-β antibody. D, quantification of C99 levels in (C). E, the ratio of C99/APP in WT and μ1A KO cells from (C). APP and C99 levels were measured by band densitometry using Fiji software. Values represent mean ± SEM from at least three independent experiments. *p ≤ 0.05; **p ≤ 0.01; and ***p ≤ 0.001. Statistical significance was calculated by two-tailed paired Student's t test in B, D, and E. F and G, ELISA showing the increased levels of Aβ1-40 (F) and Aβ1-42 (G) in the conditioned media of μ1A KO HeLa compared with WT cells, both expressing APP-GFP. A commercially purchased ELISA kit was used. Values represent mean ± SEM from six independent experiments. **** p ≤ 0.0001. Statistical significance was calculated by a two-tailed paired Student's t test. H, proposed model for AP-1-mediated APP trafficking. APP is synthesized in the ER and transported to the Golgi complex and then the TGN. In normal conditions, APP is efficiently sorted from the TGN to early endosomes by AP-1, where APP processing with C99 generation mainly occurs. In the absence of functional AP-1 (right panel), transport of APP from the TGN to early endosomes is delayed, increasing APP levels at the TGN. Prolonged retention results in C99 accumulation at the TGN. The delay of APP and C99 in exiting the TGN also reduces delivery to the endolysosomal system for clearance. This results in intracellular build-up of APP and C99 but with an increased ratio of the pathogenic fragment. AP-1, adaptor protein 1; APP, amyloid precursor protein; ER, endoplasmic reticulum; TGN, trans-Golgi network.
It is well established that AP-1 mediates transport between the TGN and endosomes, with evidence for participation in both anterograde and retrograde transport routes depending on the cargo (14). When AP-1 activity is perturbed, we observe an accumulation of APP at the TGN during steady state (Figs. 3, 5 and 6). Despite this, these experiments did not allow us to distinguish between a defective anterograde transport from the TGN and accelerated retrieval from endosomes. Therefore, to define the transport route mediated by AP-1 in APP transport, we used the RUSH system (22) to monitor the anterograde transport of APP in a synchronized fashion (Fig. S5). Using this approach, we show that the efficient exit of APP from the Golgi and its delivery to early endosomes is impaired when the APP-AP1A interaction is disrupted. Similar results were shown by either mutating the interaction motif in APP or depleting cells of μ1A (Fig. 7).
Retention of APP in the Golgi/TGN was previously shown to promote Aβ secretion and increase the intracellular levels of C99, upon inhibition of γ-secretase activity (10)(11)(12). It was proposed that BACE1-mediated cleavage of APP is enhanced when APP export from the TGN is blocked (11). Consistently, we found that impaired Golgi/TGN export because of AP-1 depletion increased the intracellular levels of C99, and consequent Aβ-40 and Aβ-42 release, at steady state (Fig. 8). Despite this, we have previously shown that autophagosomes are critical organelles in the turnover of APP and its CTFs and that this process involves their transport to the endolysosomal system (41). This could provide an alternative explanation for the increase in C99 levels when Golgi export is defective, impairing delivery to the endolysosomal system. Using the synchronized RUSH transport assay, we can distinguish between increased BACE-1 processing from impaired C99 turnover, and our data support the latter hypothesis (Fig. 8).
Collectively, our results define AP-1A as key sorting machinery directly controlling the subcellular distribution of APP, with a decisive role in transporting APP/C99 from the TGN to the endolysosomal system, contributing to pathogenic C99 fragment clearance (Fig. 8F). Therefore, defects in AP-1mediated transport of APP/C99 can be regarded as a potential contributing factor to AD etiology.
Streptavidin-KDEL
A streptavidin-KDEL construct was generated for use as the ER hook in the RUSH system. The streptavidin gene (generous gift from Juan S. Bonifacino; National Institute of Child Health and Human Development, National Institutes of Health (57)) was subcloned into the Clontech pQCXIP Retroviral Vector by Gibson Assembly, using the following primers:
Halo-APPwt-mNeonGreen
To generate the Halo-APP-mNeonGreen RUSH constructs, DNA fragments for the signal peptide (SP) and SBP (generous gift from Juan S. Bonifacino (57)) were subcloned into the Clontech vector eGFP-C1, to generate an SP-SBP-GFP-LAMP1_delYQTI construct. A HaloTag was then subcloned into the N terminus of LAMP1 in place of the enhanced GFP tag by Gibson assembly. This generated an SP-SBP-HaloTag-LAMP1_delYQTI construct. To do this, the following primers were used: Backbone The SP-SBP-HaloTag-LAMP1_delYQTI backbone was digested with XhoI and BamHI to excise the LAMP1 gene fragment, generating an SP-SBP-HaloTag-APP-mNeonGreen construct.
Subsequently, the HaloTag-APP-mNeonGreen gene fragment was amplified by PCR using the following primers: Forward primer: CATTT TGGCA AAGAA TTGTG TACAA GGATC CGCTA GCGCT ACGCG C Reverse primer: GCCTG CACCT GAGGA GTGAA TTCAC GCGTG GATCC TATTT ATAAA GCTCG TCCAT GCC The PiggyBac Transposon backbone (a kind gift from Michael Ward; National Institute of Neurological Disorders and Stroke, National Institutes of Health) was digested using MluI and BsrGI, and the Halo-APP-mNeonGreen gene fragment was inserted by Gibson assembly. This construct was used to generate stable Halo-APP-mNeonGreen cell lines by co-transfection with a transposase plasmid (kindly donated by Michael Ward).
Halo-APPmut-mNeonGreen PCR primers were designed to introduce point mutations into the CT of APP. Halo-APPwt-mNeonGreen was amplified by PCR using mutation-specific primers, and a KLD reaction was carried out (NEB; catalog no.: M0554S). Sanger sequencing was used to confirm the presence of the correct mutation in each construct.
Transient CRISPR KO plasmids
To generate transient CRISPR KOs of μ1A and μ1B, the IDT Alt-R CRISPR/Cas9 guide RNA tool was used to design two custom-guide sequences per gene. These guides were cloned into a pKLV-U6gRNA(BbsI)-PGKzeocin2ABFP vector, using the BbsI restriction sites. A Cas9 viral expression backbone and the packaging vectors pMD.G and pCMVR8.91 were kindly gifted to us by Paul Lehner (University of Cambridge).
Monocistronic mCherry-APP-GFP
The mCherry-APP-GFP RUSH monocistronic vector comprises a cyctomegalovirus promoter to express APP fused to the IL-2 signal sequence, SBP, and mCherry at the N terminus, and GFP at the C terminus. The complementary DNA fragment comprising IL-2-SBP-mCherry-APP from bicistronic mCherry-APP-GFP was subcloned into the commercial plasmid pEGFP-C2 (Clontech) upstream of the GFP coding sequence. The APP sequence contains two-point mutations, F615P and D664A, inserted by site-directed mutagenesis (QuickChange-Agilent). Sanger sequencing was used to confirm the presence of the correct mutation in each construct.
Other plasmids
The plasmids containing the full-length sequences of mouse μ1A (including D174A and W408S mutants), mouse μ2, rat μ3A, human μ4, and the C-terminal domain of human μ1B (residues 137-423), subcloned in the pACT2 vector in fusion with the Gal4-activation domain (Clontech) have been described previously (19,40). Mouse μ1A and W408S mutant fused in the C terminus with a 10-amino-acid linker and three copies of HA epitope in the pCI-neo vector (Clontech) were previously described (40). These plasmids were kindly donated by Juan Bonifacino. Human APP CT (isoform 695) and TGN38 CT were subcloned in fusion with Gal4-binding domain in the vectors pGBKT7 and pGBT9, respectively (10,34). All point mutations in APP and μ1A subunits were generated by site-directed mutagenesis (QuickChange-Agilent) or by KLD (New England Biolabs) and confirmed by Sanger sequencing. APP containing the mutations F615P and D664A subcloned in pEGFP-N1 (Clontech) were previously described (10). APP F615P/D664A were then subcloned into pmCherry-N1 using XhoI and HindIII restriction sites.
Antibodies
For immunofluorescence assays, the following antibodies were used: the monoclonal mouse antibodies to APP/amyloid- Cell culture, transfections, and RNA interference Stable CRISPR/Cas9 μ1A KO and WT HeLa cells were a gift from Margaret S. Robinson (University of Cambridge) and have been previously described (20). H4 (human neuroglioma) cells were obtained from the American Type Culture Collection. PEAK cells are HEK-293T cells transfected with the large T antigen of SV-40 (59) and were used for co-immunoprecipitation experiments. To generate HeLa cell lines expressing ERtargeted streptavidin fused to KDEL (ER hook), retrovirus coding streptavidin-KDEL was produced in HEK-293T cells and used to transduce HeLa cells. Transduced cells were selected by incubation in complete media in the presence of 1 μg/ml puromycin. Cells were maintained as previously described (60). DNA transfections were performed using Lipofectamine 2000 (Thermo Fisher Scientific). The siRNAs were purchased from Dharmacon as nucleotide duplexes with 3 0 dTdT overhangs, designed to target human γ1 (5 0 -GGAAGAGCCUAUU-CAGGUA-3 0 ) (51). Transfections of siRNA were performed in two rounds with an interval of 48 h between treatments using Oligofectamine reagent (Thermo Fisher Scientific).
Mammalian cell culture for the APP mutant screen
HeLa cells were cultured in Dulbecco's modified Eagle's medium (DMEM) (Sigma; catalog no.: D6429) supplemented with 1% MycoZap and 10% fetal calf serum at 37 C with 5% CO 2 . Where puromycin resistance was conferred by stably expressing streptavidin-KDEL, the ER hook, HeLa cells were cultured in 1 μg/ml puromycin. Streptavidin-KDEL cells were further engineered to stably express Halo-APPwt-mNeonGreen using the PiggyBac system. To do this, streptavidin-KDEL HeLa cells were plated at 60% confluency in a 6-well plate and adhered overnight. Cells were then transfected with both a transposase and transposon plasmid using
Transient CRISPR KO cells
To generate a stable Cas9-expressing Halo-APP-mNeonGreen cell line, HeLa cells stably expressing streptavidin-KDEL and Halo-APP-mNeonGreen were infected with lentiviral particles carrying Cas9 plasmid DNA (generous gift from Paul Lehner, University of Cambridge), followed by selection in 150 mg/ml blasticidin (Gibco). Lenti-XTM 293T cells were used to package the pKLV-zeocin vectors encoding the guide RNAs into lentiviral particles. After 48 h, the viral supernatants were harvested, filtered using a 0.45 μM filter, and concentrated to 10× using a Lenti-X Concentrator (catalog no.: 631232; Takara Bio). To achieve transient AP1μ1A and μ1B knockouts, approximately 25 × 10 5 Cas9 Halo-APP-mNeonGreen cells were transduced with 30 μl of 10× concentrated lentiviral particles in a 48-well plate. After 48 h, cells were replated into a 6-well plate and incubated for 4 days. On day 6, RUSH assays were carried out and analyzed by flow cytometry, as described later. Depletion of AP1μ1A and μ1B was verified by immunoblotting.
RUSH assays
For flow cytometry, HeLa cells stably expressing both streptavidin-KDEL and Halo-APPwt-mNeonGreen were lifted with trypsin, pelleted at 500 relative centrifugal force (RCF) for 5 min, and washed in PBS (CaMg). The cells were resuspended in CO 2 -independent media (DMEM + 25 mM Hepes) and aliquoted into eppendorfs at 500,000 cells in 500 μl. For non-perturbed RUSH, eppendorfs were incubated at 37 C in a heat block (DB200/2; Techne). A solution of 2× D-biotin (B4501; Sigma-Aldrich) in DMEM (500 μM final concentration) was added to each eppendorf in turn to generate ER export times of 0 to 5 h. In the last 30 min, 2× Halo-JF646 HaloTag ligand was added to each sample. Protein trafficking was stopped by transferring samples to ice, followed by centrifugation (500 RCF, 4 C, 5 min). Samples were resuspended in 500 μl ice-cold PBS and filtered using Cell-Strainercapped 5-ml round-bottom tubes (catalog no.: 352235; Corning). HeLa cells stably expressing the streptavidin-KDEL hook but no APP construct were used as negative controls (with/ without biotin). A minimum of 30,000 cells per sample were analyzed using an LSRFortessa cell analyzer (BD Biosciences), gating for mNeonGreen and Halo-JF646-positive cells. Data were analyzed using FlowJo software (FlowJo Software, version 10.7.1, for Mac OS X, Becton Dickinson & Company [BD] 2006-2020).
For 20 C block experiments, the aforementioned protocol was carried out at 20 C (Fig. S1). Several molecular inhibitors were used in combination with the protocol described previously (Fig. 1). The concentration and length of inhibitor treatment varied depending on the recommended conditions or those used previously in the literature. The concentration of each inhibitor was maintained during the 5 h RUSH. The following pre-treatments were used: BFA: 10 μg/ml BFA for 1 h prior to inducing ER export; DAPT: 25 μM DAPT for 24 h prior to inducing ER export and MG132: 10 μM MG132 for 2 h prior to inducing ER export.
To prepare samples for immunofluorescence microscopy, cells were transfected with a bicistronic mCherry-APP-GFP RUSH plasmid and incubated for 4 h, followed by an additional 16 h incubation, as described in the figure legend. Cells were then treated with 40 μM final concentration of soluble biotin (Sigma-Aldrich) and incubated for different time points, after which cells were fixed for immunofluorescence microscopy.
To prepare samples for Western blots, a HeLa cell line expressing ER-targeted streptavidin-KDEL were transfected with a monocistronic mCherry-APP-GFP RUSH plasmid and incubated for 4 h, followed by an additional 16 h in the presence of 1 μM DAPT, as described in the figure legends. Cells were then treated with 40 μM final concentration of soluble biotin (Sigma-Aldrich) and incubated in the presence of 1 μM DAPT for different time points, after which cells were lysed for Western blot.
Sequence alignment and protein model images
The sequence of CTs of APP homologous proteins from different species and the human APP gene family were obtained from the National Center for Biotechnology Information and aligned using the free software Clustal Omega alignment (61). Surface representation of μ1A C-terminal domain was collected from Protein Data Bank (1W63) (62). Protein model manipulation was performed using PyMOL (www.pymol.org).
Y2H assays
Y2H assays were performed using the yeast AH109 reporter strain as described previously (63). Yeast containing both plasmids with GAL4 activation domain and binding domain Golgi export of amyloid precursor protein mediated by AP-1 were selected in a restriction medium without leucine and tryptophan. Protein interactions were visualized by yeast growth in a selective medium lacking histidine, leucine, and tryptophan.
GFP-trap assay
GFP-Trap agarose (ChromoTek) assay was performed following the manufacturer's recommendations. Briefly, 80% confluent PEAK cells placed in a 100 mm plate were transfected with plasmids to express GFP, APP-GFP, or C99-GFP with μ1A-HA. After overnight expression, cells were lysed with lysis buffer containing 10 mM Tris-HCl (pH 7.5), 150 mM NaCl, 0.5 mM EDTA, and 0.5% Nonidet P40. The cleared protein content was incubated with beads for 1 h at 4 C under rotation. Next, beads with the binding proteins were washed three times with wash buffer (10 mM Tris-HCl [pH 7.5], 150 mM NaCl, and 0.5 mM EDTA), followed by the analysis of the attached protein by immunoblotting.
Immunodot blot assay
HeLa cells stably expressing both streptavidin-KDEL and Halo-APPwt-mNeonGreen were seeded at 40% confluency in a 6-well plate 24 h before inducing ER export. On the following day, cells were stained with JF646 HaloTag ligand for 1 h and washed twice in PBS (CaMg) to remove unbound JF646 dye. The media were aspirated from one well and replaced with complete DMEM + 500 μM biotin every hour for 5 h, to induce ER export for 0 to 5 h. Subsequently, the media were removed from each well and centrifuged at 500 RCF for 5 min, followed by a second centrifugation at 17,000 RCF for 5 min in a 4 C centrifuge. 700 μl of each sample was spun in a 3 kDa amicon column, and a media exchange was carried out so that the protein was suspended in approximately 90 μl PBS (CaMg). Each sample was then made up to 100 μl total with PBS (CaMg). 1 μl (1%) of each sample was added in a line to nitrocellulose membrane. The membrane was left to dry for 30 min before it was soaked in PBS (CaMg) and sealed between two acetate sheets. The membrane was then imaged using a BioRad ChemiDoc Imaging System.
Fluorescence plate reader assay
HeLa cells stably expressing both the streptavidin-KDEL hook and Halo-APPwt-mNeonGreen were trypsinised and washed twice in PBS (CaMg). Cells were incubated in imaging media (DMEM without phenol red + 2% bovine serum albumin + 4 mM L-glutamine + 25 mM Hepes) + JFX650 HaloTag ligand for 30 min at 37 C. After this time, cells were washed twice with imaging media (5 min/wash at 37 C) to remove the unbound HaloTag. Subsequently, cells were resuspended in imaging media, aliquoted into eppendorfs, and incubated at 37 C. Every hour, 2× biotin (500 μM final concentration) in imaging media was added to the corresponding tube to induce ER export. Eppendorfs were vortexed every 30 min. After 5 h, protein trafficking was halted by transferring samples to ice, followed by centrifugation (500 RCF; 5 min; 4 C). The supernatant was removed from cells and kept on ice. The cell pellet was washed in PBS (CaMg) before lysis in 500 μl radioimmunoprecipitation buffer + 1 μl benzonase for 30 min on ice. After lysis, the media samples and lysates were centrifuged at 17,000 RCF for 15 min at 4 C. The supernatants were collected from samples and 200 μl of each was loaded into a 96-well imaging plate (PerkinElmer). Two bovine serum albumin standards, diluted in either cell medium or lysis buffer, were loaded alongside the samples to measure Halo-JFX650 saturation levels. Samples were quantified in a CLARIOstar Plus plate reader (BMG Labtech).
SDS-PAGE and immunoblot analysis
SDS-PAGE and immunoblotting analysis were performed as previously described (58). In brief, denatured cell extracts from 200,000 cells were resolved on either gradient Tris-Glycine acrylamide gels (Bio-Rad) or home-made 10 to 16% Tris-Glycine acrylamide gels. Proteins were transferred to a polyvinylidene difluoride membrane using a wet transfer protocol and blocked with 5% skimmed milk for 1 h. Membranes were incubated with the appropriate primary antibody overnight at 4 C, washed with PBS with Tween-20, and then incubated with a secondary antibody for 1 h at RT. The membranes were visualized using either Clarity (catalog no.: 1705061; Bio-Rad) or Western-Bright Sirius (catalog no.: K-12043-D10; Advansta) enhanced chemiluminescence solutions and a ChemiDoc Imaging System equipped with the ImageLab software (Bio-Rad Laboratories).
Primary neuron culture and transfection
Primary neuronal cultures were isolated from embryonic day 18 Wistar rats using the same protocol as previously described (64). Briefly, cortical areas were dissected, subjected to digestion with trypsin (Sigma-Aldrich), and mechanically dissociated with DNAse (Sigma-Aldrich). Cells were plated onto 22 mm glass coverslips coated with poly-L-lysine hydrobromide (0.5 mg/ml; Sigma-Aldrich). The plating medium consisted of Neurobasal medium (Gibco) supplemented with 1% penicillin-streptomycin (Invitrogen), 0.5% L-glutamine (Formedium), 2% B-27 (Gibco), and 5% horse serum (Invitrogen). The following day, the plating medium was changed for horse serum-free feeding medium. Cultures were maintained at 37 C and 5% CO 2 in a humidified incubator. Transfection was performed using Lipofectamine 2000 (Life Technologies) and cortical neurons at 12 days in vitro (65). Immunocytochemistry assays were performed 16 h after transfection, as previously described (65). The animals were treated in accordance with the Animal Welfare and Ethics Review Body Committee, and experiments were performed under the appropriated project licenses with local and national ethical approval. All experiments involving animals have been designed in consideration of the guidance provided by NC3Rs (nc3rs.org.uk).
Fluorescence microscopy for Figure 1 For fixed cell imaging, cells were seeded on Matrigel-coated glass coverslips. The Matrigel coating solution consisted of 500 μl concentrated Matrigel (Corning; catalog no.: 354277) in 50 ml DMEM/F12 medium (Gibco). 1 ml/well Matrigel solution was added to coverslips in 6-well plates and incubated for 24 h before cells were seeded. HeLa cells stably expressing Halo-APP-mNeonGreen and streptavidin-KDEL were seeded at 100,000 cells/well on Matrigel-coated glass coverslips 24 h before biotin addition. Subsequently, media were removed from each well in turn and replaced with 2 ml complete DMEM + 500 μM biotin/well to generate ER export times of 0 to 5 h. After 5 h, cells were fixed in a cytoskeleton fixation buffer (300 nM NaCl, 10 mM EGTA, 10 mM glucose, 10 mM MgCl 2 , 20 mM Pipes [pH 6.8], 4% paraformaldehyde [PFA], and 2% sucrose) for 10 min at RT, before washing twice in PBS (supplemented with 100 mg/l calcium chloride and 100 mg/l magnesium chloride ions [Gibco]) and once in water. Coverslips were mounted in ProLong Gold Antifade Mountant containing 4 0 ,6-diamidino-2-phenylindole to visualize the nucleus (Invitrogen).
Immunofluorescence microscopy
Immunofluorescence was performed as previously described with no modifications (51). Briefly, cells were fixed for 15 min at RT with 4% (w/v) PFA in PBS. PFA-fixed cells were permeabilized with 0.01% (w/v) saponin in blocking solution (0.2% [w/v] pork skin gelatin in PBS) for 30 min at 37 C, and double labeled with specific primary and secondary Alexaconjugated antibodies. Cells were imaged on a Zeiss confocal laser-scanning microscope 780 (Zeiss). Post-acquisition image processing was performed with Fiji/ImageJ software (https:// imagej.net/software/fiji/) (66). Co-localization analysis was performed using the Fiji/ImageJ plugin co-localization threshold to determine the Pearson's correlation coefficient among two channels, using Z-stacks with 0.3 μm intervals of at least seven cells from three independent experiments. Pearson's correlation coefficient represents all nonzero-zero pixels that overlay the images of the channels and returns a value between −1 and +1, where +1 is a total positive correlation, 0 is no correlation, and −1 is a total negative correlation.
Statistical analysis
For fluorescence-activated cell sorting analyses, the mean of a minimum of 30,000 cells per repeat was standardized as a percentage of the 0 h measurement and, since the measured fluorescent levels are expected to decay exponentially over time, transformed by a natural logarithm (time points 1-5 h). We found a good fit for a linear decay after logarithm transformation. Statistical analysis with three repeats of each condition was performed using R with the lme4 package, and statistical significance was considered when p < 0.05. Separately for each baseline/condition (i.e., either CRISPR KO or mutation) combination, a random intercept model from linear mixed-effect regression was employed to model the change in fluorescence intensity. The fixed effects included in the model were (a) the overall mean intercept, (b) time of measurement, (c) whether the mutation was present, and (d) an interaction effect between the time of measurement and the presence of the mutation modifying the slope of the linear decay. In addition, we fitted a model with factors (a, b, and c) only and a third model with factors (a and b). The three models were compared by likelihood-ratio tests. When comparing the first and second models, a small p value indicates a significantly different slope for the condition curve from the baseline curve ("slope significance"), whereas when comparing the first and third models, a significant test result indicates that including any condition-level effects (intercept and slope) yields a better fit ("intercept significance").
For all other figures
Statistical data are demonstrated as mean ± SEM, and the n samples are indicated in the figure legend for each analysis. The statistical analysis to determine significance is described in each figure legend. The p values are labeled as follows: *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001; and ****p ≤ 0.0001. Differences were considered statistically significant at p ≤ 0.05. Data were plotted and analyzed using either GraphPad Prism 5.0 software (GraphPad Software by Dotmatics) or PyCharm CE (JetBrains).
Aβ ELISAs
To determine the content of Aβ1-40 and Aβ1-42 fragments of APP in conditioned media, equal numbers of HeLa cells were mock transfected or transfected with a plasmid encoding APP-GFP using Lipofectamine 2000 and incubated for 20 h. After incubation, cells were washed three times with Opti-MEM and incubated for another 16 h in Opti-MEM. After incubation, conditioned media were collected, cleared from cell debris, and equalized for total protein content. The levels of secreted Aβ1-40 and Aβ1-42 were determined using commercially available kits (Thermo Fisher Scientific; catalog nos.: KHB3481 and KHB3544, respectively), according to the manufacturer's instructions. The ELISA data were from six independent experiments.
Data availability
All data are included in the article and supporting information.
Supporting information-This article contains supporting information. | 2022-06-25T15:07:23.213Z | 2022-06-01T00:00:00.000 | {
"year": 2022,
"sha1": "d4bc51e6dee5f0d1bcf79bd846cfa74a069c186e",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/article/S0021925822006147/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "17b8b05d211f99ca08a392e4d9551ed7c7dae50c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
254108679 | pes2o/s2orc | v3-fos-license | Propagation of monochromatic light in a hot and dense medium
Photons, as quanta of electromagnetic fields, determine the electromagnetic properties of an extremely hot and dense medium. Considering the properties of the photons in the interacting medium of charged particles, we explicitly calculate the electromagnetic properties such as the electric permittivity, magnetic permeability, refractive index and the propagation speed of electromagnetic signals in an extremely hot and dense background. Photons acquire a dynamically generated mass in such a medium. The screening mass of the photon, the Debye shielding length and the plasma frequency are calculated as functions of the statistical parameters of the medium. We study the properties of the propagating particles in astrophysical systems of distinct statistical conditions. The modifications in the properties of the medium lead to the equation of state of the system. We mainly calculate all these parameters for extremely high temperatures of the early universe.
Introduction
We re-investigate the behavior of the first generation of leptons and the corresponding electromagnetic field in extremely hot and dense medium of the electrons below the electroweak temperature. This system is composed of different types of particles but the significant contribution comes from the light particles with masses lower than the existing temperature in the early universe. The overall behavior of particles in a medium is a net result of interaction of propagating particles with the medium. The background corrections become much more significant at high temperatures and densities where the propagating particles can modify the properties of the medium [1][2][3][4][5][6]. We consider the light mass particles in a heat bath of the electrons and photons at temperature below the decoupling temperature. Thus the higher generations of leptons are not expected to be affected signifa e-mail: masood@uhcl.edu icantly by temperatures that are much below their masses, or to affect these themselves. Radiative background corrections due to the heavy intermediate vector bosons of electroweak interactions are also suppressed because of their heavy mass. Therefore, we study the background contribution of radiation while it is interacting with matter and the temperature of hot electrons is below 2 MeV, the decoupling temperature. The system is considered to be in thermal equilibrium in specified regions of the stellar bodies.
In this paper, we study a pure gas of the electrons and photons that can be converted into electromagnetic plasma of the photons and electrons at high temperatures which are sufficiently smaller than the W and Z masses. Background contributions of interactions between electrons and photons in the medium are incorporated through statistically corrected propagators with the distribution function of the photons and electrons in the medium. However, electromagnetic properties of the propagating neutrinos (with nonzero mass) will be modified due to the electron's induced magnetic moment. The heat bath with a high concentration of the electrons and photons can still be considered as a relativistic plasma of the electrons and photons (for μ m). Photon acquires a temperature dependent screening mass and a Debye screening length which can be calculated from the longitudinal component of the vacuum polarization tensor. The electromagnetic properties of the medium are modified which may cause the phase transition under suitable conditions. The photon can then be treated as a plasmon as they transfer certain amount of energy to electrons and are responsible for the radiative corrections. The plasmon at higher energy can decay into a neutrino-antineutrino pair, which can couple with the photons in a medium through electrons as a higher order effect. The magnetic moment is a perturbative effect. First order radiative corrections in Fig. 1 indicate the magnetic moment of the electron (Fig. 1a) and neutrino (Fig. 1b, c). Electron induces a nonzero magnetic moment to neutrino due to the interaction of the electron with the neutrino in e − ν e → e − ν e in the minimal standard models with very tiny mass of the First order radiative corrections to plasmons in the electroweak model results into a coupling of the photon with electron or neutrino or the interaction of leptons with the magnetic field. Radiative corrections to electromagnetic vertex (a), the tadpole diagram (b) corresponds to a neutral current and gives a nonzero contribution for an asymmetric combination of an electron-positron background only. The bubble diagram (c) gives a major contribution to the magnetic dipole moment of the neutrino neutrino, as the magnetic moment is the property of matter. Thermal background corrections due to the second or higher generation of particles will always be suppressed even if those particles are injected in the medium from outside.
Calculational scheme
The properties of the electrons as electromagnetically interacting particles in an extremely hot and dense system are studied using the renormalization scheme of quantum electrodynamics (QED) in statistical media for different ranges of temperature and chemical potential [7][8][9][10][11][12][13]. We use the renormalization scheme of QED in a real-time formalism. This scheme is valid in a heat bath of real particles below the decoupling temperature. All of the Feynman rules of QED remain unchanged. The statistical effects are included through the statistical distribution functions. Massless vector boson interaction is incorporated by the Bose-Einstein distribution function and the Fermi-Dirac distribution function [14] is used for the fermionic contributions, Equation (1) corresponds to a closed system with equal and opposite chemical potential (μ) of the fermions and antifermions in a CP symmetric background. The first term in parenthesis corresponds to the particle distribution, whereas the second term corresponds to the antiparticle distribution in hot and dense medium. It is convenient to expand the distribution functions of particle and antiparticle in powers of mβ (for constant μ), where m is the mass of the corresponding particles and β = 1/T. All the statistical parameters μ, T and β are expressed in units of m, the electron mass. In a heat bath of the electrons, at very high temperatures, the properties of the electrons are modified corresponding to temperature and density of the system. The physically measurable values of the electron mass, charge and wave function of the electrons in a medium are calculated as renormalization constants [6] of QED in a hot and dense heat bath for different ranges of temperature and chemical potential. Without getting into details of the calculations, we use the physically measurable parameters of the propagating particle with the renormalization constants of QED to determine the electromagnetic properties of the medium which can be converted into a relativistic plasma. The thermal contributions (calculated in a real-time formalism) to the renormalization constants of QED are combined with the relevant renormalization constants in vacuum to find the renormalized parameters of the theory in a statistical medium. These renormalized finite quantities correspond to the physically measurable values of the parameters such as electron mass [7,8], charge [9][10][11], wave function [12] and the magnetic moments [13,[15][16][17][18][19][20][21]. We can then replace the rest mass of the electron by the physically measurable renormalized mass: The superscripts R and Phys correspond to the renormalized parameters and physically measurable quantities, respectively. The corresponding relations between renormalized wave function of electron and that of the corresponding vacuum value are given as such that the probability of finding particles in certain states becomes a function of the statistical parameters of the medium. The electromagnetic fields is expressed as and the physical mass Eq. (2a), wave function Eq. (3a) and the electromagnetic fields Eq. (4a) give the physically measurable values of the corresponding parameters in a hot and dense medium. The QED Lagrangian of such a system can then be written as In this scheme of calculations, the renormalization constants of QED are considered to be the effective parameters of the theory. The renormalization constants of QED give the physical mass and the charge of the electrons and the corresponding wave function at finite temperature and density. The vacuum polarization tensor μν for such a system can be written by replacing the photon and electron propagator in vacuum by the one in the medium in a real-time formalism such that 4 Tr γ μ (p + K + m) γ ν (p + m) with [14] Γ whereas K α , the 4-momentum of the photon, is related to the energy ω and momentum k: with β μν (K , μ) = − 2πe 2 2 The polarization tensor β μν (K , μ) can generally be written in terms of the longitudinal and transverse components L (k, ω) and T (k, ω), respectively, such that it satisfies the relation Whereas the transverse component of the polarization tensor P μν can be written in terms of the parameters of the statistical medium: and the longitudinal component of the polarization tensor Q μν is expressed in the statistical medium: whereas, and Such that they satisfy the conditions: The structure of the vacuum polarization tensor becomes clearer if we put it in the component form, Defining b i j = k i k j K 2 , the elements of the polarization tensor P μν can be expressed as whereas the elements of the polarization tensor Q μν attain the form in vacuum. In the absence of the longitudinal component of the photon (in vacuum), Eq. (11) reduces to showing that all the light is transversally polarized such that ε(K ) = 1 and μ(K ) = 1, as well. Transversality of photon is the property associated with the masslessness of the photon. When the photon acquires a plasma screening mass at nonzero temperature, it gives a nonzero contribution to the longitudinal component of the polarization tensor, which results in slow down or trapping of the electromagnetic signal in a medium. The reduction in the speed of electromagnetic waves depends on the statistical properties of the medium. A nonzero parallel component in three dimensional space can still maintain the circular polarization in an isotropic medium and the possibility of trapping of light in a medium affects the transverse propagation in an anisotropic medium in extreme statistical conditions. Due to the mass of the photon light slows down in the transverse direction by losing some energy to longitudinal direction. Temperature and density corrections to the QED parameters in a hot and dense medium are reviewed in the next section. The magnetic field is not included explicitly. However, in the presence of charged leptons at high temperatures, significantly large magnetic field is expected. In a closed system with isotropic matter distribution, a constant magnetic field is expected at a constant temperature. Therefore, at a given temperature, magnetic field effect is incorporated through the potential energy contribution to the charged leptons and the energy of particles will be modified in the presence of the magnetic field as where corresponds to the Landau level and B is a constant magnetic field. B can be replaced by the time varying magnetic field to incorporate the change in magnetic energy with time. We postpone a detailed study of this effect of different type of magnetic fields for now.
QED parameters in a medium
The thermal corrections to the QED parameters can be written as a function of temperature T and chemical potential μ in the form of Masood's a, b, c, . . . functions expressed as a i (mβ, μ) [6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23] and referred to as Masood's functions hereafter, The first term in brackets is a measure of thermal corrections related to the increase in kinetic energy due to the coupling of particles with radiation, whereas the a, b and c functions are evolved from the integration of the Fermi-Dirac distribution, and it vanishes at low temperature where the presence of hot fermions is negligible in the system. We have where +(−)μ correspond to the chemical potential of the fermion (antifermion) in the medium. The wave function renormalization constant of QED can be written as [6] and the charge renormalization constant is calculated [6] to be The photon in the medium develops a plasma screening mass which can be obtained from the longitudinal and transverse component of the vacuum polarization tensor L (0, k) and T (k, k) where K 2 = ω 2 −k 2 and corresponds to monochromatic light.
In this scheme of calculations, longitudinal and transverse components ( L and T , respectively) of the vacuum polarization tensor μν play a crucial role in the calculation of the electromagnetic properties of a medium. The electromagnetic properties such as electric permittivity ε(K ) and magnetic permeability μ(K ) as well as the refractive index, propagation speed and the magnetic moment of different particles in the medium are studied by using these basic properties of the medium. The electric permittivity ε(K ) and the magnetic permeability μ(K ) are related [9] to L and T : Whereas at extremely high temperatures, ω = k is not allowed because the above equations are valid at high temperature only. At those temperatures, longitudinal component cannot vanish so ln (0) singularity is ruled out and the Debye shielding length turns out to be whereas, in the relativistic plasma, it depends on the temperature quadratically. Now the electric permittivity ε(K ) and magnetic permeability μ(K ) of such a medium can be calculated from the longitudinal and transverse components [9] to be Equations (21) and (23) show the dependence of the longitudinal and transverse components of the polarization as well as the electric permittivity and the magnetic permeability of the medium as a function of ω and k corresponding to the propagating electromagnetic waves. The dependence of ε(K ) and μ(K ) on the temperature induces temperature dependence to the propagation velocity and the refractive index of the medium. These quantities also vary with the energy and the momentum of the photons. This is a distinct feature of the extremely hot medium of the early universe that the refractive index will depend on the wave properties as well as the temperature of the medium. The speed of propagation of electromagnetic waves in such a medium can be expressed as and the refractive index of the medium turns out to be It is well known that a nonzero longitudinal component in the early universe corresponds to the Debye length and should be a quadratic function of temperature whereas in a classical plasma the temperature dependence of the shielding length reads
Propagation of light in the early universe
It is well known from the thermal history of the universe that temperature of the universe was extremely high and the chemical potential was extremely low in this radiation dominated era. Therefore, the valid limit of temperature in this situation is T m μ. In this section, we evaluate all Masood's functions for extremely high temperatures in the given ranges of the photon momenta and energy. The dominant thermal contribution comes from the interaction with the radiation at the equilibrium temperature T and fermions at the same temperature. The values of a i (mβ, μ) for extremely high temperature, only the c(mβ, μ) term contributes, giving And in the extremely high temperature limit (T m μ) with ignorable density the longitudinal and transverse components of the vacuum polarization tensor can be written as Equation (27) can be evaluated, using (26), for two extreme conditions based on the properties of light: (i) For large energy of the photons, ω k and the refractive index of the medium turns out to be (ii) For large momentum of the photons, ω k The thermal correction to the speed of the propagation of light in such a medium is The refractive index of this medium is given by Equation (34) puts a natural limit on the values of k. Moreover, k 2 = e 2 T 2 6 is not physically allowed as it will give an infinite speed and zero refractive index. These equations cannot be true for T = 0 as well. The values of the electric permittivity and magnetic permeability depend on the values of the plasmon frequency and the wave number at a given temperature. The propagation speed v prop and the refractive index n become a function of temperature, energy and momentum of the photon. It is also to be noticed that the thermal contributions in Eq. (27)) start at T > 0.5 MeV. The electromagnetic properties of the medium cannot see any thermal contributions after the nucleosynthesis [13,15] is over in the early universe. This limit is below the neutrino decoupling temperature, indicating that the nucleosynthesis started right after the temperature of the universe dropped below the neutrino decoupling temperature and the neutrino capture contributed to beta processes during nucleosynthesis. All the calculations in this paper are relevant for the temperatures below the neutrino decoupling temperatures and higher than the electron mass. In other words, these calculations are valid after the neutrino decoupling and before the nucleosynthesis totally stops. It shows that the properties of the medium change tremendously during nucleosynthesis.
The existence of hot electrons in a medium at such temperatures ensures a significant effect on physically measurable values of the electron mass, charge and concentration of the electrons in a medium, which leads to the change in the electromagnetic properties in terms of magnetic moments of leptons, electric permittivity and magnetic permeability of the medium. The electromagnetic properties of a medium then work differently on the particles that propagate through this medium.
The magnetic moment of charged particles
Relativistically moving charged particles have an associated electric field and their relativistic motion at high temperatures create weak non-negligible currents in extreme situations in a medium. When charged particles are accelerated in a medium and a continuous change in energy occurs through acceleration of particles along with the change in temperature of the system, an associated magnetic field is generated. In such an electromagnetic system, there are localized electromagnetic fields that are associated with the distribution of charged particles in the medium. The magnetic moment is associated with charge and mass of leptons. Lighter particles have large magnetic moment effect and thermal contribution is higher for the lighter particles as temperature is always compared to the mass of the particles for thermal effects. Renormalization scheme gives a change in mass in Eq. (2). The charge of the electron is not affected until the temperature is extremely high as indicated in Eq. (5).
The magnetic moment is simply calculated from the change in mass in thermal background given as μ B is the unit of magnetic moment called Bohr Magneton. So the statistical corrections to the magnetic moment of the electron is directly proportional to the statistical self-mass corrections, given in Eq. (16).
Generalization of results
A straightforward generalization of all the above results can be done by evaluating Masood's functions a i such that a 1 = a(mβ, μ), a 2 = b(mβ, μ), a 3 = c(mβ, μ), and so on. All of the Masood's statistical functions a i correspond to electron background contributions in the above equations and always correspond to fermion background contribution for T > μ. Evaluating the above equations, we just consider the electron background as m is taken as the electron mass. Thus these functions correspond to the electron background for temperatures higher than the electron mass. The positrons' contribution in the same medium can be expressed by replacing μ with a negative μ. Everything else remains unchanged. Net contributions from the CP symmetric background can be obtained by taking the average of particle and antiparticle background contributions and the net background contributions can be obtained by replacing the a i functions by the corresponding difference functions, giving [6] a avg (mβ, μ) such that, in the limit T μ, the chemical potential contribution is ignorable and hence the thermal effects dominate, such that The average background contribution can be obtained by the corresponding functions considering Eq. (17), It can easily be seen that at extremely high temperatures, the fermion contribution from the medium is controlled by c(mβ, μ) and is given in Eq. (26).
Results and discussions
We reiterate some results from the previously calculated renormalization constants of QED in extremely hot and dense media using a real-time formalism. Equations (16)- (19) give hot and dense background corrections to the electron mass, wave function and charge comes, making renormalization constants as a part of physically measurable parameters of the statistical system. It can easily be seen that thermal behavior at high temperatures is clearly a quadratic behavior and all the parameters of the system becomes functions of temperature with (T /m) 2 , as dominant contribution. The photon background contribution is always proportional to (T /m) 2 whereas the dominant fermion contribution is also the same way at very high temperatures. It has already been noticed that all the parameters behave in a more complicated way during the nucleosynthesis [13,15] in the early universe and depend on Masood's functions. The thermal contribution to the electron mass, wave function and charge of the electron at high temperature is plotted in Fig. 2, showing a comparison between all three renormalization parameters of QED. The mass dependence is substantial and is non-ignorable even at T < m. However, charge renormalization is not contributed if enough fermions are not available in the system for the photon interaction, so a background effect cannot be seen below the temperatures of 10 10 K, i.e., equal to the electron mass. All the parameters have a dominant quadratic dependence on the temperature with different coefficients. The thermal contribution to the electron self-mass is a couple of orders of magnitude greater than the electron charge and the coupling constant of QED. It can be seen from Eqs. (16)- (19) that the wave function correction is much smaller than the electron mass or even the charge of the electron. It can easily be seen that the mass contribution dominates over the charge contribution because electron mass has radiative corrections by its interaction with the radiation (photon) in the background. However, charge does not see the photon and thermal contribution is only due to the fermion background. Thermal corrections to the electron wave function are very At T > m, the photon acquires a dynamically generated mass which increases with temperature in the presence of a high concentration of electrons in a medium. The QED parameters indicating properties of the electrons at high temperature are plotted in Fig. 2.
Self-mass corrections to electron affect the magnetic moment of electron which is also proportional to the T 2 /m 2 . Figure 3 gives the magnetic moment of the electron as a function of temperature as the electron mass keeps on increasing with temperature. However, the presence of charged fermions in the background affects the electromagnetic properties of the electron also. Figure 3 plots the thermal contributions to the magnetic moment as compared to the thermal contribution to the mass of electron.
The magnetic moment is a form factor and is a property of the mass and charge whereas neutrino is a massless neutral particle. So the nonzero magnetic moment can only be obtained through the weak interaction of the neutrino with the corresponding charged lepton as shown in Fig. 1. In the standard model massless and neutral neutrino cannot interact with the magnetic field and exhibits zero magnetic moment. It is a higher order process if neutrino has some mass. In this way, the magnetic moment of the neutrino will depend on the extension of the standard model and will be a model dependent quantity. We just consider the minimally extended standard model with the neutrino mass as 1 MeV just to compare the thermal mass of the electron and the magnetic This comparison is done at extremely high temperatures. The induced magnetic moment of the neutrinos is plotted in Fig. 4, where the magnetic moment of the neutrino is plotted as a function of temperature for the upper limit of the mass of the electron type neutrino around 1 eV. In this first order correction, the minimal standard model obeying the conservation of the individual lepton number is considered. Temperature correction is suppressed for neutrino because the temperatures is compared to the mass of W, instead of the electron as W boson is the loop partner of the electron in the bubble diagram. Therefore, the background contribution to The ratio of (thermal) electrons background corrections to the magnetic moment of the electronic type neutrino with the corresponding vacuum value of the magnetic moment is plotted as a function of temperature below the neutrino decoupling temperature to extract the pure background effect the magnetic moment of neutrino is induced as almost 10 −10 times the corresponding contributions to the electron mass which is exactly of the order of (m 2 /M 2 ) and is of the order of the square of the ratios between the mass of the electron and the mass of W boson. Figure 4 shows that this induced magnetic moment of the neutrino in the minimal standard model is negligibly small, being a higher order effect. An explicit comparison of all these values is given in Table 1. It shows the values of all of the QED parameters such as electron properties and the magnetic moment of the electron for a given value of temperature including the induced magnetic moment of the neutrino. However, it is clear from the last column of the table that the ratio of the magnetic moment of the electron type neutrino in a lepton number conserving minimally extended standard model with that of the electron is constant which is actually proportional to the ratio between their masses.
The magnetic moment is a property of the mass. The thermal corrections to the magnetic dipole moment of the neutrino are actually due to its swallowed mass in thermal background. This expected behavior is demonstrated in a plot of the electron mass and its corresponding modification in the dipole moment as a function of temperature. Figure 3 shows a clear demonstration of this behavior of the electron.
It is also interesting to note the vacuum polarization contribution due to the presence of fermions in the background for T > m. Neutrino magnetic moment can be induced the same way even if the neutrinos are not decoupled (below 2 MeV) as they acquire the induced magnetic dipole moment by the electrons which can occur even if the neutrino concentration is lower but there are enough electrons in the medium.
The properties of the photons at high temperature Photon, as a quanta of energy acquires nonzero mass in a medium with an abundance of electrons with extremely high energy at high temperature. The electromagnetic interaction of the photon with electrons in a medium gives a dynamically generated mass to the photon which can be treated as the screening mass and the Debye shielding turns the medium into an electron-photon relativistic plasma under suitable conditions. The photon is massless at lower temperature and density and the phase transition in a medium occurs at temperatures greater than 1 MeV where the fermion background starts to contribute to the self-energy of the photon which leads to Debye shielding due to the dynamically generated mass of the photons. The behavior of the dynamically generated screening mass and Debye shielding length as a function of temperature is given in Fig. 5. The Debye screening length decreases with the increase in temperature and with the increase in screening mass of the photon. At a temperature around 3-4 MeV, the Debye length decreases, compared Refractive Index T/m to the screening mass itself. The Debye length corresponds to the potential energy and decrease in the potential energy is expected with temperature. The number of particles in the Debye sphere is calculated from the Debye shielding length λ D , if the number density remains unchanged. We have N D = 4/3π n 0 λ 3 D Thus the decrease in λ D with temperature is associated with the number density of the universe. The composition of the universe changes with temperature and so does the N D due to the change in mass as well as the other parameters of the theory such as λ D and the propagation velocities.
We consider the small longitudinal component and the magnitude of the wave-vector k is much larger than the frequency ω, making it a very high energy wave.
However, the velocity of waves in a medium grows quickly with temperature whereas the refractive index decreases relatively slowly with temperature in such a medium (Fig. 6).
The above results indicate that QED can be used as the only theory below the neutrino decoupling temperature and that is 2 MeV. As soon as the temperature of the system crosses this limit, concentration of the neutrinos become significant and the weak interactions come into play, some of the energy is used in weak interaction and is not ignorable with the rise of temperature.
It can be clearly seen that the thermal corrections lead to a quadratic increase in the measurable values of the physical parameters of the theory (Figs. 2, 3, and 4). However, the interaction-based bulk properties such as the propagation speed, magnetic moment and the screening mass show different behavior. This difference in behavior balances some of the effects and keeps open the possibility of the existence of physical systems at extreme temperatures (kinetic energies) and large densities or chemical potential (potential energies) and magnetic field helping to develop an equilibrium and maintain the system.
The properties of the neutrinos are related to the massive neutrinos, which opens up a whole list of possible extensions of the standard model to accommodate massive neutrinos. The thermal effects on the properties of the neutrinos are highly model dependent [21][22][23][24] and can only be calculated individually for every model. Even the Dirac and Majorana mass will contribute differently to the magnetic moment.
It can be clearly seen that all the above equations of Sect. 4 reproduce the already existing results in the tranversality limit where ω 2 ≈ k 2 , we see that propagation velocity, electric permittivity and magnetic permeability are all equal to unity.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecomm ons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Funded by SCOAP 3 . | 2022-12-01T15:37:56.987Z | 2017-12-01T00:00:00.000 | {
"year": 2017,
"sha1": "a7da7e16ecd980e5141910cd663edbfd7f6d7b39",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-017-5398-0.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "a7da7e16ecd980e5141910cd663edbfd7f6d7b39",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": []
} |
249856174 | pes2o/s2orc | v3-fos-license | Effects of Different Radiation Sources on the Performance of Collagen-Based Corneal Repair Materials and Macrophage Polarization
Owing to the lack of donor corneas, there is an urgent need for suitable corneal substitutes. As the main component of the corneal stroma, collagen has great advantages as a corneal repair material. If there are microorganisms such as bacteria in the corneal repair material, it may induce postoperative infection, causing the failure of corneal transplantation. Therefore, irradiation, as a common sterilization method, is often used to control the microorganisms in the material. However, it has not been reported which type of radiation source and what doses can sterilize more effectively without affecting the properties of collagen-based corneal repair materials (CCRMs) and have a positive impact on macrophage polarization. In this study, three different radiation sources of ultraviolet, cobalt-60, and electron beam at four different doses of 2, 5, 8, and 10 kGy were used to irradiate CCRMs. The swelling, stretching, transmittance, and degradation of the irradiated CCRMs were characterized, and the proliferation of human corneal epithelial cells on the irradiated CCRMs was characterized using the CCK8 kit. The results showed that low dose (<5 kGy) of radiation had little effect on the performance of CCRMs. Three irradiation methods with less influence were selected for the further study on RAW264.7 macrophage polarization. The results indicated that CCRMs treated with UV could downregulate the expression of pro-inflammatory related genes and upregulate the expression of anti-inflammatory genes in macrophages, which indicated that UV irradiation is a beneficial process for the preparation of CCRMs.
INTRODUCTION
Corneal diseases and injuries are common causes of visual impairment, with high prevalence and strong blindness. 1−4 For many corneal diseases, there is no permanent treatment. Therefore, allogeneic corneal transplantation is still the most effective method for patients with corneal diseases. However, a study from Sweden showed that high efficacy is effective in the short term, while a 15% exclusion rate will still lead to 10% failure in 2 years. In the long run, the failure rate of allogeneic keratoplasty will increase, and the life span of penetrating keratoplasty is usually limited to 30 years. 5 Because of the shortage of suitable corneal tissue donors, transplantation rejection, and the increased risk of disease transmission, it is difficult for regenerative therapy to obtain the expected effects and meet the growing medical needs. Therefore, to solve the above problems, there is an urgent need for substitutes of corneal tissue.
Bioengineered artificial cornea tissue should have structural, chemical, optical, and biomechanical properties close to natural tissue. The properties of native cornea are mainly provided by the corneal stroma (mainly composed of collagen type I), which accounts for 90% of corneal thickness. 6,7 Collagen type I is the main component of corneal stromal layer, 8,9 so collagen as corneal regeneration material has incomparable advantages over other natural polymer materials. Since 2018, our research group has successfully prepared collagen-based corneal repair materials (CCRMs) for corneal lamellar transplantation by different methods. 10−15 Meanwhile, control of microorganisms on the CCRMs is quite important for their performance during the in vitro cell experiments and in vivo animal studies. Irradiation is often used to control the microorganisms in the material, but which type of radiation source and what doses can effectively control the microorganisms on CCRMs without affecting their properties and the effect on macrophage polarization have not been reported yet.
Irradiation using ultraviolet rays (UV), γ-ray (Co-60), or electron beam (EB) has been widely used in various fields. These are energy-efficient lab techniques that have high-quality control, leave no harmful residue, and can be carried out at room temperature. 16−18 For the bactericidal effect, increasing the irradiation dose is undoubtedly more favorable, but highdose irradiation will lead to the deformation and structural damage of collagen, resulting in the loss of its original function and value. 19,20 There are some literature studies that reported that γ-irradiation and electron beam-irradiation at doses of 2 kGy did not affect the mechanical properties of ECM hydrogel but the dosage of 30 kGy reduced their mechanical properties; in addition, γ-irradiation and EB-irradiation at doses of 2 kGy could achieve the sterilization efficacy of more than 80%; 21 collagen condensation and hole formation happened when dry ECM matrix was treated with γ-irradiation (2−30 kGy), producing a reduction of swelling ratio, elasticity, and stability; moreover, γ-irradiation (12 kGy) caused significant damage to native dermis ECM, even at moderate dose. 22 We also found the phenomenon from our previous experiments that when the Co-60 irradiation dose is lower than 10 kGy, the irradiation has no significant effect on the appearance and properties of collagen. When the irradiation dose reaches 15 kGy, the collagen becomes obviously hard, and when the irradiation dose reaches 20 kGy, the collagen denatures, and its color changes. Although there are some directive standards, different materials are manufactured under their own set of conditions; therefore, the appropriate irradiation dose should be chosen case by case to meet the requirements. Considering our previous study and the very thin thickness of our CCRMs, in this study, an irradiation dosage lower than 10 kGy is used.
Macrophages are widely distributed in all tissues of the body and are a key factor in inducing inflammatory immune response. 23 Under physiological conditions, macrophages that reside in human tissues are maintained by self-renewal. 24 After tissue injury, monocytes can be recruited from circulation to differentiate into macrophages under the induction of chemokines and adhesion molecules. Macrophages as an important part of nonspecific immunity are the first line of defense against foreign stimuli. They play an important role in phagocytosis, killing pathogenic microorganisms, processing and presenting antigens, repairing damaged tissues, and regulating specific immune responses. 24 Macrophages present in different tissues are polarized according to changes in their environment, forming different macrophage subtypes, such as M1 macrophages and M2 macrophages. 23 The microbial component lipopolysaccharide (LPS), toll-like receptor ligand, or interferon-γ (IFN-γ) can drive macrophage polarization to the M1 phenotype, while interleukin 4 (IL-4), interleukin 10 (IL-10), interleukin 13 (IL-13), or transforming growth factorβ (TGF-β) can induce macrophage polarization to M2. 25 M1 macrophages are capable of pro-inflammatory responses through both the signal transducer and activator of transcription 1 (STAT1) signaling pathway and the nuclear factor (NF)-κ B signaling pathway and produce pro-inflammatory related factors such as IL-6, IL-12, and tumor necrosis factor-α (TNF-α). In contrast, M2 macrophages are capable of antiinflammatory responses through activating STAT6 signaling pathway and produce anti-inflammatory related factors such as IL-10, platelet-derived growth factor (PDGF), TGF-β, and vascular endothelial growth factor (VEGF), which induces the repair in damaged tissues. 25−27 Therefore, the direction of macrophage polarization in damaged tissues can be regulated by drug or material interference to change to the desired phenotype.
Herein, CCRMs were irradiated by different radiation sources and doses, and its physical and chemical properties were characterized. Moreover, the effects of CCRMs treated by different irradiation methods on macrophage polarization were also studied.
RESULTS AND DISCUSSION
2.1. Physical Characterization. Cornea is an aqueous soft tissue. The water content of human cornea is 75−80%. 28 Water absorption and swelling of the irradiated CCRMs were plotted on the average of three trials in Figure 1a,b. Because CCRMs are a similar hydrogel material, the swelling and water absorption rates were tested, which showed a high swelling behavior. The water absorption of nonirradiated CCRMs shows similar results to that of native human cornea (about 80%) 29 but continuous water absorption and swelling due to its instability. The CCRMs treated with UV have lower water absorption and swelling rates, which may be because UV can crosslink collagen type I and change its internal structure. 30,31 However, the CCRMs treated with Co-60 show a higher water absorption and swelling rate than the CCRMs treated with UV, which may be attributed to the significant variation of Co-60 on the collagen molecular structure, fibril hydrothermal stability, and macromolecular chain's mobility within 10 kGy dose. 32 The CCRMs treated with EB have lower water absorption and swelling than nonirradiated CCRMs, which may be because EB can crosslink corneal fibers, resulting in a tighter structure. 33 As mentioned earlier, Co-60 can destroy collagen molecular structure. 32 The CCRMs treated with Co-60 show poor tensile strength, which is significantly different from the control group (p ≤ 0.05), while the irradiation dose of Co-60 exceeds 5 kGy ( Figure 1c). The CCRMs treated with Co-60 (2 kGy) have lower tensile strength than the control group, but there is no significant difference (p ≥ 0.05). The tensile strength of the CCRMs treated with EB decreases with the increase in irradiation dose, and the dose of 2 kGy has no significant effect on CCRMs (Figure 1c). The results show that Co-60 or EB can crosslink collagen, but the crosslinking will be excessive when the irradiation dose exceeds 2 kGy, resulting in stiff and brittle CCRMs.
One function of the cornea is to act as a protective barrier for the internal structure of eye. Another important function is to make light pass through the pupils and converge into the retina of eye fundus for imaging, which is similar to the lens of a camera. 34 If the transmittance of bioengineering artificial cornea is poor, it will lead to blurred vision and no implantation need. CCRMs will degrade over time, but CCRMs with good light transmission will increase patients' confidence in restoring health. Light transmittance of all CCRMs increases with the increase of wavelength (Figures 1d and S1), which is similar to that in human cornea. 35 As shown in Figure 1d, there are no differences (p ≥ 0.05) in the light transmittance of the irradiated CCRMs in the dry state, similar to that of human cornea (93.2 ± 3.2%), but there are significant differences (p ≤ 0.05) in the irradiated CCRMs after rehydration. This result is determined by the crosslinking density of CCRMs. 36 If the crosslinking degree increases, the transparency will increase. 37 The corneal ECM is an optically clear hydrogel comprised primarily of collagen and proteoglycans. 38 It is known that crosslinking density governs the physical properties of a hydrogel, 39 as demonstrated by our previous work. 35 When the crosslinking density of CCRMs is increased, light transmittance increases, 36 as do the degree of material stiffness and brittleness, 40,41 which limit surgical handling while decreasing the swelling ratio. Therefore, the crosslinking density, which affects the physical properties of CCRMs, must be balanced to ensure optimal performance for corneal repair.
2.2. In Vitro Degradation. Biomaterials should have sufficient stability against collagenase degradation to provide an environment for cells as scaffolds. 14 For our CCRMs, ideally, they can provide an environment for cell growth, so that human corneal epithelial cells (HCECs) can heal quickly and isolate the external environment in the early stage. Under the degradation of the CCRMs, the corneal stromal cells grow into the materials and secrete extracellular matrix to reshape the cornea. In the present study (Figure 2), the CCRMs treated with UV prevent the enzyme from entering the collagen molecule within 24 h and reduce the degradation of collagen. 30 After collagenase treatment for 24 h, the degradation rate is faster than that of the control group (the nonirradiated CCRMs). This may be because crosslinking sites are limited to the surface of CCRMs after UV treatment. 30 The degradation rate of the CCRMs treated with Co-60 is significantly faster than that in the control group, which could be because Co-60 can destroy the structure of collagen and expose more enzyme reaction sites of collagen. There are no clear rules of different doses of Co-60. EB has no obvious effect on human cornea, 42 but it impaired the properties of collagen-based materials 43 and has almost no crosslinking effect on amniotic membranes. 44 From the results, it was noticed that there was no regularity about different EB doses on the degradation behavior of CCRMs, but EB (10 kGy) had a more significant impact compared with the control group. This conclusion is consistent with the previous conclusion that high-dose EB can destroy CCRMs.
Cell Proliferation.
To determine whether the CCRMs after irradiation could influence cell proliferation, HCECs were seeded in a well plate with the extract of the irradiated CCRMs and culture medium, and HCECs were detected after 1, 3, and 5 days of culture using a CCK8 kit. In the first 3 days (Figure Interestingly, the CCRMs treated with Co-60 can promote the proliferation of HCECs, which is significantly different (p ≤ 0.05) from the control group on day 5. The results suggest that collagen treated with Co-60 is conducive to cell proliferation, but Co-60 is not suitable as the irradiation candidate of CCRMs due to its ability to destroy collagen. Although EB (≥ 8 kGy)-irradiated CCRMs can also promote cell proliferation, it will make CCRMs stiff and brittle, which deteriorates its performance in application. The CCRMs treated with UV cannot promote the proliferation of HCECs, but there was no difference with the control group. In addition, the results show that the CCRMs irradiated with the dose less than 10 kGy have no cytotoxicity and can be safely used in animal experiments or in clinics.
Expression of Genes Related to Macrophage Polarization.
Combined with the swelling, water absorption, tensile strength, light transmittance, and in vitro degradation performance of CCRMs treated by different irradiation methods and irradiation doses, macrophage gene expression was also studied. The dose of 2 kGy was selected for the following experiment, because 2 kGy irradiated dose is enough to control the bacteria in CCRMs without affecting the properties of collagen. The extract of CCRMs (UV, Co-60 (2 kGy) and EB (2 kGy)) was incubated in the agarose medium, and no colony was found within 1 week ( Figure S2). The result shows that the dose of 2 kGy can meet the requirements of CCRMs sterilization; this probably is due to the ultrathin structure of the CCRMs (about 40 μm). Interestingly, it is found from Figure 4 that the CCRMs treated with UV hardly expressed IL-6, IL-1β, iONS, and Arg-1 (p ≤ 0.05) but highly expressed CD 163 and IL-10 compared with the blank control group (p ≤ 0.001), which indicates that the CCRMs treated with UV can regulate M2 macrophages and inhibit the secretion of inflammatory factors. Conversely, the CCRMs treated with Co-60 (2 kGy) group shows a high expression of iONS (p ≤ 0.01), and the CCRMs treated with EB (2 kGy) group has a high expression of IL-6 and IL-1β (p ≤ 0.05) compared with the blank control group. The results show that the CCRMs treated with Co-60 (2 kGy) and EB (2 kGy) can activate M1 macrophages. Overall, the CCRMs treated with UV can downregulate the expression of pro-inflammatory-related genes 45 and upregulate the expression of antiinflammatory genes, which may be related to the crosslinking of amino acid residues in collagen by ultraviolet light. 46 2.5. ELISA. To further confirm that the CCRMs treated with UV showed a switch toward the M2 phenotype, the proteins of IL-1β and IL-10 were quantitatively analyzed by ELISA ( Figure 5). The CCRMs treated with UV group still shows a lower level of IL-1β protein and a higher level of IL-10 protein compared with the blank control group, which is consistent with the gene level. Moreover, the CCRMs treated with EB (2 kGy) exhibited a higher level of IL-1β protein but showed no difference when compared with the blank control group (p ≥ 0.05). There is a significant difference between the CCRMs treated with EB (2 kGy) and the blank control group (p ≤ 0.01). The results may be related to the sensitivity of ELISA, which is lower than that of PCR. 47 No matter what, the CCRMs treated with UV can regulate macrophage polarization, the switch from M1 to M2, which is conducive to tissue regeneration.
CONCLUSIONS
Irradiation has been suggested as a means of sterilizing biomaterials. In this study, we chose different radiation sources, including nonionizing (UV) and ionizing (Co-60 and EB) with low intensity to irradiate the CCRMs. Results showed that low doses (<5 kGy) of ionizing radiation had little effect on water absorption, swelling, tensile strength, and light transmittance of CCRMs, while the tensile strength decreased a lot when the dose reached 8 kGy. As for the in vitro cell experiment, both ACS Omega http://pubs.acs.org/journal/acsodf Article nonionizing and ionizing radiations exhibited noncytotoxicity on HCEC cells. Besides, we also found that nonionizing UV radiation, which is much easier to use than Co-60 and EB, also appears to polarize macrophage differentiation to the more tolerogenic M2 phenotype. The above results provide an economical and convenient way to irradiate CCRMs and lay a foundation for the potential clinical application of CCRMs in the future. 4.2. Preparation of CCRMs. The preparation methods of CCRMs are shown in our previous work. 10,12,15,19,35,48−54 Briefly, collagen was dissolved in 0.01 M HCl with a mass ratio of Col/(EDC/NHS) = 6:1 to obtain a final concentration of 6.5 mg/mL, in which the molar ratio of EDC to NHS was 4 to 1. The obtained collagen solution (35 mL) was poured into a disposable bacterial culture dish and air-dried on a clean bench to obtain the collagen membrane. The collagen membrane was rinsed and air-dried before being fumigated with glutaraldehyde for 80 min. After cleaning and air-drying again, CCRMs were obtained. The thickness of all dry CCRMs was controlled at 40 ± 5 μm, and the thickness was 160 ± 11 μm in the fully saturated state. The dry CCRMs were irradiated by three different irradiation methods of UV, Co-60 (2, 5, 8,and 10 kGy), and EB (2, 5, 8, and 10 kGy). UV treatment is carried out with an UV-C lamp (6 W, wavelength of 100∼280 nm). The distance from UV-C lamp to CCRMs is 50−80 cm; the enclosed space required for irradiation is 10,000−16,000 cm 3 ; the UV irradiation time is 30 min per side. Co-60 and EB irradiations were completed by Huada biology (Guangzhou, China). Finally, bacterial presence in CCRMs was identified by culturing the extract of CCRMs with bacterial culture medium, which is shown in Figure S2.
Swelling Test.
To explore the changes of water saturation of the CCRMs with different irradiation methods, the swelling rates of various CCRMs in normal saline were measured. The experimental processes were as follows: the thickness of dry CCRMs was measured at room temperature and recorded as T 0 ; CCRMs were immersed in normal saline for 0, 5, 10, 20, 30, 60, 120, and 240 min, the surface moisture of the CCRMs was sucked dry, and the thickness was measured and recorded as T 1 ; three parallel samples were measured in each group; the swelling rate and water adsorption 4.6. In Vitro Degradation. CCRM samples were put into preweighed bags made of hydrophobic filter cloth (100 mesh, W 0 ), and then the bags with samples were placed in PBS to complete saturation (W 1 ). The bags with samples after rehydration were put into collagenase type I solution (10 U/ mL) for the degradation test. The bags with samples were dried with filter paper at the specified time point and weighed (W 2 ). Fresh collagenase type I solution was replaced every 12 h. The residual mass of samples in collagenase type I solution was calculated by the following equations: 4.7. Cell Proliferation. Differentially irradiated CCRMs (n = 3) with a diameter of 10 mm were placed in a 48-well plate, and then the DMEM medium (500 μL) was added. The CCRMs were deposited at the bottom of the 48-well plate to be fully immersed in the culture medium. The extract medium solution of the irradiated CCRMs was obtained after incubation in an incubator at 37°C and 5% CO 2 for 1 day. HCECs were an immortalized cell line from Eye Center of Sun Yat-sen University. Cells were inoculated into 48-well plates at the density of 5 × 10 3 cells/well. DMEM-basic (1×) supplemented with 10% FBS and 1% penicillin/streptomycin (100 μL) and the extract medium solution of the CCRMs (100 μL) were added to each well in the experimental group, while DMEM-basic (1×) supplemented with 10% FBS and 1% penicillin/streptomycin (100 μL) and DMEM-basic (1×) were added to the control group. The OD value at 450 nm was detected by the CCK8 kit at 1, 3, and 5 days.
4.8. qRT-PCR. The candidate CCRMs with the same size as a 6-well plate were fully saturated with PBS and put into a 6well plate. RAW264.7 macrophages (5 × 10 5 ) in good growth condition were seeded on the candidate CCRMs, and DMEMbasic (1×) supplemented with 10% FBS and 1% penicillin/ streptomycin (3 mL) was slowly supplemented. Three experimental groups (UV, Co-60 (2 kGy), and EB (2 kGy)) and a blank control group were set up in this experiment. After PrimeScript RT reagent kit with gDNA Eraser after the concentration and purity of the extracted RNA were determined via spectrophotometry (NanoDrop2000). qRT-PCR analysis was performed with a SYBR Green System (GeneCopoeia) on an RT-PCR instrument (QuantStudio 6 Flex, Life Technologies). The relative quantification of target genes was performed through normalization to β-actin, and 2 −ΔΔCt method was used to calculate the gene expression. The PCR primer sequences are shown in Table 1. 4.9. ELISA. RAW264.7 macrophages (1 × 10 4 ) were seeded on the candidate CCRMs in a 48-well plate. There were three experimental groups: UV, Co-60 (2 kGy), EB (2 kGy) and a blank control group, with four parallel specimens in each group. DMEM-basic (1×) supplemented with 10% FBS and 1% penicillin/streptomycin (500 μL) was added and cultured for 3 days. ELISA was carried out at the specified time point. The standard curve was drawn using the IL-1β ELISA kit and the IL-10 ELISA kit ( Figure S3), and the concentration of IL-1β and IL-10 proteins secreted by cells was detected using the kits.
4.10. Statistical Analysis. All graphs were prepared using OriginPro 2021b and Adobe illustrator 2021, and data are displayed as means with individual data points or means ± SD. For variables with repeated measures over time, a mixed-effects analysis with Geisser−Greenhouse's correction was performed (α = 0.05) with Tukey's multiple comparisons test for treatment effects by time point (OriginPro 2021b or IBM SPSS Statistics). P ≤ 0.05 was considered to be a significant difference (*P ≤ 0.05, **P ≤ 0.01, and ***P ≤ 0.001).
Light transmission over visible light spectrum (380−780 nm) by UV spectrophotometer, colony diagram of the extract of the CCRMs treated by UV, Co-60 (2 kGy) and EB (2 kGy) after incubation on agar medium for 7 days, standard curve of IL-1β and IL-10 simulated by curve simulation (PDF) | 2022-06-20T15:09:27.621Z | 2022-06-18T00:00:00.000 | {
"year": 2022,
"sha1": "8076e19e6bf2f097cd937ffb0d01fd3c86f390d4",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.2c01875",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3ae70135207fdf829ec1faf3bebd8225efd29053",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
235371252 | pes2o/s2orc | v3-fos-license | A cell-free assay implicates a role of sphingomyelin and cholesterol in STING phosphorylation
Stimulator of interferon genes (STING) is essential for the type I interferon response induced by microbial DNA from virus or self-DNA from mitochondria/nuclei. In response to emergence of such DNAs in the cytosol, STING translocates from the endoplasmic reticulum to the Golgi, and activates TANK-binding kinase 1 (TBK1) at the trans-Golgi network (TGN). Activated TBK1 then phosphorylates STING at Ser365, generating an interferon regulatory factor 3-docking site on STING. How this reaction proceeds specifically at the TGN remains poorly understood. Here we report a cell-free reaction in which endogenous STING is phosphorylated by TBK1. The reaction utilizes microsomal membrane fraction prepared from TBK1-knockout cells and recombinant TBK1. We observed agonist-, TBK1-, “ER-to-Golgi” traffic-, and palmitoylation-dependent phosphorylation of STING at Ser365, mirroring the nature of STING phosphorylation in vivo. Treating the microsomal membrane fraction with sphingomyelinase or methyl-β-cyclodextrin, an agent to extract cholesterol from membranes, suppressed the phosphorylation of STING by TBK1. Given the enrichment of sphingomyelin and cholesterol in the TGN, these results may provide the molecular basis underlying the specific phosphorylation reaction of STING at the TGN.
www.nature.com/scientificreports/ STING activation, besides STING palmitoylation. However, because of the technical difficulty to deplete these lipids in cells, their involvement in the activation of STING signalling pathway has not been addressed.
In this work, we report a cell-free reaction in which endogenous STING is phosphorylated at Ser365 by recombinant TBK1. With this reaction, we demonstrate that sphingomyelin and cholesterol in the Golgi membrane are critical for STING activation.
Results
A cell-free reaction in which endogenous STING is phosphorylated by recombinant TBK1. To directly assess the roles of Golgi lipids in STING activation, we sought to develop a cell-free assay in which endogenous STING is phosphorylated by TBK1. To eliminate any contribution of cytosolic/endogenous TBK1 to STING phosphorylation, we decided to use microsomal membrane fraction prepared from TBK1-deficient cells as a membrane source containing endogenous STING.
TBK1-knockout (KO) mouse embryonic fibroblasts (MEFs) were generated using the CRISPR-Cas9 technology ( Fig. 1a and Supplementary Fig. S1). We then validated the phosphorylation and membrane traffic of STING in these cells. TBK1-KO MEFs were stimulated with DMXAA, a membrane-permeable mouse-specific STING agonist. We chose 60 min for DMXAA treatment, because STING localized exclusively to the Golgi and activated TBK1 at this time point 22 (Fig. 1b). As expected, 60 min after DMXAA stimulation, we observed the phosphorylation of STING at Ser365 in parental wild-type (WT) MEFs (Fig. 1a). In contrast, the phosphorylation was mostly suppressed in TBK1-KO MEFs.
DMXAA-stimulated TBK1-KO MEFs were harvested in isotonic buffer and homogenized. The homogenates were then centrifuged at 3000 × g for 5 min and the resulting post-nuclear supernatants were further centrifuged at 100,000 × g for 60 min. The membrane fraction that floated just above 2 M sucrose cushion was collected and used as microsomal membrane fraction for the following in vitro reactions.
The microsomal membrane fraction was incubated with ATP and 100 ng of recombinant TBK1 at 37 °C for the indicated time periods (Fig. 2a,b). In the conditions where microsomal membrane fraction prepared from DMXAA-stimulated MEFs was used, the amount of phosphorylated STING at Ser365 increased gradually up to 60 min of incubation (Fig. 2b). The amount of STING was decreased after 120 min of incubation, suggesting the proteolytic degradation of STING ( Supplementary Fig. S2). When microsomal membrane fraction of unstimulated cells was used for the reaction, the phosphorylated STING disappeared (Fig. 2b). These results suggested that STING phosphorylation by TBK1 stringently requires the agonist (DMXAA) stimulation.
Titration of the amount of TBK1 in the reaction showed that 100 ng of recombinant TBK1 was sufficient to detect the phosphorylated STING (Fig. 2c). Thus, we routinely used 100 ng of recombinant TBK1 at the following experiments. The phosphorylation reaction of STING by TBK1 was temperature-sensitive ( Fig. 2d) and required ATP (Fig. 2e). In sum, a cell-free assay to phosphorylate endogenous STING at Ser365 was developed with the use of recombinant TBK1.
"ER-to-Golgi" traffic-and palmitoylation-dependent phosphorylation. The exocytic membrane traffic from the ER to the Golgi is required for the activation of the STING signalling pathway 15,22,23,30 . The treatment of cells with BFA (a fungal macrocyclic lactone that blocks "ER-to-Golgi" traffic 31 ) suppressed the translocation of STING to the Golgi 22 ( Supplementary Fig. S3), the emergence of p-TBK1/p-STING/p-IRF3 15 , and abolished the STING signalling 22 . Thus, we examined if the phosphorylation of STING in the cell-free assay also required the translocation of STING to the Golgi. Microsomal membrane fraction was prepared from cells treated with BFA and DMXAA. When the microsomal membrane fraction was subjected to the in vitro reaction, we found no phosphorylated STING (Fig. 3a). Addition of STING agonist i.e., DMXAA or 2′3′-cGAMP, to the in vitro reaction with the microsomal membrane fraction prepared from unstimulated cells did not promote the phosphorylation of STING ( Fig. 3b and Supplementary Fig. S4). These results suggested that (1) the binding of STING with its agonist alone was not sufficient to make STING the substrate for TBK1 and (2) STING translocation from the ER to the Golgi, which subsequently occurred after the binding of STING with its agonist, was required for the phosphorylation of STING.
STING undergoes palmitoylation at the Golgi and the inhibition of palmitoylation with 2-bromopalmitate (2-BP) suppressed the STING-dependent downstream signalling 22 . Microsomal membrane fraction was prepared from cells treated with 2-BP and DMXAA. When the microsomal membrane fraction was subjected to the in vitro reaction, we found no phosphorylated STING (Fig. 3c), consistently with the in vivo observation 22 . A role of cholesterol and sphingomyelin in STING activation. We previously suggested the role of sphingomyelin in STING activation by the experiment with D-ceramide-C6 22 , an agent to disrupt lipid domain containing sphingomyelin 32 . We sought to address directly the role of lipids constituting the lipid domain in STING activation with the cell-free reaction. Microsomal membrane fraction prepared from DMXAA-stimulated cells was digested with recombinant bacterial sphingomyelinase (bSMase), and then subjected to the in vitro reaction. As shown in Fig. 4a, we found no phosphorylated STING with bSMase-digested membranes. Pre-treatment of the microsomal membrane fraction with methyl-β-cyclodextrin, an agent to extract cholesterol from membranes, also suppressed the phosphorylation of STING by TBK1 in dose-dependent fashion (Fig. 4b).
These results suggested the role of sphingomyelin and cholesterol in STING activation.
Discussion
In the present study, we developed a cell-free assay in which endogenous STING is phosphorylated by recombinant TBK1. The assay showed the dependency of STING phosphorylation on the STING agonist, TBK1, "ER-to-Golgi" traffic, and palmitoylation, which mirrored the nature of STING phosphorylation in vivo. With Scientific Reports | (2021) 11:11996 | https://doi.org/10.1038/s41598-021-91562-z www.nature.com/scientificreports/ the assay, we provided an in vitro evidence that sphingomyelin and cholesterol in the Golgi membranes were involved in the STING activation. The assay should provide a useful biochemical complement to cell biological studies presently used to understand the molecular mechanism underlying the STING activation. Protein palmitoylation has been implicated in the clustering of a number of proteins into a specific lipid microdomain "raft" enriched in cholesterol and sphingomyelin 33 . Cholesterol is suggested to be enriched at the TGN, and cholesterol together with sphingomyelin generated by sphingomyelin synthase 1 are thought to form lipid rafts at the TGN 32 . Together with the present findings by the cell-free experiments, we hypothesize that palmitoylated STING becomes clustered at the TGN with the aid of the raft, which facilitates the recruitment of TBK1 10,11 for phosphorylation of STING 13 . www.nature.com/scientificreports/ Sphingomyelin at the TGN is demonstrated to be essential for transport carrier formation 32 . Cholesterol at the Golgi, the levels of which is regulated by oxysterol-binding protein, is essential for the Golgi localization of intra-Golgi v-SNAREs by ensuring proper COP-I vesicle transport 34 . Caveolin-1, a protein to form caveolae at the plasma membrane, accumulates in the Golgi when the cholesterol level in the Golgi is low 35 . The present study demonstrates another role of sphingomyelin and cholesterol at the Golgi/TGN in cellular signalling. The
Cell culture. MEFs 22 were cultured in DMEM supplemented with 10% fetal bovine serum and penicillin/ streptomycin/glutamine in a 5% CO 2 incubator.
Generation of TBK1-KO cells by CRISPR-Cas9.
MEFs that stably expressed Cas9 were established using lentivirus. HEK293T cells were co-transfected with lentiCas9-blast, psPAX2, and pMD2.G. The medium that contains the lentivirus was collected and filtered through 0.45 µm PVDF filter. WT MEFs were incubated with the medium for 6 h and then selected with blasticidin (5 µg/mL) for several days. Single-guide RNAs (sgRNA) were designed to target mouse TBK1 genomic loci. The sgRNA (sense: 5′-cac-cgCAT AAG CTT CCT TCG CCC AG-3′, antisense: 5′-aaacCTG GGC GAA GGA AGC TTA TGc-3′) was cloned into a lentiGuide-Puro vector. The lentiviral plasmids, psPAX2, and pMD2.G were then co-transfected into HEK293T cells and the lentivirus-containing medium was collected. Cas9-expressing MEFs were incubated with the medium for 6 h and selected with puromycin (2 µg/mL) for several days. Single colonies were then isolated and the expression of TBK1 in each clone was analyzed by western blot. The genomic sequences of the clones were confirmed by Sanger sequencing. Briefly, genomic DNA was first isolated, and PCR was performed to amplify the targeted regions using the following primers: 5′-TGCC GGA TCC CTG AGA GGG TAC AGG TTG CC-3′ (sense primer, BamHI site is underlined) and 5′-CACC GAA TTC CTA GCC TGA AAG GCC TGG TG-3′ (antisense primer, EcoRI site is underlined). The PCR products were subsequently cloned into a pMX vector for sequencing analysis. For comparison, the same regions in the parental line were also sequenced. The sequencing data for the control and the knockout cells were then analyzed using Benchling.
Immunocytochemistry. Cells were fixed with 4% paraformaldehyde in PBS at room temperature for 15 min and permeabilized with digitonin (50 µg/mL) in 3% BSA-PBS at room temperature for 5 min. Cells were then incubated with primary antibodies, then with secondary antibodies conjugated with Alexa fluorophore. In vitro assay of recombinant TBK1-dependent phosphorylation of STING. Cells were collected in an ice-cold buffer (50 mM Tris-HCl pH 7.4, 100 mM NaCl, 1 mM EGTA, 2 mM DTT, 200 mM sucrose) containing protease inhibitors (25955-11, nacalai tesque), and phosphatase inhibitors (8 mM NaF, 12 mM β-glycerophosphate, 1 mM Na 3 VO 4 , 1.2 mM Na 2 MoO 4 , 5 µM cantharidin, and 2 mM imidazole), homogenized with 2 passages through a 27-gauge needle after 6 passages through a 23-gauge needle, and centrifuged at 3,000 × g for 5 min at 4 °C. The post-nuclear supernatant was overlaid on 10 µL of 2 M sucrose and centrifuged at 100,000 × g for 1 h at 4 °C. The membrane fractions were resuspended in a buffer (50 mM Tris-HCl pH 7.4, 100 mM NaCl, 1 mM EGTA, 2 mM DTT, 200 mM sucrose, 20 mM MgCl 2 , protease inhibitors, and phosphatase inhibitors), and incubated with ATP (1 mM) and recombinant TBK1 (100 ng) at 37 °C for 30 min. The samples were then subjected to SDS-PAGE and phosphorylation of STING was detected by western blot.
Western blot. Proteins were separated in polyacrylamide gel and then transferred to polyvinylidene difluoride membranes (Millipore). These membranes were incubated with primary antibodies, followed by secondary antibodies conjugated to horseradish peroxidase. The proteins were visualized by enhanced chemiluminescence using a Fusion SOLO.7S.EDGE (Vilber-Lourmat). | 2021-06-09T06:18:29.836Z | 2021-06-07T00:00:00.000 | {
"year": 2021,
"sha1": "567fca7d07b874d9027d6725af48b1e869ffea1c",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-021-91562-z.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa561356607cb59b3c6bb01fc808e039338a88bb",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222104394 | pes2o/s2orc | v3-fos-license | Improving student’s mathematical problem solving skills through Quizizz
The Industrial Revolution 4.0 on Education in Indonesia affects school activities rapidly. Teachers are no longer educated in classrooms only, but they can also utilize technology to conduct distance learning. One interactive application that can be used as a form of question exercise is Quizizz. Therefore, this research aims to: first, examine the effectiveness of students' activities by using Quizizz on mathematical problem-solving skills; second, investigate the differences in the increase in students' ability to solve mathematical problems between the class with and without the Quizizz-aided learning method; third, describe the activities of students who used the Quizizz-aided drill learning method; and fourth, describe the students' responses in using Quizizz. The research was quasi-experiment with a pretest-posttest non-equivalent control group design. The participants in this study are 67 of 10th students divided into experimental and control classes. Data collection techniques used were tests for students' mathematical problem-solving skills and questionnaires. The instruments were validated using Pearson correlation, while the reliability was tested using Cronbach's Alpha. Then, the N-gain test was used to analyze the data. The results showed that there was an effect on students’ learning activities by using Quizizz on their problem-solving skills. Besides, there was a difference in the improvement of problem-solving skills between the class with and without Quizizz-aided. Furthermore, students’ activities in three meetings have increased. Moreover, students provided a positive response in learning using Quizizz. Thus, it can be concluded that Quizizz is effective in improving mathematical problem-solving skills.
Introduction
The skill of solving mathematical problems is one of the essential mathematical skills which it is need to be mastered by students who study Mathematics (Hendriana et al., 2017). Problem-solving is also a part required to complete Mathematics learning. It means that to enhance creativity, logic, criticism, and systematic thinking, students must master a series of problem-solving skills in Mathematics (NCTM, 2000). Therefore, solving mathematical problems is a significant part of the learning goals that need to be achieved (Surya et al., 2017). http://journals.ums.ac.id/index.php/jramathedu PISA 2018 results showed that Indonesia's PISA rating dropped compared to the results in 2015. Furthermore, in the mathematics category, Indonesia ranked 73 with an average score of 379 (Tohir, 2019). This condition was strengthened by the author's observations when conducting a preliminary study in one of Dukupuntang State High Schools. Mathematics learning still used traditional ways in which the teacher explained, provided examples of questions and exercises, then ended it by providing assignments. Based on the results of the interview, the Mathematics teacher gave items that referred to printed books from the government and Students' Worksheet from the publisher. These problems did not yet focus on the mathematical problem-solving procedure. Limited time was also an obstacle for the teacher to adjust students' answers overall.
Furthermore, the learning media used was PowerPoint, which has not yet optimized the function of an android smartphone as a learning resource. Students' mathematical problem-solving skills are still far from expectations. Some causes of background mathematical knowledge, in general, are the availability of learning resources, the learning process, the strength of teachers, and national education policy (Siswono et al., 2017). Students usually lack the motivation to participate in learning activities actively. Young students often seek to have some fun learning experiences in class, but Mathematics is a subject that students perceive as not having much "fun" (Zhao, 2018). Therefore, the teachers have an essential role in managing students' interaction with learning resources to achieve the desired results (Sugiyanti & Muhtarom, 2016). Students' involvement in solving mathematical problems can be optimized with the help of instructional media.
Efforts that are alleged to be able to develop mathematical problem-solving skills are using smartphone media as a learning resource. Some free applications that can be utilized include Edmodo, Socrative, Kahoot, Quizizz, Google Classroom, Flubaroo, Edpuzzle, and many others. Game-based learning is one of the advancements in technology (Wang& Tahir, 2020). One effort to introduce modern technology in the classroom is through gamification, aiming to increase students' satisfaction in mathematics (Bullón et al., 2018). According to Burguillo's research, digital game-based learning can effectively increase students' attention, interest, creativity, and community relations (Burguillo, 2010). Quizizz is a game-based educational app that brings multiplayer activities to classrooms and makes in-class exercises interactive and fun (Zhao, 2018). Unlike other educational apps, Quizizz has game characteristics such as avatars, themes, memes, and music entertainment in the learning process. There are two main modes in Quizizz, namely the instructor mode as the quiz maker, which can be accessed via Quizizz.com, and the player mode, in this case, is the students, which can be accessed via www.Quizizz.com/join (Saleh & Sulaiman, 2019). This application is also equipped with a timer to answer each question; if a student answers quickly, the student will get more points than students who answer lower (Juniarta et al., 2020). Quizizz also allows students to compete with each other and motivates them to study. Students take the quiz at the same time in class and see their live ranking on the leader board. Although they work on the questions at relatively the same time, it is difficult for students to cheat because the questions and answers are given randomly (Akhtar et al., 2019). Instructors can monitor the process and download the report when the quiz has finished evaluating the students' performance (Çeker & Özdaml, 2017). The use of this apps in the mathematic classroom helps stimulate students' interest and improve students' engagement.
Several studies related to the effectiveness of using Quizizz have been done. Quizizz effectively increases students' learning activities in accounting classes and has a positive effect on their learning experience (Zhao, 2018). Furthermore, Quizizz is an interactive quiz application that is more effective in increasing students' enthusiasm in learning because it http://journals.ums.ac.id/index.php/jramathedu replaces the old quiz way that only involves paper and pens (Wibawa, 2019). However, research using Quizizz that focuses on the effectiveness of Quizizz on mathematical problem-solving skills has never been revealed.
Therefore, this study aims to: first, examine the effect of students' activities in solving problems with the quiz-aided drill method on students' mathematical problem-solving skills; second, identify the difference in increasing the ability to solve mathematical problems between students who applied the Quizizz-aided drill learning method and students who did not apply the method; third, find out the students' activities who applied Quizizz-aided drill learning methods; and fourth, describe the students' responses in using Quizizz. This study will contribute to the teachers in improving students' mathematical problem solving abilities using educational apps as a learning media. Besides, Quizizz can also serve as an alternative application for distance learning, especially during the pandemic.
Research Methods
Quasi-experimental research utilizing a post-test only control group design was employed to check whether there was a cause-effect relationship among the variables, data, and how the data were compared (Creswell & Clark, 2011). A quantitative approach was used to compare the final grades of students who had different treatments.
The population in this study was all students of 10th grade at one of the state high schools in Dukupuntangin 2019-2020. The research sample was taken using a purposive sampling technique, which consisted of 67 participants. It was grouped into experimental and control classes. High school students were chosen as samples because they have entered the developmental stage of formal operations as a provision for problem-solving. The class could be matched based on the students that had the same background of knowledge. Both classes' normality was needed to do a parametric test aiming to prove that the classes were distributed normally (Pallant, 2010). The experimental class consisted of twenty-five females and eleven males, while the control class consisted of twenty-one females and ten males. The two classes were given different treatments. The experimental class learned using the Quizizz-assisted drill methods while the control class learned by the drill method without the help of Quizizz.
Data collection techniques used were tests to determine the problem-solving skill and questionnaires to find out students' responses in learning using drill methods assisted by Quizizz interactive media. The multiple-choice test used Quizizz media, but the students solve the problem along with a coherent solution. The examples of the problem-solving test are presented in Figure 1. Before using mathematical problem solving, a trial was carried out on the quality of the problem. It was done to find the validity and reliability of each item. The test validity was tested using the Pearson product-moment correlation. An instrument is valid if it can reveal the variables studied precisely or with high validity. The validity test was calculated by SPSS 22 software. The results of the validity test showed that all test items were valid with medium and high levels. Besides, the reliability of the tests measured by using SPSS 22 was also high, with the value of Cronbach's alpha of 0.665. Therefore, these questions can be used as questions in the current research. A self-learning pre-test in multiple-choice-question format embedded in the Quizizz media was administered to the students to determine their prior knowledge in problem-solving skills. Afterward, the students participated in a written post-test session. At the end of learning, the questionnaire was distributed to measure their response after learning the Quizizz media. Students' responses to the Quizizz questionnaire consisted of 20 statements. Each statement was measured using a Likert scale level with five answers, which are Strongly Agree (SA), Agree (A), Undecided (U), Disagree (D), Strongly Disagree (SD). The sample statements in the questionnaire are listed in Table 1. I find it more challenging to understand the quiz questions presented in the Quizizz application. 5.
The Quizizz application is easy to use.
Post-test data were processed using inferential statistics. Shapiro Wilk test was used for normality tests because the sample was less than 50, while the homogeneity test used the Levene test. The t-test is used to find out the average difference if the data is normally distributed. Otherwise, the Mann-Whitney test is used. The gain test was used to determine the results of increasing students' mathematical problem-solving skills between before and after learning. The processed gain data was obtained from the difference between the pretest and post-test scores of the experimental class.
Questionnaire data analysis was done on the percentage of student answers. The questionnaire ratings are shown in Table 2. After the data analysis process was carried out, the conclusion was taken of whether or not the use of Quizizz in learning was effective by looking at the criteria: first, the existence or absence of the effect of the activities of students who used Quizizz on their mathematical problem-solving skills; second, the differences of the mathematical problemsolving skills between the experimental and control classes; third, the activities of students who used Quizizz; and fourth, students' response in learning by using Quizizz media.
Result and Discussion
Teaching and learning activities in Mathematics at one of the state high schools in Dukupuntang were carried out in five meetings. The research data were obtained through evaluation activities, including formative tests conducted for the students in the form of multiple-choice tests and observation of the learning stages using the Quizizz mediaassisted method. The type of test given was multiple choices, but students ought to solve the problem and how to solve them. Students worked on problems with problem-solving steps with limited set time in Quizizz. One of the student's answers after being given a learning treatment using Quizizz is presented in Figure 2. The first step in using Quizizz for teachers is to register as a teacher by logging in to www.Quizizz.comusingemail or google account they used when registering. After having an account, ask questions for the next learning activities. During the lesson, the teacher open the practice questions on the Quizizz profile and then click "My Quizizz". After that, the teacher needs to double-click on the training material to be carried out. Once ready, the teacher shall select "live game" for direct training. The teacher can also download statistical data about students' performance in the form of an Excel spreadsheet. The first step in using Quizizz for students is to visit www.Quizizz.com. Then, students enter the code that has been shown by the teacher to join the Quizizz platform.
Furthermore, students enter the game rules. They can also see the results of the correct or wrong after the process is complete. The Quizizz display that has been implemented is shown in Figure3. The implementation of learning, in general, was when the students were asked to observe and learn the material that has been presented on Microsoft power points and games in the form of practical questions on Quizizz media. In face-to-face learning, students were reminded again by asking the material that was previously studied, namely the content of the Linear Equation System in Two Variables (LESTV). By presenting a daily problem related to LESTV to be understood and answered by students, the teachers communicated the learning objectives to be achieved, conveyed the material's scope to be studied, and motivated the students by explaining the importance of learning LESTV material. In the core activities, the teacher organized the students in study groups. Then, students were asked to observe the examples of problems that have been explained and then worked on the examples of problems exercise related to LESTV in groups. Each group consisted of two or three students. After creating the groups, group representatives entered Quizizz media to take part in Quizizz games. The teacher went around, guided, observed, assessed the students' skills, and helped each group that had difficulties in solving problems or challenges in taking Quizizz. The next stage was the examination of group results. The students' work appeared immediately after the estimated time was up. The last step was giving a group reward. Students received positive feedback and reinforcement in oral, written, and prize as an appreciation form of the students' efforts. The teacher and students concluded the material that had been learned. Learners discussed to make a summary of the content that had been submitted. The experimental and control class students worked on the post-test questions in writing in the last meeting. The http://journals.ums.ac.id/index.php/jramathedu following table presents the descriptive data of the pre-test and post-test results in the experimental class and control class. Table 3 shows that the experimental class experienced an average increase of 47.54, while the control class experienced an average increase of 30.62. It means that both classes had a significant increase in the average. Furthermore, the data were analyzed using statistical tests to find the effect of students' activities on mathematical problem-solving skills as well as differences in the skills of the experimental class and the control class.
The Effect of Quizizz Media Assisted Method on Students' Mathematical Problem-Solving Skills
Students' activities using Quizizz on students' mathematical problem-solving skills were determined by conducting an analysis test, a simple linear regression test and determination. The test results are in Table 4. Table 4 shows that the output results show Sig .value of 0.000; this value is smaller than α = 0.05. Hence, it can be concluded that there was a significant effect on students' activities in mathematics learning using Quizizz on the students' mathematical problem-solving skills. Table 5 shows that R Square was 0.526 (0.526 x 100% = 52.6%). It means that the skill to solve mathematical problems can be affected by the variable of the Quizizz mediaassisted method's implementation by 52.6%. In contrast, 47.4% is explained by other variables besides the variables used in the study.
Students' activities had been increased from the first meeting to the last meeting. Students who were initially unenthusiastic in group learning were eventually enthusiastic. Students also became more active in teaching and learning activities, not ashamed to ask http://journals.ums.ac.id/index.php/jramathedu questions, and helped each other. It is in line with Saleh and Sulaiman's research, which states that the use of Quizizz makes students more confident, actively involved in class, learning is more student-centered, so it is more effective (Saleh & Sulaiman,2019). Another study on similar topics found that participants play Quizizz in class; each student plays their smartphone or laptop; and the teacher must make sure that every one of them has joined the Quizizz by entering the code that has been shared (Mei et al., 2018).
The Difference in Increasing the Mathematical Problems Solving Skills
Before testing the difference in increasing the skills to solve mathematical problems between the two classes, a normality test was first performed to determine the data distribution. The normality test in this study was used to test the N-Gain data from each group, the experimental class, and the control class to determine whether the data obtained was normally distributed. The normality test used was the Shapiro-Wilk test because the sample used was less than 50 students (Sundayana, 2015). Table 6 shows the normality test result showing the Sig. by using Shapiro-Wilk was 0.002 for N-Gain experimental class and 0.364 for the N-Gain control class. It turned out that for the N-Gain experimental class, the Significant/ Sig. value is smaller than the probability value of Sig. or 0.05> 0.002. It means that the data was not normally distributed. As for the N-Gain control class, the Sig. is greater than the probability value of Sig. or 0.05 <0.364, which means the data was normally distributed.
The N-Gain experimental class data was not normally distributed. In contrast, the N-Gain control class data were normally distributed so that it was continued with nonparametric statistics using the Mann Whitney-U Test. The Mann Whitney-U test aimed to determine the real difference between the average of two populations with the same distribution through two independent samples taken from both populations.
Students' Activities Observation Sheet
The data on the observation sheet of students' activities shows that in conducting the learning process, researchers conducted exercises by those listed in the lesson plan. They started from the events in the preliminary activities, which included entering the class on time, opening the lesson by greetings, asking for readiness to follow the experience, conveying the material, delivering the objectives, learning the methods that would be used, motivating the students, and exploring the students' background knowledge. Activities in the core activity are explaining the content and giving examples of practical questions in the form of online games on Quizizz media. Afterward, students discussed and solved the problems on Quizizz media.
At the first meeting, the teacher discussed the prerequisite material and the characteristics of the three-variable linear equation system so that students could not be active. However, some students still dared to ask the teacher about what they did not understand yet. Students began to be enthusiastic about the learning process after being introduced to an online game application to practice math problems called Quizizz. However, there were still some students who did not understand how to use the media. Furthermore, in the 3rd meeting, students were already seen to be active in the learning process. They were accustomed to using the interactive media Quizizz and dared to ask questions about anything they had not yet understood. Students' activities in the learning process also determine the effectiveness of the teaching methods and learning resources. The graphs of student activity in the experimental class at the 1st meeting to the 3rd meeting of the experimental class can be seen in Figure 4. Figure 3 shows that the use of Quizizz application also allowed students to be actively involved in learning outside the classroom. It can be monitored with the number of Quizizz that can be completed, the number of badges awarded, the number of challenges that were tried, the number of resources downloaded, and the level students have achieved (Stewart & Chung, 2016). A survey conducted by Permana and Permatawati shows that students preferred to do live Quizizz in class rather than as homework assignments (Permana & Permatawati, 2020). Therefore, the teacher, as a facilitator, still plays an essential role in learning.
Students' Responses to Quizizz Interactive Media
A student response questionnaire was used to assess students' responses after using Quizizz media. The results of the analysis of the questionnaire data responses of students to the Quizizz media were presented in Figure 5. Figure 5 shows the results of student responses using questionnaire learning using drill methods assisted by Quizizz interactive media. Almost all students felt the benefits and usefulness of the learning process. Quizizz display, which is interesting, and something new for students, make the learning not dull. These aspects obtained the highest score in the questionnaire assessment of 86% and the overall statement score is 76.67% with proper interpretation. The improvement of problem-solving skills in this study is in line with the research results of Sulastri, et al. The application of the LAPS-Talk-Ball learning model integrated on Android-based interactive games can train students' complex problem-solving skills. Based on the results of the calculation of the gain test in the experimental class, the results obtained were0.707 (Sulastri et al., 2019). These results are in line with Meng et al. study that worked on questions using Quizizz applications limited by time. The ranking given during the quiz made students feel satisfied, more focused, competitive, and motivated in solving the questions (Meng et al., 2019). Some disadvantages of using Quizizz include the platform only available in English and online, so an internet connection is required to create and respond to the quiz (Junior, 2020). The use of the Quizizz-aided media drill method is more effective in improving students' learning outcomes than conventional methods. Quizizz is a measurable learning tool that can motivate and engage students with their content (Amornchewin, 2018). However, the results of this study are not in line with the research by Göksün and Gürsoy which the use of the Quizizz application in gamification activities used in research did not have a positive effect on academic achievement and student participation (Göksün & Gürsoy, 2019).
Conclusion
There was an effect on students' learning activities using Quizizz on their problemsolving skills. Furthermore, the improvement of problem-solving skills is difference between the classes with and without Quiziz-aided learning methods. In case of the students' activity, it increased significantly in the three meetings conducted. Meanwhile, the students' responses to the use of Quizizz are in good criteria. Hence, the Quizizz media is effectively used in learning to improve mathematical problem-solving skills. Quizizz is very useful as a learning media in dealing with changes from 3.0 industrial revolution, http://journals.ums.ac.id/index.php/jramathedu where mathematics learning is still dominated hands-on media, to 4.0 era where more emphasize on the use of digital media such as software applications. Therefore, the blended-leaning strategy, which combines face-to-face and online learning, will become a necessity in this era. Further research will investigate the perceptions and challenges of the teachers in applying the Quizziz in learning. Besides, the practicality in using Quizziz as learning apps is also important to discover. | 2020-10-02T12:41:15.149Z | 2020-07-17T00:00:00.000 | {
"year": 2020,
"sha1": "5fa7f1d7fdd787f1cac6033e93afad27d81b363e",
"oa_license": "CCBY",
"oa_url": "http://journals.ums.ac.id/index.php/jramathedu/article/download/10696/5799",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "2432113f1636b8d1916d0e850ecf2c3a41accbfd",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
10464560 | pes2o/s2orc | v3-fos-license | Protected control packets to prevent denial of services attacks in IEEE 802.11 wireless networks
Denial-of-service (DoS) attack exploits inherent limitation of resources in wireless networks in attempt to overwhelm and exhaust their finite capacity. In wireless networks, clear-text form of control packets (CP) exhibits a security flaw that can be exploited by attackers to render the networks incapable of providing normal services. While these attacks are quite damaging in terms of consuming available processing and bandwidth resources, they are easy to conduct against the wireless networks. In this study, we propose two distinct models to prevent wireless DoS and replay attacks based on trust in CP for IEEE 802.11 wireless networks. The first model is based on original HMAC-SHA1 algorithm and the second one is based on a proposed modified HMAC-SHA1 (M-hmac) algorithm. Both models are implemented and the results are obtained and evaluated based on a number of metrics. The results show that the two models successfully prevent both wireless DoS and replay attacks. In addition, the newly proposed M-hmac algorithm provides better network performance in term of the metrics.
Introduction
There are various types of security protocols in wireless networks that protect data packets during transmission such as WEP, WPA, and 802.11i.Despite providing different levels of security, these security protocols are not able to protect control packets (CP).Consequently, the CP including RTS, CTS, ACK, CF-End, and CF-End-ACK are transmitted in clear-text form.Wireless CPs contain a 2-byte duration field with 32767 μs maximum value used to set the network allocation vector (NAV).The field shows the time that the channel will be kept busy by the originator node.During this time, other nodes are not allowed to transmit and must wait until NAV reaches zero.Because CP are not protected, it is possible for the attackers to generate false CP with large duration values in order to trigger denial-of-service (DoS) attacks.The intention is to overload the target wireless networks and crash the systems by quickly consuming the networks' capacities.
The attackers may continuously transmit false CP to the target wireless network within short intervals.The target network has to accept the false CP because the 802.11 standard does not provide any mechanism to distinguish the true CPs from the false [1][2][3].Reception of large number of false CP within a short time will quickly consume the network resources until the access point (AP) becomes overloaded and crashed.When this happens, the AP has to disconnect all the users who are at the time connected to the network as it comes to a complete halt and shutdown.While these attacks are easy to implement, they have highly negative impact so that completely shut down the wireless networks and significantly degrade the quality of services.
In this study, two distinct models are proposed to distinguish the trust-based CP belonging to the authorized users from the false CP belonging to the attackers.The models enable the recipients to investigate the CP and analyze information contained in their security elements in order to accept trusted CP and discard false CP.The proposed models support all types of CP and are able to prevent both DoS attacks and replay attacks with sufficient level of security.
The remainder of this article is organized as follows.Related studies are discussed in Section 2. Section 3 presents the structures of the two proposed models.Section 4 details out the experimental setup to implement the models.Section 5 presents the experimental results and discussion.Finally, Section 6 concludes the article.
Related studies
In order to address the wireless DoS attacks, different schemes have been proposed.The schemes can be classified into three general methods, namely, cryptographic, detection, and NAV validation methods.Table 1 summarizes the strategies adopted by these schemes and highlights the corresponding weaknesses.
Despite all the benefits getting from the proposed schemes in Table 1, there are still a number of notable weak points.Each scheme only addresses a specific issue from the entire security problem, therefore, requires complementary solutions.The schemes are still vulnerable to replay attacks.In addition, ignoring protection of contention-free CP by these schemes keeps the DoS attacks a threat against wireless networks.These drawbacks lead to a need to provide a security model so that while it is able to prevent the DoS and replay attacks by protecting all types of CP, it does not impose significant security cost to the wireless networks compared to the 802.11 standard model which are the main focus of this study.
Overall structure of the proposed models
In order to prevent wireless DoS and replay attacks, we propose two distinct models that provide secure control packets (SCPs).The first model employs original HMAC-SHA1 as the underlying authentication algorithm, which we refer to as O-hmac.Because it is based on O-hmac, we named the first proposed model as SCP-O.For the second proposed model, there are two steps involved.First, we propose some modifications over the HMAC-SHA1 and we refer to the modified version as M-hmac.
The main purpose of the M-hmac is to reduce the security cost and communication overheads of O-hmac while at the same time optimizing its performance.Mhmac is then used as the underlying authentication algorithm of the second proposed model, which is called SCP-M.
Implementation of the SCP-O involves three main parts which are proposed key derivation algorithm (KDA), creating two new security elements, and proposed replay-preventing mechanism.In contrast, implementation of the SCP-M in addition to these three parts requires one additional part to develop the proposed Mhmac.These four parts are presented in Figure 1 while the overall structure and implementation process of each part are explained in the following sections.
Proposed KDA
The implementation of both SCP-M and SCP-O models requires a key, while protection of the key that is shared between the communication parties is an important consideration.We assume the users in wireless network to be equipped with a shared key K of length k bits.Nonetheless, applying the main shared key directly through the communications is insecure [4].Based on this argument, we propose a new KDA used during the SCP transmissions.The proposed KDA hashes the main shared key with two rationales behind the design.First, the length of the hashed password is longer than length of the original key, thus the hashed key is harder to compromise.If the length of original key is k bits, then there are 2 k possible values that the attacker has to expose.However, for a hashed key with n bits where n > k, the number of attempts for the attacker to break the key is 2 n , where 2 n > 2 k .This means that the attacker will be dealing with a more difficult task to reveal the value of the hashed key.
Second, hash functions are one-way algorithms.Therefore, finding the original key from the hash results demands high effort which can make it practically infeasible [4].
On the other hand, although hashing a key can be useful in strengthening security [5] as compared to using a shorter plaintext key, it is still possible for the attacker to compromise the key using the rainbow table attacks [6].To avoid this issue, we incorporate two additional cryptographic salts to the original shared key before the key is being hashed.A cryptographic salt is additional information that is added to the main key before hashing takes place [7] which makes the key disclosure attacks (e.g., rainbow table attack) slower and more difficult [8].
In this study, we apply two cryptographic salts.The first cryptographic salt is referred to as cryptsalt1 using the network name (SSID) as the value.The second cryptographic salt is referred to as cryptsalt2 using the MAC address of AP (BSSID) as the value.The cryptsalt1 is appended to the original shared key to generate a cryptographic-salted-key as follow.
Cryptographic − salted − key = Original shared key cryptsalt1 where is concatenation Once the salted cryptographic key is generated, the following processes are performed to generate the final key (FK) for both SCP-O and SCP-M models to be used during subsequent SCPs transmissions instead of the original key.
Generating FK for the SCP-O model
To generate the FK for the SCP-O model which is called FK-SCP-O, we consider cryptsalt2 as the input to the O-hmac algorithm with the cryptographic-salted-key as the key to generate FK-SCP-O key with 160 bits length as follow.
Generating FK for the SCP-M model
The process of the key generation for the SCP-M model is different from key generation for the SCP-O model.To generate the FK for the SCP-M model which is called FK-SCP-M, we employ the first round of the Mhmac algorithm.Nevertheless, we exclude the second round of M-hmac in the key derivation because of the fact that a longer key poses harder constraints to decode the key value by the attackers.The length of the output in the first round of M-hmac is longer than its second round, which leads us to use only the first round of Mhmac.
We enter cryptsalt2 as the input to the first round of M-hmac algorithm with the cryptographic-salted-key as the key to generate FK-SCP-M key with a length of 160 bits as follow.
The overall process of the proposed KDAs is described in Figure 2.
New security fields: TS and AF
To ensure security, imposing extra overheads to the networks is unavoidable [9].However, because of the fact that wireless networks have limited resources [10], it is required to keep the overheads as small as possible.In this study, we define two new security fields, TS and AF, to construct the SCP as follows.
• TS field: This field carries creation time of the CP and is used to verify their freshness.The TS field is 4 bytes and is appended at the end of all CP before the FCS field.• AF field: In order to verify integrity of the received CP and identify the validity of their originator, a new field called the authenticator field (AF) is created.This field is attached following the TS field in all types of CP to carry output results of the M-hmac and Ohmac.The length of the AF is 20 bytes in the SCP-O model as compared to 12 bytes in the SCP-M model.
Proposed replay-preventing mechanism
We design a replay attack protection mechanism based on threshold timeout window in order to verify freshness of the received CP.The replay attack protection mechanism is accomplished by tagging each outgoing CP with an identifier, which is the creation time of that particular CP.We formalize and determine five distinct threshold timeout windows, which are called TO RTS , TO CTS , TO ACK , TO CF-End , and TO CF-End-ACK related to the five SCP.They determine maximum acceptable age of the corresponding SCP as presented in Figure 3.
In order to determine these five threshold timeout windows, a number of IEEE 802.11 standard notations [11] are used as presented in Table 2.
Now we define T SCP , as the required time to transmit the entire SCP including their physical header as follow.
In Equation 1, L SCP is the length of the SCP and T SCP is considered for all types of CP as T RTS , T CTS , T ACK , T CF-End , and T CF-End-ACK , which represent the required time for transmission of the secure RTS, CTS, ACK, CF-End, and CF-End-ACK packets, respectively.The followings are the timeout formulization and the corresponding values in both SCP-M and SCP-O models.
Timeout calculation in the SCP-M model
Upon receiving a CP, the recipient must first verify its freshness using the corresponding timeout value and the following equation.
In Equation 2, CCT is the current clock time announced by the secure timing synchronization function in the beacon frames [12].If the above condition is met, the received SCP is deemed fresh and is old if otherwise.Old CP will be discarded immediately by the recipient to prevent replay attacks.
Proposed M-hmac
The design of M-hmac is motivated by resource constraints in wireless networks and to optimize the network performance using the SCP-M model as compared to the SCP-O model.National Institute of Standards and Technology (NIST) considers an optional transformation function called finalization function (g) which can be used to derive the desired output from the original output of hash functions.This can provide more options to achieve the best desired hash results out of hash functions.Therefore, we provide the required modifications over the finalization function which can offer the following advantages.
• NIST has agreed on the presence of this function [13].Therefore, we utilize this advantage to optimize the performance of the SCP-M model without tempering the basic structure of the HMAC-SHA1 algorithm.This can avoid unknown security flaws in the SCP-M model.
• NIST does not define any specific algorithm for this function.This feature can eliminate any limitation to provide the desired transformations in effort to enhance the efficiency of the SCP-M model.
• Since the compression function of the M-hmac is similar to HMAC-SHA1, its security analysis can be accomplished as the HMAC-SHA1 which is already well known.
The proposed M-hmac applies the same compression function as the HMAC-SHA1 algorithm with two new additional rounds to the algorithm in the finalization function.The first round of M-hmac includes a new function that involves the stream cipher encryption algorithm while the second round of M-hmac constructs the value of the AF field.
First round of M-hmac
In this round, there are three fields that are used as inputs to the compression function along with the FK-SCP-M as the key, namely, the TS, duration, and receiver address.The output result is 160-bit final chaining variable (FCV) that moves into the new stream cipher function (SCF).The SCF first divides the 160-bit FCV into two halves as left and right halves with the size of 80-bit each.Then, the SCF breaks both the left and the right halves separately into chunks of the same length of 4-bit each.
The SCF considers the right half chunks as keystream and the left half chunks as input message.The encryption process in the SCF is accomplished by combining each left half chunk with the corresponding right half chunk using XOR operation to generate a new 160-bit stream.The result is called transformed chaining variable (TCV) which acts as the input to the second round of M-hmac.
Second round of M-hmac
This round takes the 160-bit TCV and divides it into 5 words of 32-bit each, which are TCV 0 , TCV 1 , TCV 2 , TCV 3 , and TCV 4 .Next, XOR operation is applied over all the words in order to generate a 96-bit output in hex format (H).We call this 96-bit output as value of AF field (VAF), which is placed in the AF field of the SCP.The process of extracting the 96-bit VAF out of the 160-bit TCV is accomplished as follows.
where is concatination
The overall process of the both rounds in M-hmac is presented in Figure 4.
Defense process by the SCP-O and SCP-M models
Basically, defense process by the SCP-M and SCP-O models comprises two main phases: the generation phase and the verification phase as follows.
Generation phase
This phase is carried out by the sender station to generate values of the TS and AF security fields.The sender station determines the creation time of the outgoing CP to be placed into the TS field.Then, the sender must generate value of the AF field.Therefore, the TS, duration, and receiver address are considered as inputs to the M-hmac/O-hmac along with the FK-SCP-M/FK-SCP-O keys.After the calculation of the sender VAF (S-VAF), this value is tagged into the AF field.Once this is done, the CP will be transmitted to the intended receiver.
Verification phase
This phase is carried out by the receiver station to verify the freshness and validity of the received CP.Upon receiving the CP, the receiver station will immediately discard the packets that do not have any TS or AF security elements based on the ground of wrong format.Otherwise, the receiver must first check the freshness of the received CP.Doing freshness check as the first line of defense can significantly enhance the speed and efficiency of the proposed models.By this way, the old CP are discarded immediately by the receiver without even wasting time to involve the authentication algorithm.
For the freshness check, the receiver station adheres to Equation 2 and subtracts the current clock time from the value in the TS field of the received CP.This is to check whether the result is less than or equal to the corresponding timeout value.If the condition is met, the receiver considers the CP as fresh.At this point, if the CP is CF-End or CF-End-ACK, the receiver must verify their duration field.If the duration field of these CP is not zero, the packet is considered as an invalid packet because of its wrong format and will be discarded.Otherwise, the recipient moves to the next step to validate the authenticity of the sender and integrity of the CP.
The receiver goes through the same process as the sender to recalculate VAF, which is called Receiver VAF (R-VAF).The receiver compares this calculated R-VAF with the received S-VAF.If there is a match between these two values, the packet is accepted and its corresponding function is implemented.Otherwise the packet is discarded by the receiver.The general implementation process of the SCP-M model is presented in Figure 5.
Security analysis
In order to start DoS attacks against the wireless networks protected by the SCP-M and SCP-O models, the attackers first have to generate valid forgery CP.Passing the authentication and integrity check requires generating valid S-VAF for the forgery CP without knowing value of the FKs.To achieve this, the attackers need to carry out some hash-based attacks against the SCP-M and SCP-O models.Therefore, in order to analyze security of the SCP-M and SCP-O models we describe the complexity of the common hash-based attacks against these models while the following notations are used.
• Attack complexity: The complexity of an algorithm is defined as amount of required efforts or number of operations required to break the algorithm.Therefore, unit measurement of the attack complexity is number of attempts [14].In this regard, the security level of an algorithm is defined as the amount of work required to break the algorithm [15].The unit measurement of the security level is bits.Thus, if 2 N attempt is the attack complexity, the security level of the algorithm will be N bits.
• Length of the FK is denoted by k.
• Length of S-VAF is denoted by m.
• Given information: When an attacker attempts to perform his malicious intents, he will use all available public known information to make the attacks process faster and easier.Therefore, the attacker knows the description of the algorithms used in the system and format of the messages as well.The attacker also can observe sequence of messages with their corresponding S-VAF.However, what the attacker does not know is value of the FK which is one of the main inputs to generate S-VAF.The given information known by the attackers represented as follows.
The description of the common hash-based attacks against the SCP-M and SCP-O models along with their complexity is presented as follows.
MAC guessing attack
MAC guessing attack [16] is when the attackers try to find a valid S-VAF for their forgery CP by guessing it.Since the attackers do not know the value of the key (i.e., FK-SCP-M and FK-SCP-O), they have to guess all possible values for the output result.The attacker randomly generates a tag for one fixed message of his choice and transmits it to the receiver.If by any chance, the tag is valid the receiver accepts the message and the attack is successful.However, if the tag is not verified as a valid tag by the recipient, the attacker has to repeat the process and generate another tag for his fixed message until finding a valid tag.According to [16], complexity of this attack is directly proportional to the length of tag and the applied key.The attack complexity is described as follows.
Problem: MAC guessing attack using the given information Goal: Attacker tries to find a valid z for his own x such that x ∉ {x 1 , x 2 ,...,x q } Attack complexity: MIN(2 m-1 , 2 k-1 ) attempts Hence, complexity of the best attack is about 2 159 and 2 95 for the SCP-O and SCP-M models, respectively.
Key recovery attack
In order to have full access over the wireless channel, the attacker attempts to figure out the value of the key applied through the communications [17].By exposing value of the key, the attacker can make any valid S-VAF.In order to implement this attack, the attacker determines a fixed input message along with a randomly selected value for the secret key.He sends these values to the same MAC algorithm applied in the target system and generates the tag.Then, the attacker appends the tag to his fixed message to transmit to the receiver in the target system.If the recipient accepts the message, it means that the attacker has the correct secret key.Otherwise, he has to try another secret key with his fixed message and this process is repeated by the attacker until finding the correct value for the key.Complexity of this attack against the SCP-M and SCP-O models is considered as follows.
Problem: Key recovery attack using the given information Goal: Attacker tries to find a valid FK' such that FK' = FK Attack complexity: 2 k-1 attempts Since the length of the keys for the both SCP-M and SCP-O models is the same, to figure out the correct value of the FK-SCP-M and FK-SCP-O, the attacker has 2 159 possible values to examine.
Forgery attack
In order to implement this attack, the attacker tries to determine a valid tag for his new forgery message, which has not already been transmitted to the network.Forgery attacks in terms of second preimage [17] or birthday forgery [18] are carried out by the attackers to find valid tags.The complexity of these attacks is measured as follows.
Problem: Second preimage forgery attack using the given information
Goal: Attacker by having (x , z t ) where: 1 ≤ t ≤ q tries to make his message x such that z = z t Attack complexity: 2 m-1 attempts The work factor to make a new SCP matching with a given S-VAF using the second preimage forgery method is 2 m-1 .Thus, the complexity of the second preimage attack is about 2 159 and 2 95 for the SCP-O and SCP-M models, respectively.
Problem: Birthday forgery attack using the given information Goal: Attacker by having (x t , y t ) where: 1 ≤ t ≤ q, tries to make his message x such that y = y t Attack complexity: 2 m/2 attempts The workload to carry out forgery attack using the birthday method is 2 m/2 .Thus, complexity of the birthday forgery attack is about 2 80 and 2 48 for the SCP-O and SCP-M models, respectively.
Experimental design
We have developed a simulation environment using omnetpp to implement the SCP-M and SCP-O models and evaluate their performance.The simulation environment, experiments methodology, and performance metrics that are used to evaluate the models are further described in the following sections.
Simulation environment
The simulation environment consists of three types of entities, namely attacker station, authorized wireless stations, and authorized AP.In this environment, there are two types of wireless networks, namely, the protected and unprotected.The unprotected wireless network adheres to the current MAC layer of IEEE 802.11 standard, which is known to be vulnerable to DoS attacks.In contrast, the protected wireless network adheres to the proposed models as the secure MAC layers.Two new MAC layers are created and written in C++ to include the required codes for each proposed model separately.
In general, each wireless host (80211-Host) has a wireless NIC (80211-NIC) that includes the PHY and MAC layers.The unprotected wireless network utilizes the current MAC layer (802.11-Current-MAClayer).In contrast, the protected networks utilize two secure MAC layers as the proposed models, namely, the 802.11-SCP-M and 802.11-SCP-OMAC layers.Figure 6 shows these secure and insecure MAC layers.
The structure of the unprotected wireless network and the two protected wireless networks in the simulation environment are developed exactly the same to provide fair conditions during evaluation of the models (Figure 7).The two wireless in the unprotected network are wireless station1 (WS1) and wireless station2 (WS2) connected to the AP.All the WS1, WS2, and AP follow the 802.11 standard model.
In the protected wireless network using the SCP-M, the two wireless stations are protected wireless station1 (PWS1) and protected wireless station2 (PWS2) connected to the protected AP (PAP).All the PWS1, PWS2, and PAP follow the SCP-M model.
In the protected wireless network using the SCP-O, the two wireless stations are protected wireless station1 (PWS1) and protected wireless station2 (PWS2) connected to the PAP.All the PWS1, PWS2, and PAP follow the SCP-O model.
In the above environment, the sender transmits 1000 bytes TCP packets with 0.5 s intervals and 56 bytes ICMP packets with 1 s intervals to the AP which in turn transmits them to the receiver.The attacker station is configured to trigger different types of DoS attacks against the three networks using different types of SCP.For all types of the attacks, we consider the attack cycle as 100 false SCP per second (0.01 s attack rate), while the duration field of all these false packets is set to the maximum possible value which is 32767 μs.The reason is a larger duration filed can keep the NAV reserved for a longer time.This consequently denies the channel from the authorized users for a longer time which can extend impact of the attacks as the attackers do in real world.
Experiments methodology
An extensive set of experiments are carried out to implement, evaluate, and compare the performance of the protected wireless networks using the proposed SCP-M and SCP-O models and the unprotected wireless network using the current 802.11 model.
According to the IEEE 802.11, there are two types of communication modes in wireless networks, namely, the enabled and disabled RTS/CTS handshakes.Since the proposed models directly deal with the wireless CP, enabling or disabling the handshake can provide significant differences in the network performance.This motivates us to perform the experiments under two general scenarios as enabled and disabled handshakes.Each of these scenarios includes two more sub-scenarios to evaluate the models performance under DoS attacks and under normal conditions without any attacks.Throughout the experiments, the total implementation time is considered 90 s.In the scenarios under DoS attack, the 90 s is divided in three 30-s time frames to represent duration attack (B.attack), during attack (D.attack), and after attack (A.attack).For the scenarios under normal conditions with no attacks, these three 30-s time frames are denoted as 0-30, 30-60, and 60-90 s to have identical time frames for comparisons.
Performance metrics
The following network metrics are investigated to evaluate performance of the models.
• Throughput: It is computed by dividing the amount of data received by the destination node with the time taken to arrive at this node.
• Delay: It is the average amount of time taken by a packet to travel from the originating node until it is successfully received at the destination node.
• Performance improvement ratio: It is to show the overall percentage of the throughput improvement in the wireless networks using the SCP-M model as compared to the SCP-O model under DoS attacks and under normal conditions.
• Security cost: It is percentage of the network performance degradation caused by the proposed models as compared to the standard model under normal conditions with no attacks.
• Packet lost ratio (PLR): It is measured as the number of dropped ICMP packets divided by the total number of sent ICMP packets during the attacks.
• Round trip response time (RTT): It is the time required for ICMP packets to travel from the source to the destination and back again.
Results and analysis
In this section, the results from implementation of the proposed models are presented.We evaluate the performance of the SCP-M and SCP-O models to determine and compare their capabilities.These comparisons are accomplished to assist in selecting the more appropriate model for the wireless networks in order to prevent DoS attacks.By comparing the results and considering the required security level along with the system limitations, capabilities, and requirements, the more efficient model can be applied.Performance of the proposed models is evaluated in the following sections.
Effect of disabled handshake
This scenario evaluates and compares performance of the protected wireless networks using the proposed SCP-M and SCP-O models and unprotected wireless network using the current IEEE 802.11 model when the RTS/CTS handshake is not involved in the communications.
Performance analysis under DoS attacks
Figures 8 and 9 show the results from the delay and RTT along with the throughput and PLR as presented in Tables 3 and 4, respectively.
According to the above results, we can see that both SCP-M and SCP-O proposed models, unlike the current Delay (s) 802.11 model, have successfully prevented wireless DoS attacks.Before the attacker begins the attacks, normal traffics are observed in the both protected and unprotected networks.However, immediately after triggering the attacks, the unprotected network is with large number of false CP that consume the available bandwidth.As the result, the network is no longer capable to handle valid requests made from any authorized user.Consequently, the communication between the stations is broken and the network throughput quickly drops to null.The remarkable difference between the null throughput of the current model with the normal throughput of the proposed models during the attacks, as well as the 36% PLR of the standard model as compared to the null PLR of the proposed models show that SCP-M and SCP-O models have successfully prevented the attacks.
After the attacks, we observed a high peak in amount of delay and RTT of the standard model for a short time.The cause of this high peak is related to some already scheduled packets.Before starting the attacks, some packets were in the queue waiting to be transmitted.When the attack starts, the AP is overloaded by the false CP thus dropping all the upcoming new packets.The packets that were in the queue prior to the attack still remain there because the AP is not able to do any more services.When the AP was back to its normal conditions and regained ability to provide services for its users, it began to transmit the queued packets.Therefore, these packets experienced significant amount of delay.After transmitting all these queued packets, the new upcoming packets are transmitted normally.In contrast, because the proposed models are not affected by the attacks, there is no peak in their amount of delay and RTT after the attacks.
Based on the results, we observed 100% performance improvement for the protected wireless networks by adopting the proposed models as compared to the standard model under different types of attacks.The results also show that using the SCP-M in the network will improve the network performance under the DoS attacks, up to 7% compared to the SCP-O model.
Performance analysis under no attacks
The results of the delay, RTT, and throughput under normal conditions are presented in Figures 10, 11, and Table 5 respectively.Based on the above results, we that when the wireless network is under normal condition without the presence of the false CP, performance of the proposed models is very close to the standard model.Considering the fact that security comes in price of extra overheads, the small difference between performance of the SCP-M and SCP-O models with the standard model proves that adopting these models does not impose remarkable computational overhead to the wireless networks.The security cost of 9% in SCP-M and 12% in SCP-O can be considered negligible as compared to their outstanding achievements of 100% performance improvement under different types of DoS attacks.The results also showed that adopting the SCP-M model provides 2% performance improvement as compared to the SCP-O model when the handshake is disabled during the normal transmissions.
Effect of enabled handshake
The scenario of enabled handshake evaluates and compares performance of the protected wireless networks using the proposed SCP-M and SCP-O models and unprotected wireless network using the current IEEE 802.11 model when the RTS/CTS handshake is used during the communications between the authorized stations.
Performance analysis under DoS attacks
The results of delay and throughput are presented in Figure 12 and Table 6, respectively.
The above results prove the fact that enabling the RTS/CTS handshake causes significant influence over the entire network performance regardless if the network is protected or not.In this case, the amount of delay that packets experience is higher than when the handshake is disabled.The reason for this increase is because the handshake process needs extra time to be accomplished.The originator node must wait until the handshake is completed before the actual data transmission takes place.This sending RTS and waiting for CTS before any data transmission is time consuming while at the same time data are waiting in the transmitter buffer to be sent.In contrast, with a disabled handshake, data are sent immediately as soon as they are ready, which in turn decreases the overall amount of delay.
These results are also consistent with the previous results, where the proposed models unlike the standard model have successfully prevented the DoS attacks.Comparing the null throughput of the standard model with the normal throughput of the proposed models during the DoS attacks indicate 100% improvement in the wireless network performance.In addition, adopting the SCP-M model in the wireless networks has resulted in better performance as compared to the SCP-O model in terms of higher throughput, less delay, and RTT.The SCP-M model managed to enhance the overall system performance by up to 11% during the attacks, which is 4% higher than when the handshake is disabled.
Performance analysis under no attacks
The results of delay and throughput are presented in Figure 13 and Table 7, respectively.From the above results, we observe that under normal conditions, the performance of the model is close to the standard model.The security cost of the SCP-M and SCP-O models are about 13 and 20%, under normal conditions.these results confirm our previous results as better performance of the SCP-M model in terms of higher throughput, and less delay, and RTT compared to the SCP-O model and thereby prove advantage of the proposed M-hmac over the original O-hmac.Adopting the SCP-M model in wireless network can improve the overall system performance by up to 6% compared to the SCP-O model which is 3% higher than when the RTS/CTS handshake is disabled.
The reason of higher security improvement of the SCP-M model over the SCP-O model when the handshake is enabled compared to a disabled handshake is related to the nature of the handshake.When the handshake is used during the communications, using the SCP-O model imposes extra 8 bytes to the network by each RTS, CTS, and ACK packets.In this case, the total overhead of each successful data transmission using the SCP-O model is about 24 bytes more than the SCP-M model.In contrast, when the handshake is disabled the extra 8 bytes is imposed to the network by only ACK packet which makes the difference between the imposed overhead by the SCP-M model less than the SCP-O model.
Functional analysis of the proposed models
According to the SCP-M and SCP-O models, upon receiving a wireless CP, the recipient must first verify its freshness and then its validity to accept or reject the CP.Thus, the functionality of the models varies depending on value of the TS and AF.In this section, functionality of the proposed models is investigated upon receiving the CP either belong to the attacker or the authorized users.The results are presented for the SCP-M model as follows.attempts to verify freshness of the received RTS.Therefore, as the figure shows, subtraction of the current clock time and TS is calculated by the replay-preventing mechanism.The result of the calculation is less than the TO RTS , therefore although the RTS packet is forgery, it can pass the freshness check.
Influence of arriving forgery CP
Then the recipient attempts to verify validity of the AF security field.At this step, since the attacker does not know value of the key, he is not able to set a valid value for the AF field.Thus, the AF field of the received RTS packet is not valid.Consequently, although the packet is fresh, it is discarded as an invalid CP and thwarts the attack.
As it is observed, since the first forgery CP considered fresh, for the first forgery packet both TS and AF verifications are carried out.However after that, since the forgery packet is getting old over the time, the second forgery packet cannot pass the freshness check anymore and without even checking the AF the second and all the subsequent forgery CP are discarded.The live capturing of this state is shown in Figure 15.
As the above figure shows, the secure MAC layer of the SCP-M model (80211-SCP-M) determines the packet as an RTS packet with 36 bytes length, 30 s TS, and 32767 μs duration field received by the AP.This packet is regarded as a old CP through the freshness check and is discarded without even checking the AF field.This significantly speeds up the overall process of the proposed models and makes them more efficient to apply in the wireless networks.During the entire attacks time, the results show that only the first forgery CP at the beginning of the attack can pass the freshness check while it is discarded by the proposed models because of the wrong AF field.All the subsequent forgery CP other than the first one are discarded as old CP by the replay preventing mechanism.
Influence of arriving authorized CP
When the destination node receives a valid CP, since the legitimate originator node has the correct value of the key, it is able to make a valid AF value in addition to a fresh value for the TS field.The process of passing both freshness and authentication verifications for legal CP is shown in Figure 16.
As the above figure shows, after determining the specification of the received CP by the 80211-SCP-M MAC models do not discard any valid CP which proves that under normal conditions the models correctly follow the IEEE 802.11 standard.Based on the obtained results, under normal conditions without presence of the attackers, both freshness and authenticity of the authorized CP must be verified which can consume system resources.To avoid this issue and to get the best performance of the wireless networks, enabling the SCP-M and SCP-O models after detection of the DoS attacks can reduce the overall overheads and maintain maximum network throughput.In this case, since the proposed models are only used when the DoS attacks have occurred, they are light on resource usages which make the models significantly efficient for the limited resources wireless networks.
Conclusion
The unprotected CP are capable to make the entire wireless networks vulnerable to DoS attacks and result in serious damages in the critical areas where constant availability of the networks, resources, and services is a prime priority.In this study, we proposed two new distinct models, SCP-M and SCP-O, to prevent wireless DoS and replay attacks by protecting the CP.The proposed SCP-M and SCP-O models were evaluated through extensive set of scenarios and experiments.The obtained results proved that while the standard model failed against the attacks, both the proposed models have successfully prevented the wireless DoS and replay attacks.The results also showed that the best performance of the both models is obtained when the RTS/ CTS handshake is disabled.In this case, the performance of the SCP-M and SCP-O models is considerably close to the standard 802.11model with a negligible security cost.Results of comparison between SCP-M and SCP-O models also proved that by adopting the SCP-M in wireless networks, the network performance is enhanced.This proves that the proposed M-hmac as underlying authentication algorithm of the SCP-M model provides higher efficiency to access to the system resources and services while maintaining a sufficient level of security.
Figure 1
Figure 1 Corresponding parts in the SCP-O and SCP-M models.
Figure 2
Figure 2 Proposed KDA for the SCP-O and SCP-M models.
Figure 3
Figure 3 Conceptual description of the five threshold timeouts and their effects.
FCVFigure 4
Figure 4 Structure of the proposed M-hmac.
Figure 5
Figure 5 Defense process by the SCP-M/SCP-O models.
Figure 6
Figure 6 Secure and insecure MAC layers in the simulation environment.
Figure 7
Figure 7 Structure of the three types of wireless networks in the simulation environment.
Figure 14
Figure 14 shows how the SCP-M model functions when the DoS attack starts at 30th second and the first
Figure 14
Figure 14 Discarding forgery CP with fresh TS and invalid AF by the SCP-M model.
Figure 15
Figure 15 Discarding forgery CP with old TS by the SCP-M model.
Figure 16
Figure16Accepting valid CP with fresh TS and valid AF by the SCP-M model.
Table 1
Wireless DoS attacks prevention schemes ValidationTwo timers as RTS-DATA and CTS-ACK to check reception of data and ACK packets, respectivelyThe model is incapable of preventing DoS attacks using contention-free CP Bellardo and Savage (2003) [26] Validation Place limit on duration value of CP: ACK duration must be zero, discarding RTS if data frames is not sensed, ignoring isolated CTS packets The model does not specify prevention of contentionfree DoS attacks.Besides ignoring CTS packets while they may belong to hidden nodes can significantly degrade wireless network performance
Table 2
System parameters
Table 4
PLR comparison | 2017-10-24T22:49:50.725Z | 2011-11-28T00:00:00.000 | {
"year": 2011,
"sha1": "155074ba9f5592b7e095d13f4a935c3542dcb3f4",
"oa_license": "CCBY",
"oa_url": "https://jis-eurasipjournals.springeropen.com/track/pdf/10.1186/1687-417X-2011-4",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "155074ba9f5592b7e095d13f4a935c3542dcb3f4",
"s2fieldsofstudy": [
"Computer Science",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
248576504 | pes2o/s2orc | v3-fos-license | Climate Adaptation and Successful Adaptation Definitions: Latin American Perspectives Using the Delphi Method
: Across the world, policies and measures are being developed and implemented to reduce the risks of climate change and adapt to its current and projected adverse effects. The Paris Agreement established the global stocktake to evaluate the collective progress made on adaptation. Nevertheless, various challenges still exist when evaluating adaptation progress, among which is the lack of standard definitions to support evaluation efforts. Therefore, we investigated the views of experts regarding the definitions of adaptation given by the Intergovernmental Panel on Climate Change (IPCC) and the definition of successful adaptation by Doria et al., with a focus on Latin America. Using the Delphi method, we obtained relevant knowledge and perspectives. As a result, we identified a high level of consensus (85%) among the experts regarding the IPCC’s definition of climate adaptation. However, there was no consensus on the definition of successful adaptation. For both definitions, we present the elements on which the experts agreed and disagreed, as well as the proposed elements that could improve the definitions to support adaptation evaluation efforts. Additionally, we introduce a list of criteria and indicators that could improve the evaluation of adaptation at different management levels and facilitate the aggregation of information on adaptation progress.
Introduction
Currently, natural and human systems are experiencing the adverse effects of more than 1 • C of mean global warming compared to pre-industrial levels [1,2].Therefore, there is a need for ecosystems and societies to adapt to the changing climate conditions.Policies and measures to adapt to and reduce climate-change-imposed risks are therefore being developed and implemented at different scales and in different settings across the globe [3][4][5].However, due to the inherent complexities of adaptation, it is not easy to assess whether the climate adaptation measures implemented are actually helping ecosystems and societies to adapt successfully.Context-specificity, meaning that what is identified as progress or successful adaptation by one community may not be recognized as such by another, is one of the key adaptation complexities involved [4][5][6][7][8][9][10][11][12][13][14][15].
Acknowledging such complexities, an important prerequisite to conducting a meaningful assessment of adaptation success is to have a sound understanding of what adaptation means.The IPCC's Working Group II [16] (p.118) is the most commonly cited definition of climate adaptation: "The process of adjustment to actual or expected climate and its effects.In human systems, adaptation seeks to moderate or avoid harm or exploit beneficial opportunities.In some natural systems, human intervention may facilitate adjustment to expected climate and its effects".In the realm of successful adaptation an example is given by Doria et al. [17] (p.817): "successful adaptation is any adjustment that reduces the risks associated with climate change, or vulnerability to climate change impacts, to a predetermined level, without compromising economic, social, and environmental sustainability".
Despite these and other academic efforts to define climate adaptation (e.g., [16,18]) and successful adaptation (e.g., [5,9,17]), the literature still shows a limited understanding of both.For instance, scholars identify the IPCC's definition of adaptation as being not "operational", since it does not include specific elements that would allow measuring the progress obtained through adaptation measures [17,[19][20][21].Similarly, to the discussion on a standard definition for adaptation, the issue of successful adaptation has also been identified as an adaptation research priority [13,22,23].
Current climate adaptation research is even more limited for the case of vulnerable regions in the Global South [24,25].One of these regions is Latin America [2,[24][25][26], which has been identified as "highly exposed, vulnerable and strongly impacted by climate change" [25], with the level of implementation of adaptation lagging behind the actual needs [25,27,28].Equally, there are insufficient financial resources [27,28], as well as scarce information on the feasibility, and monitoring and evaluation (M&E) of adaptation options in the region [25,26,29].Overcoming these informational and financial limits is essential for the adequate funding and implementation of adaptation priorities [30].
Moreover, it is crucial to note that the scope of the adaptation policies and monitoring and evaluation frameworks used in Latin America is limited to climate impact drivers, excluding social and economic aspects that influence the effectiveness of adaptation measures [25,31].Among the barriers limiting adaptation policy monitoring and assessment in the region are the lack of a clear delimitation of adaptation policies, the lack of indicators to assess the effectiveness of adaptation measures, and the lack of mechanisms with which to track adaptation [29].
The limitations on monitoring and evaluation in Latin America fall short of the ambitions for adaptation set at the global policy level.The global stocktake (GST) and global goal on adaptation (GGA) were established by the Paris Agreement within the United Nations Framework Convention on Climate Change (UNFCCC).The GST serves as the overarching mechanism with which to assess collective progress on mitigation, adaptation, and climate finance based on national reporting instruments.As part of the GST and in the realm of adaptation, the GGA includes a reduction in vulnerability, increase in resilience, and increase in adaptive capacities [32].
However, most of the literature on the implementation and progress of adaptation is related to measures implemented at the local level.This and the circumstances of adaptation as they are at present, for example, in Latin America, present challenges at other levels of management in terms of data availability and comparable and meaningful indicators or proxies to measure adaptation, especially from the local to the global scale [20].
The first GST is planned for 2023, and it will also review the overall progress made concerning the GGA [32].However, how can the impact of adaptation policies and interventions be measured or assessed if we do not have a common definition of adaptation or what successful adaptation entails?Moreover, how can we use information produced at local or subnational levels at the international (aggregated) level to inform the GST?
To contribute to the establishment of definitions of climate adaptation and successful adaptation, especially one that is applicable to different contexts and local specificities across the globe, it is pivotal that different perspectives be taken into account [6].Therefore, we investigated the views of Latin American experts on the definition of adaptation according to the IPCC [16], as well as those on the definition of successful adaptation developed by Doria et al. [17].
We used the Delphi method, a "group facilitation technique", which utilizes an iterative, multistage process, to transform opinion into group consensus [33] (p.1008).The method has been used in a wide range of sectors and for multiple objectives, including for aspects relating to climate change adaptation (e.g., [17,[34][35][36][37][38]). The method has already been applied by Doria et al. [17] in their development of their own definition of successful adaptation.The Delphi method allowed us to identify the perspectives obtained from a heterogeneous panel of Latin American adaptation experts.The method facilitated a co-production process between the researchers and experts by identifying elements of agreement and disagreement.In this way, this method also facilitated the identification of ways to improve the existing definitions.Additionally, the method let us identify a list of criteria and indicators that could be used for aggregating information on adaptation from the local level to the global level to inform the GST.
With our work, we aim to provide guidance on (1) the aspects of definitions of adaptation and successful adaptation to foster their general operability and their use in adaptation success assessment, and (2) criteria and indicators that could support efforts to aggregate information on adaptation progress, for example, in the frame of the GST.To strengthen the respective research focusing on the Global South we apply our efforts to the case of Latin America.
Why Are Definitions for Climate Adaptation Important?
Definitions aim to establish and clarify what a word entails.They help to avoid ambivalences or ambiguities.Bassett and Folgemann [39] (p.51) highlight that "how we think and talk about adaptation matters in current and future debates on transformative climate action".Until recently, adaptation to climate change was considered a nascent policy and research field [40][41][42].However, new literature shows that climate adaptation research is rapidly increasing in volume and diversifying [24,43,44].Moreover, following the establishment of the GST, research related to adaptation assessment has gained prominence.
Recent literature speaks of climate adaptation as a public good [9], as a public goal [51], and as an investment [13].Moreover, is seen as a process, an adjustment, or an outcome [50,52].All those perspectives highlight the need to evaluate adaptation measures, especially in light of the limited financial resources available, the global policies in place, and the risk of maladaptation [3,30,50,53,54].The questions regarding the definition of adaptation and successful adaptation are relevant to all levels where the planning, design, and implementation of adaptation take place.However, there might be "no easy or political answers" [9] (p.1), underpinning the need for a profound scientific understanding of what adaptation and its success entail.
As part of a wider debate, there are discussions on the need to differentiate adaptation from development [19,55,56], as well as discussions about whether adaptation outcomes should be additional or complementary to those obtained from development interventions alone [19,54].
According to Moser and Boykoff [9], investigating successful adaptation achieves the following goals: communication and public engagement, deliberate planning and decisionmaking, improved fit with other policy goals, justification of adaptation expenditures, improved accountability, and support for learning and adaptive management.
Regarding the assessment of adaptation measures and the aggregation of relevant information, the UNFCCC guides policies and actions undertaken at different management levels.In this regard, Magnan and Ribera [45] (p.1282) find it "crucial to overcome the intuitive and subjective understanding of adaptation".The establishment of the GST as part of the Paris Agreement reflects and responds to the need for a better overview of how well or how successfully we adapt to climate change.However, how do we arrive at a reliable overview?Magnan [57] indicates the need to develop metrics, which must comply with two characteristics: the consideration of context-dependent aspects ("national circumstances") and allowing for the aggregation of information from the local through to the global level.
The UNFCCC already recognizes the multiple dimensions where adaptation actions or interventions take place [32].Despite this, much of the literature describes adaptation as a "local" issue [44].As a result, monitoring and evaluation (M&E) frameworks are mainly developed for use at the local level (e.g., for a community or project/program) [58].Likewise, the GST's evaluation of adaptation progress is based on national assessments, and accordingly most efforts to inform the GST focus on the national level (e.g., [59][60][61][62][63]).However, adaptation and reporting on adaptation progress also need to be considered as part of broader subnational, national, regional, and global mechanisms, such as the GST [64] (see Figure 1).Nevertheless, there are also limits to an aggregated view of adaptation, as not all metrics can be used at all levels [19].
The UNFCCC already recognizes the multiple dimensions where adaptation actions or interventions take place [32].Despite this, much of the literature describes adaptation as a "local" issue [44].As a result, monitoring and evaluation (M&E) frameworks are mainly developed for use at the local level (e.g., for a community or project/program) [58].Likewise, the GST's evaluation of adaptation progress is based on national assessments, and accordingly most efforts to inform the GST focus on the national level (e.g., [59][60][61][62][63]).However, adaptation and reporting on adaptation progress also need to be considered as part of broader subnational, national, regional, and global mechanisms, such as the GST [64] (see Figure 1).Nevertheless, there are also limits to an aggregated view of adaptation, as not all metrics can be used at all levels [19].
In addition to the disconnection between the levels where adaptation policies are developed and actions implemented, most M&E frameworks developed for adaptation focus on providing accountability.This approach aligns with the need to guarantee that the limited resources available for adaptation are invested efficiently [65].However, it does not provide guidance on, for example, the goals of vulnerability reduction or how to increase resilience [54,66].Policymakers and practitioners face this type of challenge when evaluating and aggregating information on adaptation progress, together with those related to context, definitions chosen, and the availability of information [49,67,68].
The Delphi Method
The Delphi method is a versatile and valuable social research technique [33,69,70].
The key characteristics of the Delphi method are that it is an iterative process between rounds of questionnaires that guarantees anonymity, has controlled feedback, and provides a statistical response [70,71].
The development of a Delphi exercise does not require face-to-face meetings.Instead, it allows reaching consensus through rounds of questionnaires, which are later analyzed and fed back to the panel members (experts) [33,70].Each round of questionnaire answers serves as the basis for the next.As a result, direct interaction between experts is limited.This last aspect has been identified as a limitation of the method, as limited interactions could also imply that meaningful exchange within the expert group is absent from the process [72].Nevertheless, this method allows the identification of the elements of In addition to the disconnection between the levels where adaptation policies are developed and actions implemented, most M&E frameworks developed for adaptation focus on providing accountability.This approach aligns with the need to guarantee that the limited resources available for adaptation are invested efficiently [65].However, it does not provide guidance on, for example, the goals of vulnerability reduction or how to increase resilience [54,66].Policymakers and practitioners face this type of challenge when evaluating and aggregating information on adaptation progress, together with those related to context, definitions chosen, and the availability of information [49,67,68].
The Delphi Method
The Delphi method is a versatile and valuable social research technique [33,69,70].
The key characteristics of the Delphi method are that it is an iterative process between rounds of questionnaires that guarantees anonymity, has controlled feedback, and provides a statistical response [70,71].
The development of a Delphi exercise does not require face-to-face meetings.Instead, it allows reaching consensus through rounds of questionnaires, which are later analyzed and fed back to the panel members (experts) [33,70].Each round of questionnaire answers serves as the basis for the next.As a result, direct interaction between experts is limited.This last aspect has been identified as a limitation of the method, as limited interactions could also imply that meaningful exchange within the expert group is absent from the process [72].Nevertheless, this method allows the identification of the elements of agreement, level of consensus, and hierarchization among the different aspects that a group of experts evaluates.
The anonymity allowed by the Delphi method helps with the co-production of knowledge by avoiding issues of power and prestige between the experts, which could affect the co-production process [73].The questionnaire and reports summarize the information and arguments given by the experts.It is necessary for the coordinating team to have strong abilities to analyze and extract the views from the experts [33].
Another characteristic of the Delphi method is that it does not rely on a random or representative sample.Thus, the results obtained through the method represent only the professional opinion of those experts who participate in the exercise [74].Additionally, reaching consensus might not necessarily mean that the correct answer has been found [74,75].However, the results obtained can be used to further deepen the debate around the issue under study [33].
Implementation Framework
Based on the information presented previously, this section describes the process and steps we followed to implement the Delphi method in our research.Figure 2 summarizes the actions taken by the researchers (left) and the actions taken by the members of the panel of experts (right).The black and blue arrows represent their interactions.
agreement, level of consensus, and hierarchization among the different aspects that a group of experts evaluates.
The anonymity allowed by the Delphi method helps with the co-production of knowledge by avoiding issues of power and prestige between the experts, which could affect the co-production process [73].The questionnaire and reports summarize the information and arguments given by the experts.It is necessary for the coordinating team to have strong abilities to analyze and extract the views from the experts [33].
Another characteristic of the Delphi method is that it does not rely on a random or representative sample.Thus, the results obtained through the method represent only the professional opinion of those experts who participate in the exercise [74].Additionally, reaching consensus might not necessarily mean that the correct answer has been found [74,75].However, the results obtained can be used to further deepen the debate around the issue under study [33].
Implementation Framework
Based on the information presented previously, this section describes the process and steps we followed to implement the Delphi method in our research.Figure 2 summarizes the actions taken by the researchers (left) and the actions taken by the members of the panel of experts (right).The black and blue arrows represent their interactions.
Selection of Experts and Communication
Experts are "informed individuals" and "specialists in their field" [74] (p.196).Considering that adaptation policies and implementation are developed at different scales and by different actors [5,76], we aimed to gather different perspectives on definitions of
Selection of Experts and Communication
Experts are "informed individuals" and "specialists in their field" [74] (p.196).Considering that adaptation policies and implementation are developed at different scales and by different actors [5,76], we aimed to gather different perspectives on definitions of adaptation and successful adaptation from a heterogeneous panel of Latin American climate adaptation experts.
We identified a pool of 77 professionals working in academia, non-governmental organizations, and governmental dependencies designing and implementing adaptation actions.A total of 50 out of the 77 identified professionals were invited by e-mail to participate in the panel as experts.Out of the 50 invitees, 40 experts (80%) accepted the invitation.The selection was based on the experts' publications, known experience, and professional networks.Of the 40 invited, 32 participated in the first round and 20 in the second.This decrease in the experts' participation between both rounds is reported as a common situation in Delphi exercises (e.g., [17,71,74,77]).
After the selection, the facilitator contacted the experts via e-mail.That first contact included an introduction to the aims of this work and the Delphi method.
The invitation also included the estimated time taken to answer the online questionnaires and how often the researchers would contact them.Finally, we offered a meeting to clarify any questions or doubts concerning the goals and methods used.Once the experts confirmed their interest in participating, an e-mail containing the link to the questionnaire was sent.In addition, the facilitator sent reminders prior to the questionnaires' deadlines.
Profile of Members of the Panel of Experts
The researchers aimed to assemble a heterogeneous panel of Latin American experts to collect governmental, non-governmental, and academic perspectives.Table 1 and Figure 3 confirm that that objective was achieved.The majority of the experts resided in Colombia, Guatemala, Uruguay, and Mexico.networks.Of the 40 invited, 32 participated in the first round and 20 in the second.This decrease in the experts' participation between both rounds is reported as a common situation in Delphi exercises (e.g., [17,71,74,77]).
After the selection, the facilitator contacted the experts via e-mail.That first contact included an introduction to the aims of this work and the Delphi method.
The invitation also included the estimated time taken to answer the online questionnaires and how often the researchers would contact them.Finally, we offered a meeting to clarify any questions or doubts concerning the goals and methods used.Once the experts confirmed their interest in participating, an e-mail containing the link to the questionnaire was sent.In addition, the facilitator sent reminders prior to the questionnaires' deadlines.
Profile of Members of the Panel of Experts
The researchers aimed to assemble a heterogeneous panel of Latin American experts to collect governmental, non-governmental, and academic perspectives.Table 1 and Figure 3 confirm that that objective was achieved.The majority of the experts resided in Colombia, Guatemala, Uruguay, and Mexico.Table 1 provides an overview of the experts' gender, years of experience, type of organization where the experts acquired most of their experience in adaptation, current professional role, and professional background.In terms of gender, the participation of women and men was balanced in the two rounds.A significant number of the experts had long-standing (>10 years) adaptation-related professional experience (69% and 75% for the first and second rounds, respectively).In terms of the type of institution/organization in which they had spent most of their adaptation-related career, the results were also well distributed between government (G), academia/research (A/R), development aid (D) Table 1 provides an overview of the experts' gender, years of experience, type of organization where the experts acquired most of their experience in adaptation, current professional role, and professional background.In terms of gender, the participation of women and men was balanced in the two rounds.A significant number of the experts had long-standing (>10 years) adaptation-related professional experience (69% and 75% for the first and second rounds, respectively).In terms of the type of institution/organization in which they had spent most of their adaptation-related career, the results were also well distributed between government (G), academia/research (A/R), development aid (D) organizations, and non-governmental organizations (NGOs).In terms of their current role, most experts identified themselves as researchers (38% and 45% in each round), followed by policymakers (25%, 15%) and consultants (22%, 20%).In addition, the experts identified other professional roles (19%, 20%), such as director, manager, independent consultant, specialist, and professor.Most experts had backgrounds in environmental or natural sciences (41%, 50%), followed by social sciences (37%, 30%) and applied sciences (16%, 20%).Only 6% of the experts included in the first round indicated that they had a background in environmental and social sciences.
Questionnaires
In this exercise, we performed two rounds of online questionnaires.We developed and shared the online questionnaires using Survey Hero (https://www.surveyhero.com/).Together with sharing and collecting the information, the tool calculated the arithmetic average, mean and standard deviation, and weighting (where appropriate).
Following the Delphi method (Section 3.1), the first questionnaire primarily consisted of open-ended questions to allow experts to have freedom in their responses and allow us to obtain individual perspectives, which should be the basis of the following questionnaire.The second questionnaire was mainly made up of closed questions.
To obtain information on both definitions being studied and provide guidance on the aspects that could be improved, the questionnaires included information about the study's aim, a summary of the Delphi method, and additional background information.The questions were organized into three main sections: (A) adaptation definition, (B) successful adaptation definition, and (C) elements for operationalizing the definition of successful adaptation.Sections A and B included questions on the overall level of agreement, the elements of the definitions under study that the experts agreed and disagreed with, and additional elements that could be considered to improve the definitions.We also included a question on the need for definitions specific to each management level.Section C included questions on the operationalization of the definition of successful adaptation.Here, the experts could identify useful and missing elements in the definition that could support the evaluation of adaptation at different management levels.Finally, the first questionnaire included an additional section (D) relating to the experts' background information.
The second questionnaire was prepared based on the answers to the first questionnaire [33,70].First, we analyzed the answers (including the frequencies, means and standard deviations, where appropriate).Afterwards, we grouped similar items.In this case, we listed all the elements identified by the experts in each question.Based on that, the experts confirmed their agreement or disagreement with the listed elements.Additionally, they identified the degree of importance for improving the definitions or the usefulness of those elements.Experts could identify more than one aspect that they agreed or disagreed with.Furthermore, each section had an additional field where the experts could share further comments.
In both questionnaires, Likert-type scale questions were included to identify and verify their level of agreement with both definitions (total disagreement to total agreement).The average agreement included a scale of 3 (−/+).We calculated the level of consensus considering the answers given as "agree and totally agree" scales (scales 4 and 5).To avoid misunderstandings, in this work the percentage symbol (%) included in the results refers to the number of experts answering a question or indicating their agreement/disagreement.Weights on importance or usefulness are reported as a fraction of 1 (0 not important or not useful/1 very important or very useful).The weights included in this work represent the average weight for each element, as indicated by the experts.
In terms of time, the experts had at least one month to answer and complete the questionnaires.After the analysis, we prepared and shared a report presenting the answers and arguments for each questionnaire round.The questionnaires were developed in Spanish and implemented between September 2020 and March 2021.
Qualitative Analysis
We based our analysis on the information provided by the experts in the first questionnaire.In addition, we developed a category system using inductive category formation (categories based on the data) [78], which allowed us to identify the elements of the def-initions under analysis with which the experts agreed or disagreed.We also used this approach to identify elements proposed to improve the definitions.
Once the category system was developed, we extracted and analyzed information about the frequency of the different elements.That information served as the basis with which to develop the list of elements provided in the second questionnaire, in which the experts could identify their agreement or disagreement.
The researchers used the MAXQDA software to qualitatively analyze the answers given in the open-ended questions in the first questionnaire.
Consensus with the Definitions
The Delphi exercises stop once a predetermined level of consensus is reached.To define the experts' level of consensus with the definitions under study, we followed the limit used by Doria et al. [17] (>80%).
Therefore, this exercise stopped after the second questionnaire, as the level of agreement with the definition of adaptation reached 85%.However, as we were not proposing a new definition of successful adaptation, we also decided to stop the exercise with an agreement level of only 50%.While the level of agreement with the definition of successful adaptation was lower than the >80% defined by Doria et al. [17], the authors considered it a stable response (compared to the 53% obtained in the first questionnaire).Stable responses could be a "more reliable indicator of consensus" [33] (p.1011).This low level of agreement reflects the experts' different concerns about this definition.
Results
Below, we present the main results related to the revision of the definitions of adaptation together with those of successful adaptation.In a separate section, we present the results relating to the aggregation of information.Appendix A presents a summary of the results.
Perspectives on the Definitions
In this section, we present the results related to the level of agreement or consensus, the elements upon which the experts agreed, and those upon which they disagreed.Additionally, we list the elements identified by the experts which could be considered in future revisions of the definitions.
Consensus with the Definitions
We consulted with the experts regarding their level of agreement with the IPCC's [16] definition of adaptation.In both rounds, the level of agreement with the IPCC's definition was high (75% and 85% of the answers, respectively).Therefore, there was consensus among the experts (>80%) regarding this definition.The average agreement levels were 76% and 79% (Figure 4).General comments on the definition of climate adaptation mentioned its complexity, which depends on different factors such as level of implementation, sector, and type of adaptation.Comments also highlighted that more than specific definitions, work on adaptation needs to be guided by general principles or criteria, allowing the alignment of General comments on the definition of climate adaptation mentioned its complexity, which depends on different factors such as level of implementation, sector, and type of adaptation.Comments also highlighted that more than specific definitions, work on adaptation needs to be guided by general principles or criteria, allowing the alignment of actions with specific objectives.
Similarly, as for the definition of adaptation, experts identified their level of agreement with the definition of successful adaptation proposed by Doria et al. [17].The experts agreed less with the definition of successful adaptation than with the presented definition of adaptation.After the two rounds, no consensus was reached.The levels of agreement were 53% and 50%.The average levels of agreement were 68% and 70% for each round, respectively (Figure 5).General comments on the definition of climate adaptation mentioned its complexity, which depends on different factors such as level of implementation, sector, and type of adaptation.Comments also highlighted that more than specific definitions, work on adaptation needs to be guided by general principles or criteria, allowing the alignment of actions with specific objectives.
Similarly, as for the definition of adaptation, experts identified their level of agreement with the definition of successful adaptation proposed by Doria et al. [17].The experts agreed less with the definition of successful adaptation than with the presented definition of adaptation.After the two rounds, no consensus was reached.The levels of agreement were 53% and 50%.The average levels of agreement were 68% and 70% for each round, respectively (Figure 5).Regarding the need to have specific definitions for each management level, 69% of the experts did not identify such a need related to the definition of adaptation.In the case of the definition of successful adaptation, in the first questionnaire 56% of the experts identified that having a specific definition for each level of management could be useful.Therefore, the question was reframed in the second questionnaire.We asked the experts to identify their preference for these two options: (1) specific definitions for each management level, and (2) a general definition adaptable to each level of management.As a result, 90% of the experts chose option 2.
Elements of Agreement with the Definitions
In the first round, the experts were asked to identify the elements of the definitions with which they agreed.Regarding the adaptation definition, there were two aspects with which the experts agreed more: adaptation as a process of adjustment (56%) and the Regarding the need to have specific definitions for each management level, 69% of the experts did not identify such a need related to the definition of adaptation.In the case of the definition of successful adaptation, in the first questionnaire 56% of the experts identified that having a specific definition for each level of management could be useful.Therefore, the question was reframed in the second questionnaire.We asked the experts to identify their preference for these two options: (1) specific definitions for each management level, and (2) a general definition adaptable to each level of management.As a result, 90% of the experts chose option 2.
Elements of Agreement with the Definitions
In the first round, the experts were asked to identify the elements of the definitions with which they agreed.Regarding the adaptation definition, there were two aspects with which the experts agreed more: adaptation as a process of adjustment (56%) and the inclusion of human systems (56%).These were followed by the indirect reference to variability and climate change (34%) and human intervention in natural systems (34%).On the other hand, only 13% agreed with the reference to both systems (natural and human), and 6% agreed with the differentiation among both systems.Some of the elements listed above were listed separately in the second round (e.g., variability and climate change).Experts were asked to confirm their agreement with and identify the importance of each element.More than 75% of the experts agreed with all the listed elements.Regarding importance, the reference to both systems in the definition scored the highest (0.91).The element about which all experts (100%) agreed was the reference to "exploit beneficial opportunities", while it achieved the lowest weight in terms of importance (0.68).
Regarding the definition of successful adaptation, the experts agreed on reducing risks and vulnerability (44%) as well as sustainability (31%) in the first round.In addition, other aspects were mentioned, such as adjustment (16%), predetermined level (13%), and a focus on climate change (6%).
In the second round, the experts confirmed their agreement with those elements and identified their importance.The experts agreed with most of the elements (>80%).Only the reference to a predetermined level obtained less than 80% agreement (55%).Most experts agreed (95%) with the reference to reducing risks.However, the experts identified reducing vulnerability as the most important element (0.96).
Elements of Disagreement with the Definitions
Despite the high level of agreement with the definition of adaptation, 41% of the experts identified elements of it with which they disagreed.The aspects identified in the first round were the differentiation of both systems (41%), adaptation as an adjustment process (22%), and the limitation to climate change (3%).Except for the limitation to climate change, the experts agreed with all the elements identified as elements of the agreement in the first round.Therefore, we asked the experts to confirm their agreement or disagreement with the elements in the second round.As a result, the only element most of the experts disagreed with was the limitations to climate change (50%).
For successful adaptation, many of the aspects where experts showed agreement in the first round were also identified as elements of disagreement: sustainability (31%), predetermined level (28%), adjustment (15%), measurement elements (9%), scope (9%), and the focus on climate change (6%).In the second round, the experts confirmed they agreed or disagreed with the listed elements.In this case, the experts disagreed with two elements: a lack of elements that allowed measuring progress (75%) and the reductionist approach (related to disaster risk reduction, not considering the transformative character of adaptation) (40%).
In the second round, the experts indicated which elements they agreed with and identified their importance to improve the definitions.The four elements with which the experts agreed the most (>80%) were the increase in adaptive capacity (90%), reduction in vulnerability (90%), systemic approach (85%), and temporality (80%).The reference to reducing vulnerability ranked the highest in terms of importance (0.94).
In the second round, the experts identified their agreement with and the importance of the listed elements.As a result, more of the elements received a high level of agreement among the experts (>70%).Increasing resilience garnered the most agreement (100%).In terms of importance, the increase in adaptive capacity scored the highest (0.93).
Operationalization of the Definition of Successful Adaptation
Despite the efforts made, the academic literature states that the existing definitions are not operational.That is, at present the definitions do not support efforts to evaluate progress made through climate change adaptation measures implemented at different management levels [7,14,19,55].
In this section, we present the results related to the usefulness of Doria et al.'s definition [17] for measuring progress in climate adaptation.Additionally, we present methods and approaches that could facilitate the aggregation of information on progress.To this end, we introduce a list of the criteria and indicators identified by the experts which could improve capacities to measure progress at different levels of management.
Usefulness of the Definition of Successful Adaptation at Different Management Levels
We asked the experts to identify how useful the definition of successful adaptation is, in general, for supporting the evaluation of climate change adaptation (Figure 6).In total, 41% and 60% of the experts found the definition useful, in each of the rounds.
are not operational.That is, at present the definitions do not support efforts to evaluate progress made through climate change adaptation measures implemented at different management levels [7,14,19,55].
In this section, we present the results related to the usefulness of Doria et al.'s definition [17] for measuring progress in climate adaptation.Additionally, we present methods and approaches that could facilitate the aggregation of information on progress.To this end, we introduce a list of the criteria and indicators identified by the experts which could improve capacities to measure progress at different levels of management.
Usefulness of the Definition of Successful Adaptation at Different Management Levels
We asked the experts to identify how useful the definition of successful adaptation is, in general, for supporting the evaluation of climate change adaptation (Figure 6).In total, 41% and 60% of the experts found the definition useful, in each of the rounds.When asked about the usefulness at the different management levels (local, subnational, national, and global), the experts stated that the definition was progressively more useful when moving from the local to higher management levels (31%, 44%, 66%, and 75%, respectively) (Figure 7).However, at the local level only, some experts (13%) stated that the definition was not useful, and 9% identified it as not applicable.When asked about the usefulness at the different management levels (local, subnational, national, and global), the experts stated that the definition was progressively more useful when moving from the local to higher management levels (31%, 44%, 66%, and 75%, respectively) (Figure 7).However, at the local level only, some experts (13%) stated that the definition was not useful, and 9% identified it as not applicable.
are not operational.That is, at present the definitions do not support efforts to evaluate progress made through climate change adaptation measures implemented at different management levels [7,14,19,55].
In this section, we present the results related to the usefulness of Doria et al.'s definition [17] for measuring progress in climate adaptation.Additionally, we present methods and approaches that could facilitate the aggregation of information on progress.To this end, we introduce a list of the criteria and indicators identified by the experts which could improve capacities to measure progress at different levels of management.
Usefulness of the Definition of Successful Adaptation at Different Management Levels
We asked the experts to identify how useful the definition of successful adaptation is, in general, for supporting the evaluation of climate change adaptation (Figure 6).In total, 41% and 60% of the experts found the definition useful, in each of the rounds.When asked about the usefulness at the different management levels (local, subnational, national, and global), the experts stated that the definition was progressively more useful when moving from the local to higher management levels (31%, 44%, 66%, and 75%, respectively) (Figure 7).However, at the local level only, some experts (13%) stated that the definition was not useful, and 9% identified it as not applicable.In the second round, the definition was identified as useful for the local and subnational levels by 80% of the experts and for the national and global levels by 85%.This time, the experts also identified the degree of usefulness of the definition for each level.The experts identified the definition as less useful at the local level (0.65) compared to the national level (0.76).The experts gave the same weight (0.73) for the subnational and global levels (Table 2).In this case, the difference for the different levels was smaller than the one identified in the first round.After identifying the general usefulness of the definition of successful adaptation, the experts identified useful and missing elements for different levels of management.Table 3 shows that the experts identified the same elements for the different levels with slight differences in the agreement and usefulness (weight).On the contrary, there were differences when comparing the elements identified as missing from the definition (Table 4).The experts identified three elements to be missing for the different levels of management: adaptive capacity, resilience, and measuring elements.In comparison, the definition of the scope, climate variability, levels of management, and cultural aspects were identified as missing only for the local level.In addition, the elements of context and the definition of adjustment were identified as missing only for the subnational to global levels.
Aggregation
Another component of the exercise was investigating aspects of the feasibility of aggregating information on adaptation.In this case, we refer to information from the local level that can inform progress made at the national level, which at the same time could serve to inform global progress made in adaptation, as suggested by Magnan [57].
In the first questionnaire, the experts identified elements used to measure progress at the local level that should be considered at the global level.As a result, 66% of the experts indicated criteria or indicators, 19% mentioned methods of measuring progress, and 9% referred to approaches.Table 5 shows the methods and approaches indicated by the experts.Section 4.2.3 includes more detail on the identified criteria and indicators.
Additionally, 28% of the experts questioned the overall feasibility of aggregating information on adaptation.Considering the results obtained from the first questionnaire, we asked the experts specifically about the feasibility of aggregating information on adaptation progress from the local to the global level in the second round.As a result, only 35% agreed on the feasibility, while 15% thought that it was not possible to aggregate information.A total of 50% of the experts were unsure.As mentioned before, in the first questionnaire 66% of experts identified criteria or indicators that would support adaptation monitoring and evaluation efforts at different management levels, which would, at the same time, facilitate the aggregation of information progress on adaptation.We grouped the criteria and indicators identified in the previous round into the three components of the global goal on adaptation, i.e., increasing adaptive capacity, increasing resilience, and reducing vulnerability.The information was presented for each level: local, subnational, national, and global.The experts identified whether the criteria and indicators were useful for that specific management level and their degree of usefulness (Table 6).Besides the elements listed in Table 6, in the second round, some experts identified additional elements for each component of the GGA.These results might indicate that additional elements could have been identified if additional rounds were performed.
Discussion
This section presents our reflections on the results obtained from a panel of Latin American experts regarding the revisions of the definitions of climate adaptation [16] and successful adaptation [17].We present the level of agreement with the definitions and the elements that could help in evaluation and aggregation efforts.Additionally, we reflect on the Delphi method and its limitations for the co-production of knowledge in climate change adaptation research.
Revision of the Definitions
This exercise did not aim to produce new definitions but rather to revise and identify different elements and issues related to the definitions under study.Below, we present our reflections based on the issues identified by the panel of experts.
Adaptation Definition
We identified a consensus among the experts regarding the IPCC's [16] definition of adaptation.The element upon which most experts agreed was the reference to "exploit beneficial opportunities".Nevertheless, that reference scored the lowest in terms of importance.This result could reflect the concerns expressed by some experts regarding a "positive" view of the effects of climate change.According to some of the comments, the definition should focus on the adverse effects of climate change.
Opinions were divided (50%) about the focus on climate change as an element of disagreement after the two questionnaire rounds.For example, some experts proposed including general aspects of global change (i.e., environmental degradation).This result could reflect the debate of identifying (or not identifying) adaptation as a separate issue from the development agenda [19,54].
Among the elements proposed by the experts, the ones identified as more important were those related to the components of the global goal on adaptation (GGA).This is relevant as it confirms the importance of evaluating progress in the three components of the GGA, which is an issue covered in the current preparation work ahead of the first global stocktake.
Successful Adaptation Definition
In the case of Doria et al.'s [17] definition of successful adaptation, most experts agreed with the reference to a reduction in risks (95%).However, the experts identified the reduction in vulnerability as the most important element.This last aspect is aligned with research that highlights the fact that reducing vulnerability should be one of the objectives of adaptation (e.g., [30,50]).On the other hand, the experts disagreed with the lack of elements to allow for measurement (75%).
Regarding the elements used to improve the definition, most experts agreed on increasing resilience (100%) and increasing adaptive capacity (95%).The experts identified adaptive capacity as the most important element, in line with Ford and Berrang-Ford [44] and Dilling et al. [14].
Regarding the general usefulness of the definition for evaluation purposes, 60% of the experts identified the definition as useful.Additionally, most of the experts (90%) agreed that there was no need for a specific definition for each management level.Instead, the experts identified a general definition adaptable to each management level as the best alternative.This general definition could be supported by criteria and indicators applicable to each level.The results presented in Table 6 are an example of how the criteria and indicators might vary depending on the level of implementation of adaptation measures.
Additionally, according to the experts, the definition of successful adaptation has a lower degree of usefulness at the local level than when compared to higher management levels.While there are some critics of the view of adaptation as only a local concern (e.g., [44]), the results of this research confirm that it is necessary to consider the local context and its complexities when developing criteria and indicators for the evaluation of climate adaptation.Moreover, care is needed regarding the framing used to define how successful or effective an adaptation measure is [30].Finally, any effort related to evaluating adaptation needs first to showcase the characteristics of the level where adaptation is implemented.
Aggregation
Contrary to mitigation, it is difficult to aggregate information on the progress on adaptation [19].However, the global stocktake of the Paris Agreement's aim of assessing collective progress [32] demands that academia, practitioners, and policy-makers find ways to present information on adaptation progress.
This exercise reflects how challenging the effort to aggregate information on adaptation can be, with only 35% of the experts thinking it to be feasible.Nevertheless, the experts identified three approaches-objective measures, expert judgment, and inductive methods-that need to be considered when evaluating adaptation.In this regard, as shown in Table 5, most of the experts considered objective measures as the preferred and most useful ones.However, at the same time, the experts mentioned the challenges of measuring or establishing adaptation indicators in different sections of the questionnaires.This might imply that a combination of approaches should be used when evaluating progress made on adaptation.This is consistent with different research that suggests the use of different approaches (i.e., [10]).
At the same time, and as a more detailed contribution than the criteria identified by Doria et al. [17], it was possible to investigate different criteria and indicators at the different levels of management that could support efforts to aggregate information to inform global processes, such as the global stocktake.Experts identified the usefulness of the proposed criteria and indicators at each management level.
Added Value of the Delphi Method for Co-Production of Climate Change Adaptation Knowledge
This exercise has proven the Delphi method to be helpful for the co-production of knowledge related to adaptation to climate change.It allowed us to investigate, in an interactive way, the views on the definitions of adaptation and successful adaptation from the IPCC [16] and Doria et al. [17], respectively.
As a field related to different sectors and levels of governance, efforts related to climate adaptation require processes and methods that allow for exchange and inclusion among a diverse group of stakeholders.In this case, the use of the Delphi method fulfills many characteristics of co-production: as a means of addressing complex problems, producing knowledge, and recognizing different perspectives, while also allowing collaboration among various actors.The feedback process can be considered a social learning process [73].Moreover, in this exercise, the Delphi method facilitated collecting information and identifying different perspectives from a heterogeneous group of experts with different backgrounds, with different levels of technical expertise, and from different countries, despite the ongoing pandemic.
The information obtained can facilitate a common understanding of the goals and results achieved from adaptation actions.Furthermore, the method proved to be flexible, a valuable characteristic for adaptation research, considering the different contexts in which adaptation measures are designed and implemented.
Although the results obtained are not a statistical representation, they present the view of experts in the field, reflecting on critical aspects that need to be considered when evaluating adaptation.Moreover, in this case, the information reflects a regional perspective.
As a limitation of this study, it should be mentioned that only the survey coordinator performed the coding and analysis of the responses.This could have led to biases in the list of elements or issues identified.Sufficient time needed to be allocated for the coding phase, especially after the first round of questionnaires, which mainly consisted of open-ended questions.Following an inductive analysis, the coding phase was the most time-consuming part of the exercise.Furthermore, there were challenges related to the possibility of different interpretations of the questions and the different nature of the issues identified by each expert during the development of the exercise [33,71].
Conclusions
Global policy agendas might guide adaptation actions, but actions are implemented at the local level.Adaptation implementation and success depend on site-specific conditions.Therefore, before adaptation progress or success can be evaluated more consistently on such different levels, we need to know how climate adaptation is defined and what is considered progress and success.Additionally, there is a need to identify ways to support efforts to aggregate information on adaptation progress.However, this discussion is absent from the climate-related literature on Latin America.Therefore, we investigated the perspectives of Latin American experts on the aforementioned issues using the Delphi method.Our results confirm the complexity of the discourse on adaptation.
Overall, the Delphi method proved to be useful for the co-production of knowledge, facilitating the identification of different aspects that can serve as a basis for improving climate change adaptation monitoring and evaluation activities.
We found a consensus (>80%) with the IPCC's definition of climate adaptation [16] among the Latin American experts.In contrast, there was no consensus regarding the definition of successful adaptation developed by Doria et al. [17].The aspects with which most of the experts disagreed were the lack of elements to support evaluation efforts and the lack of recognition of the potential for transformation that adaptation can provide.Instead, the experts identified resilience and adaptive capacity as elements that could improve Doria et al.'s [17] definition of successful adaptation.
Additionally, we presented a list of criteria and indicators of successful adaptation that could support evaluation and aggregation efforts.Such indicators have been identified as a knowledge gap in the Latin American region.Here, we observed that most of the criteria and indicators proposed by the experts were related to adaptive capacity, identified in the climate-related literature as a crucial component when implementing adaptation measures.Our results confirm that there is no one method or one approach for evaluating adaptation.
The criteria and indicators identified in this exercise can help in the investigation of successful adaptation characteristics applicable at different management levels while providing guidance for policy makers and practitioners ahead of the first global stocktake.While our results are limited to the identification of criteria and indicators, they could specifically contribute to a structured approach that captures aspects of representativeness and comparability, as suggested by Magnan and Ribera [45].For example, regarding the criteria and indicators identified for the adaptive capacity component of the GST, the elements of context and the factors that influence the performance of the adaptation measures could be investigated.
Additionally, future research efforts should focus on developing and characterizing the identified criteria and indicators for the different levels of management by identifying the kind of information needed at each level, how the information should be collected, and how it could be aggregated and integrated into the reporting tools.The Delphi method could also be applied to these objectives.Similar exercises could also be developed in other regions to identify, compare and analyze how the different perspectives, elements, criteria, and indicators identified depend on the geographical context.
In conclusion, we present the level of agreement of experts and ways to improve the definitions of climate adaptation and successful climate adaptation, as well as criteria and indicators that could help to aggregate adaptation information from the local to the global
Figure 7 .
Figure 7. Usefulness of the successful adaptation definition at different management levels (first round).
Figure 7 .
Figure 7. Usefulness of the successful adaptation definition at different management levels (first round).Figure 7. Usefulness of the successful adaptation definition at different management levels (first round).
Figure 7 .
Figure 7. Usefulness of the successful adaptation definition at different management levels (first round).Figure 7. Usefulness of the successful adaptation definition at different management levels (first round).
Table 2 .
Usefulness of the definition of successful adaptation at different management levels (second round).
Table 3 .
Useful elements of the successful adaptation definition.
Table 4 .
Elements missing from the successful adaptation definition.
Table 5 .
Identified methods and approaches.
Table 6 .
Identified criteria and indicators for the different for management levels. | 2022-05-10T16:19:39.376Z | 2022-04-29T00:00:00.000 | {
"year": 2022,
"sha1": "b5f60f2755d03e749f73a39c0e52c1e6bdea2381",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/14/9/5350/pdf?version=1652244837",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "6939a1972f55e657ec422e9c8f482fdb7d084253",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
43257829 | pes2o/s2orc | v3-fos-license | Virus Entry into Animal Cells
Publisher Summary In addition to its many other functions, the plasma membrane of eukaryotic cells serves as a barrier against invading parasites and viruses. It is not permeable to ions and to low molecular weight solutes, let alone to proteins and polynucleotides. Yet it is clear that viruses are capable of transferring their genome and accessory proteins into the cytosol or into the nucleus, and thus infect the cell. While the detailed mechanisms remain unclear for most animal viruses, a general theme is apparent like other stages in the replication cycle; their entry depends on the activities of the host cell. In order to take up nutrients, to communicate with other cells, to control the intracellular ion balance, and to secrete substances, cells have a variety of mechanisms for bypassing and modifying the barrier properties imposed by their plasma membrane. It is these mechanisms, and the molecules involved in them, that viruses exploit.
I. INTRODUCTION
In addition to its many other functions, the plasma membrane of eukaryotic cells serves as a barrier against invading parasites and viruses. It is not permeable to ions and to low molecular weight solutes, let alone to proteins and polynucleotides. Yet it is clear that viruses are capable of transporting their genome and accessory pro-teins into the cytosol or into the nucleus, and thus infect the cell. While the detailed mechanisms remain unclear for most animal viruses, a general theme is apparent like other stages in the replication cycle, entry depends intimately on the activities of the host cell. In order to take up nutrients, to communicate with other cells, to control the intracellular ion balance, and to secrete substances, cells have devised a variety of mechanisms for bypassing and modifying the barrier properties imposed by their plasma membrane. It is these mechanisms, and the molecules involved in them, that most viruses seem to exploit. The host cell provides surface receptors, endocytic activities, triggers for penetration, intracellular transport, and so on, and the viruses take advantage of these during infection. It follows that our understanding of early virus-cell interactions depends on progress in cell biology, physiology, and aspects of molecular biology which at first glance may appear to be only remotely pertinent.
The early phases of viral infection have remained a relatively neglected field in spite of their obvious importance for understanding viral replication, cell tropism, and pathogenesis. The reason may be that they are relatively intractable experimentally. Identification of viral receptors is often hampered by the weak affinity between individual viral proteins and receptor molecules, and receptors often occur in low numbers on the cell surface. The efficiency of productive entry is usually low (the particle-infective unit ratios can be as high as 100-1000). This means that biochemical and morphological studies are easily confused by virions following side pathways. It is, moreover, difficult to obtain a good signal at reasonable multiplicities of infection because the early events take place, by definition, before the amplification in signal due to replication.
In spite of these obstacles, considerable progress has been made. Model virus systems have been characterized approaching a plaqueforming unit (pfu) to particle ratio of 1. It is possible to incorporate sufficient radioactive label into virus particles to follow their fate even at low multiplicity. Modern immunochemical methods have proved powerful in identifying receptors. Relatively well-characterized inhibitors that block the productive infection pathway of large groups of viruses have been identified. In uivo and in vitro systems have been developed for studying penetration by membrane fusion. As a result, the step-by-step itinerary of the entry pathway of some viruses, such as the Semliki Forest virus (SFV) , are now known.
In this review we will focus on some of the principles in virus entry with emphasis on recent work. Individual viruses and virus families will be discussed only briefly to illustrate some of the generalizations made. For early work the reader should consult reviews by Dales (1973Dales ( , 1978, Lonberg-Holm and Philipson (19741, and Meager and Hughes (1977). More recent reviews include those of Helenius et al. (1980a), Dimmock (19821, Marsh (19841, Sommerfelt and Marsh (19881, White et al. (19831, Doms et al. (19881, and Stegmann et al. (1988). There are three recent meeting reports which contain relevant information on virus entry (Compans et al., 1988;Lonberg-Holm and Crowell, 1986) and on membrane fusion (Hoekstra and Wilschut, 1989;Ohki et al., 1988).
EARLY VIRUS-CELL INTERACTIONS: A GENERAL VIEW
The spread of viral infection depends on the transfer of the viral genome and accessory proteins from the cytosol (or the nucleus) of infected cells to the corresponding compartments of uninfected cells. When infection occurs via a virus particle-and not, as is sometimes the case, by direct fusion of the two cells-a sequence of discrete events take place. Depending on the virus, these include virus assembly in the infected cell, release of infective virions from the infected cell into the extracellular space, attachment of the viruses to receptor structures on the host cell surface, internalization by endocytosis, penetration through a host cell membrane, uncoating of the genome, and intracytoplasmic relocation to the nucleus or some other site in the cell.
Virtually all types of viruses can be seen by electron microscopy to be internalized by endocytosis. Endocytic uptake is probably not mandatory, however, for infectivity in all cases; many viruses have been shown to penetrate directly through the plasma membrane. It is now well established that viruses which depend on acidic pH (or some other property unique to endocytic organelles) require internalization to be infective.
Regardless of whether it is the plasma membrane or the membrane of an endocytic vacuole that serves as portal of entry, the genome and accessory proteins are usually first delivered to the cytosolic compartment. For successful entry the complete nucleocapsid may not need to enter; for some viruses only the genome need be made accessible to the cytosol. A dramatic example of such a mechanism is found in T-even bacteriophages, where the bulk of the viral particle actually remains extracellular.
Replication may occur in cytosolic complexes, in association with specific cytosolic membranes, or the genome may have to be transported in one form or other to the nucleus for replication. Although topologically continuous with the cytoplasm, the nucleoplasm can only be reached through the nuclear membrane or the nuclear pores. Due to a low size cutoff for passage through the pores (see Dingwall and Laskey, 19861, the nuclear membrane constitutes another barrier for those viruses destined for replication in the nucleus. Proteins smaller than about 70 kDa can pass freely through the pores, while larger particles can only enter by energy-dependent, selective uptake mechanisms which depend on specific signal determinants (see Dingwall and Laskey, 1986). Studies with gold particles coated with the appropriate signal peptides have shown that the pore structures can expand to allow particles with a diameter of 20 nm (ie., close to the size of viral nucelocapsids) to pass (Feldherr et al., 1984).
There is evidence to support the notion that some viruses possess, and make use of, nuclear targeting signals (Davey et al., 1985;Kalderon et al., 19861, and capitalize on existing cellular mechanisms for nuclear transport. Electron-microscopic evidence has, on the other hand, been presented suggesting that adenovirus particles, presumably released into the cytosol after lysis of endosomal membranes, may actually uncoat at nuclear pores and inject their genome into the nucleus (Dales, 1978). It cannot be ruled out that some (such as polyoma and SV40) may be delivered to the nucleus by membrane-bound vesicles (Mackay and Consigli, 1976;Maul et al., 1978;Nishimura et al., 1986) or enter the nucleus during mitosis when the nuclear membrane is dissociated.
TRANSPORT OF MACROMOLECULES THROUGH MEMBRANE BARRIERS
While similar in their overall pathways of entry, enveloped and nonenveloped viruses differ in the mechanisms of release and penetration. Enveloped animal viruses make elegant use of membrane fission and membrane fusion. Most nonenveloped viruses rely on lysis to escape from the infected host cell followed by poorly understood membrane translocation mechanisms during entry.
The membrane fission-fusion strategy would seem to have several advantages. It allows the viral genome to egress from and enter into cells without membrane lysis. Perhaps more importantly, at no stage during virus release from the infected cell, or entry, do the nucleocapsids need to be physically translocated through a membrane bilayer. This is a major advantage because major conformational alterations and a complex machinery is usually required for translocation of macromolecules through cellular membranes. Furthermore, packaging of large and segmented viral genomes need not be as rigorously controlled in terms of genome and capsid size as is the case for nonenveloped viruses. Many enveloped viruses are, in fact, polymorphic in size and shape. In normal cell life vesicle-mediated transport is clearly a preferred mode of cellular transport; it is common within the cytoplasm for flexible and efficient intercompartmental transport of cellular macromolecules (Palade, 1975).
Nonenveloped viruses generally exit from the infected cells by inducing cell lysis. During subsequent entry, they must penetrate a t least one host cell membrane by a translocation or a lytic mechanism. This may occur either at the level of the plasma membrane or the limiting membranes of endocytic vacuoles. The membrane translocation probably involves dramatic changes in the structure of the outer protein shell of the virus. These changes, which in many instances seem to be acid-activated, result in increased hydrophobicity of the virus surface and hydrophobic membrane attachment. Little detailed molecular data are yet available. The structural and functional studies on picornaviruses are, however, progressing rapidly (Hogle et al., 1985;Rossmann et al., 1985;Rossman and Palmenberg, 1988) and the mechanism of penetration may soon be understood.
IV. VIRAL RECEPTORS
In order to cause infection, a virus must be able to bind to a cell. Binding occurs through interactions between the surface proteins of the virion and structures on the target cell, "viral receptors." Enveloped and nonenveloped viruses have multiple identical proteins on their surface, thereby possessing the inherent capacity to bind to receptors through multivalent interactions. This provides them with considerable versatility in binding properties. For example, high-avidity binding may occur through a few high-affinity interactions; the human immunodeficiency viruses (HIV-1) gp120 protein binds with high affinity (Kd = 4 x Ml) to the HIV receptor CD4 (Lasky et al., 1987). Alternatively, high avidity can result from multiple lowaffinity interactions (Fries and Helenius, 1979).
Virus binding has, for the most part, been studied in tissue culture systems. A combination of factors including temperature, ionic strength, pH, composition of the medium, and the presence of serum has major effects on binding (Lonberg-Holm, 1981). A variety of techniques including affinity isolation with whole virus or viral components, antireceptor antibodies or antiidiotypic antibodies to the viral binding sites, chemical cross-linking, DNA transfection, and somatic cell hybrids have been used in attempts to identify specific binding components (reviewed in Sommerfelt and Marsh, 1988). However, few specific virus receptors have been unambiguously identified and still fewer demonstrated to play a functional role in virus entry in uiuo. A list of molecules reported to bind viruses (or viral components) that infect humans and other vertebrates is given in Table I. The evidence is not clear-cut in all the cases listed; hence the information should be viewed with some caution. Taken together the data indicate, however, that viruses have developed a large variety of strategies by which they recognize and bind to their host cells. They can use proteins, lipids, or oligosaccharides as receptors. It is not surprising that many of the receptors are proteins involved in ligand binding, endocytosis, and cell recognition.
A . High-Specificity BindinglNarrow Host Range
Viruses such as HIV-1 and 2 and Epstein-Barr virus (EBV) bind to components expressed only on a limited number of cell types. In the .case of HIV, the virions bind to CD4 molecules expressed on helperlinducer T lymphocytes, cells of monocyte-macrophage lineage, and certain cells transfected with the CD4 gene (Dalgleish et al., 1984;Klatzmann et al., 1984;McDougal et al., 1986;Maddon et al., 1986). The specificity of binding determines viral tropism. Cells that lack receptors for a given virus, or where the binding sites are blocked, are resistant to infection. Replication may occur in such cells if the viral genome is introduced artificially." The CD4 antigen is presently the best characterized virus receptor protein. The molecule has been cloned, sequenced and, by expression in receptor-negative cells, confirmed as the HIV receptor (Maddon et al., 1986). Soluble forms of the molecule, which lack the cytoplasmic and transmembrane domains, have been generated and shown to inhibit virus infection in uitro (Lasky et al., 1987) and the epitopes involved in virus binding are being mapped with increasing resolution (Sattentau and Weiss, 1988).
B. High -Specificity BindinglBroad Host Range
Orthomyxoviruses and paramyxoviruses bind with considerable spedficity to sialic acid, often in distinct linkage configurations. The * Receptor expression is not, however, the only factor that determines cell tropism.
Postpenetration events, such as cell-specific expression of transactivating factors, also determine whether a virus is replicated in a particular cell (Weiss, 1984).
binding site for sialic acid in influenza virus HA has been identified in the X-ray structure as a highly conserved depression in the HA1 subunit a t the tip of the spike protein (Wilson et al., 1981). Sialic acid is frequently the terminal saccharide residue on cell surface glycoproteins and glycolipids. Consequently, myxoviruses can infect a wide range of cells through a variety of glycoprotein or glycolipid receptors. Sialic acid has also been implicated in the binding of picornaviruses, papovaviruses, reoviruses, and adenoviruses (Burness, 1981). Competition experiments with the lectin concanavalin A point to a role for other saccharide moieties in human rhinovirus-14 (HRV-14) binding (see Rossmann et al., 1985), and complex carbohydrates such as those present heparan sulfate proteoglycan may be involved in herpes simplex virus infection (WuDunn and .
C . Broad Host Range
A number of viruses, such as the alphavirus SFV, rhabdo virus, and vesicular stomatitis virus (VSV), exhibit a broad host range in culture but do not appear to utilize carbohydrates as binding components. Semliki Forest virus (SFV) is probably able to utlize several cell surface proteins as receptors. Thus SFV binds to major histocompatibility (MHC) class I antigens on human and murine lymphoblastoid cells (Helenius et al., 1978) but it can also infect cells that do not express MHC molecules (Oldstone et al., 1980). With SFV the affinity of individual spike glycoprotein-receptor interactions appears to be low, but binding of the intact multivalent virus is virtually irreversible (Fries and Helenius, 1979;Marsh et al., 198313). Vesicular stomatitis virus has been reported to bind preferentially to negatively charged phospholipids (Mastromarino et al., 1987;Schlegel et al., 1983).
E . Viral Proteins as Virus Receptors
Membrane proteins of enveloped viruses may themselves function as virus receptors. Normally, Madin-Darby canine kidney (MDCK) cells-a cell line that grows as a polarized monolayer in culture-can only be infected with VSV through the basolateral domain. The receptors for these viruses are not expressed on the apical domain. When MDCK cells are infected with influenza virus, HA is expressed apically and the cells become susceptible to VSV through the apical domain. The HA binds to the sialic acid in the envelope protein (G) of VSV and functions as a VSV receptor (Fuller et al., 1985). Similarly, baby hamster kidney (BHK-21) cells do not express a receptor for murine hepatitis virus (MHV-A59, a coronavirus). The cells are therefore resistant to MHV. However, they become susceptible when preinfected with influenza. Thus, virus infection can facilitate secondary infections by other virus in uitro (Fuller et al., 1985;Khelifa and Menezes, 1983).* A similar phenomenon may occur in uiuo.
F. Indirect Binding
The examples just discussed all involve direct interaction of the virus attachment sites with cell surface-binding components. Viruses may also bind to cell surface components indirectly through mediation by an intermediate molecule. The clearest example of this type of interaction occurs in antibody-enhanced viral infection. Antiviral antibodies facilitate flavivirus, myxovirus, and lentivirus infection of macrophages and other cells both in culture and in viuo by cross-linking virions to cell surface Fc receptors (McGuire et al., 1986;Peiris and Porterfield, 1979;Peiris et al., 1981;Webster and Askonas, 1980). Cytomegalovirus is believed to bind &-microglobulin, displace MHCbound &-microglobulin, and use MHC antigens to bind to the cell surface McKeating et aE., 1987). Hepatitis B virus appears to bind to hepatocytes through polyalbumin and polymeric albumin receptors (Thung and Gerber, 1984).
A curious example where an related receptor can rescue a virus is supplied by mutant Sendai viruses which lack functional receptor binding proteins: since the glycoproteins contain terminal sialic acid, they * Infection by one virus may also interfere with subsequent infection by a second.
Cells chronically producing retroviruses, and expressing viral envelope glycoproteins on their surface, down-regulate, shield, or otherwise hide the binding site. Thereby the cells become resistant to superinfection by any retroviruses that use the same cell surfacebinding sites (see Sommerfelt and Marsh, 1988).
can be internalized through the asialoglycoprotein receptor and infect cells (Markwell et al., 1982).
G . Role of Cell Surface Virus Receptors
Binding facilitates viral entry by providing the initial physical association between a cell surface and the virion. Whether virus receptors play any further role in entry depends on the case. Influenza virus, for instance, does not require sialic acid containing receptors for penetration. The receptors are needed for binding and endocytosis. In the case of alphaviruses and rhabdoviruses it is also known from in uitro studies that the fusion activity which underlies penetration in endosomes is not dependent on the presence of specific surface receptors. Experiments in which avian retroviruses bind nonspecifically to receptordeficient cells but fail to infect suggest, however, that additional essential roles for receptors may exist in other cases (Notter et al., 1982). The receptor for poliovirus may, for instance, be needed for the correct acid-induced conformational change. While binding and endocytosis is required for penetration by pH-dependent viruses (Section V), no evidence for a more direct role for the receptor in penetration is available.
A . Background on Endocytosis
All nucleated interphase cells express continuous, high-capacity endocytic activity whereby components of the medium are internalized in membrane-bound vesicles. Constitutive endocytosis occurs primarily through clathrin-coated vesicles (-100 nm diameter), which form by the invagination of specialized coated-pit domains of the plasma membrane. Coated vesicles mediate efficient uptake of nutrient carriers, growth factors, peptide hormones, immune complexes, and other physiological ligands that bind to specific receptors expressed on the cell surface (see Brown et al., 1983;Goldstein et al., 1985;Helenius et al., 1983). In addition, solutes and small particles ( 4 0 0 nm diameter) are internalized nonspecifically and less efficiently in the fluid content of the vesicles (fluid-phase endocytosis) (see Steinman et al., 1983).
Following internalization, the clathrin coat is removed and the vesicles fuse with organelles of the endosome compartment. Endosomes are the station for sorting ligands and receptors internalized by coated vesicles and for regulating endocytic membrane traffic (see Helenius et al., 1983). They are responsible for dispatching molecules to lysosomes and for recycling others to the plasma membrane on the Golgi apparatus (see Goldstein et al., 1985;Helenius et al., 1983;Mellman et al., 1986). Some incoming molecules are proteolytically processed in endosomes (Diment et ul., 1988). Others undergo conformational changes.
The mechanisms of these sorting reactions are not well understood. However, acidification of endocytic organelles through membranebound H + -ATPases is one important element (Mellman et ul., 1986;Tycko and Maxfield, 1982). Endosomes are the first acidic compartment in the endocytic pathway (Fuchs et al., 1987). From a pH of 6.2 in early endosomes the pH decrease to approximately 5.0 by the time ligands reach the terminal compartment of the pathway, the lysosomes. Agents, such as acidotropic weak bases (ammonium chloride, chloroquine, amantadine, methylamine) and carboxylic ionophores (monensin and nigericin), which raise the pH of acidic organelles,* disrupt endosomal function, prevent the dissociation of receptorligand complexes, and inhibit the recycling of receptors Maxfield, 1982;Mellman et al., 1986;Ohkuma and Poole, 1978). Acidification is also important for lysosome function. Lysosomes provide a hydrolytic environment where ligands, and receptors to be down-regulated or turned over, are degraded. The hydrolytic enzymes are active at low pH and at least partially inhibited by reagents that raise the pH of acidic organelles (Ohkufna and Poole, 1978). Some hydrolytic enzymes are already encountered and active in the endo-soma1 compartment, where they modify and degrade certain incoming ligands (Diment et al., 1988).
In addition to constitutive endocytosis, cells can internalize large particles (>200 nm diameter) such as yeast or bacteria. Termed phagocytosis, this form of endocytosis is usually the property of specialized phagocytic cells such as macrophages (see Steinman et al., 1986), but with an appropriately opsonized particle it can occur in most cell types. Phagocytosis is ligand-induced, receptor-dependent, and occurs through an actin-based mechanism that can be inhibited by cytochalasin B. Following invagination, phagocytic vesicles are acidified in a similar way to endosomes and fuse with lysosomes to form degradative phagolysosomes.
* In nonprotonated form, weak bases readily diffuse across membranes. In acid compartments they become protonated, accumulate, and increase the vesicular pH. Carboxylic ionophores have a similar effect but achieve this by exchanging H + for Naf or K + (Maxfield, 1982;Ohkuma and Poole, 1978).
B . Many Viruses Are Endocytosed
Many viruses, including alphaviruses, orthomyxoviruses, paramyxoviruses, rhabdoviruses, retroviruses, herpesviruses, and a number of nonenveloped viruses, have been shown to be internalized by endocytosis (Dales, 1973;Helenius et al., 1980b;Matlin et al., 1982a,b). Morphological evidence for this process, originally termed "viropexis," indicates that uptake usually occurs through clathrin-coated vesicles by receptor-mediated endocytosis, but other modes of endocytosis may also be operational.
Detailed experiments with SFV have shown that the endocytosis is very similar to the receptor-mediated uptake of physiological ligands (see Fig. 1). At 0°C virions are bound over the entire cell surface but are not internalized. On warming to 37"C, the virions are relocated to coated pits and internalized in coated vesicles. Uptake is not induced by the virions, is independent of multiplicity, and occurs through the constitutive endocytic activity of the host cell (Marsh and Helenius, 1980;Marsh et al., 1983a). The SFV particles (65 nm diameter) can be contained within endocytic coated vesicles. They are internalized with a time course similar to that of serum low density lipoprotein (LDL) (the half-time of bound viruses on the cell surface is 5-10 minutes) Marsh and Helenius, 1980;Schmid et al., 1989). In BHK cells up to 3000 SFV particles can be internalized in coated vesicles per minute (Marsh and Helenius, 1980). The injection of anticlathrin antibodies into the cytoplasm of cells inhibits the internalization of SFV and blocks infection (Doxsey et al., 1987).
Morphological observations indicate that many other viruses follow essentially the same pathway, although the kinetics of endocytosis can vary apparently as a function of particle size. Influenza virus (100 nm diameter) and VSV (150 nm long, 50 nm diameter) have a half-time of 10-15 and 30 minutes, respectively (Matlin et al., 1982a,b).
Although clathrin-coated vesicles are clearly involved in the uptake of most enveloped and nonenveloped viruses, other endocytic mechanisms have been encountered. Morphological experiments with influenza and Sendai viruses show that, in addition to coated vesicles, virions are occasionally seen in noncoated membrane vacuoles resembling phagocytic vesicles (Dourmashkin and Tyrell, 1974;Hayward, 1987;Matlin et al., 1982a). Although herpesviruses are often observed close to coated pits, EBV (250 nm diameter) is reported to enter B lymphocytes in noncoated vesicles (Nemerow et al., 1985). The endocytic uptake of herpesvirus may at least in part, occur by a phagocytic 2. Internalization. enters by receptor-mediated endocytosis, and penetration by membrane fusion is triggered in the early endosome. Attachment occurs preferentially to microvilli followed by lateral movement of the virus to coated pits. Exposure to a pH of <6.2 and penetration of virus occurs almost immediately after internalization. If mutant viruses with a lower pH threshold of fusion or cell mutants with a n endosomal acidification defect are used, penetration occurs from late endosomes Schmid et al., 1988. Recent studies on the site of replication of the viral RNA suggest that it occurs on the cytoplasmic surface of the lysosomal membrane (Froshauer et al., 1988). Frequently, a connection between the lysosome and the rough endoplasmic reticulum is seen, suggesting that viral RNA synthesis, the synthesis of nonstructural and structural protein, as well as nucleocapsid assembly may occur in the same extensive structure suspended over the space between the cytopathic vacuoles and the lysosomes (Froshauer et al., 1988). mechanism, since it is inhibited by cytochalasins (Rosenthal et al., 1985). Whereas poliovirus seems to enter mainly by coated pits, HRV-2 may infect cells via a coated vesicle-independent endocytic mechanism (Madshus et al., 1987). Papovaviruses are known to be internalized mainly in noncoated vesicles (Maul et al., 1978;Mackay and Consigli, 1976). They are so small and tight fitting that it almost looks like the viral particles would be budding from the extracellular space into the cell. All these mechanisms require further study, as they may provide new insights into the versatility of cellular endocytosis.
C. Penetration from Endosomes
Like other ligands internalized by receptor-mediated endocytosis, internalized virions are delivered to endosomes. Kinetic, morphological, biochemical, and cell fractionation experiments have demonstrated that SFV, Sindbis, and influenza virions penetrate into the cytoplasm by fusing with the limiting membrane of endosomes Marsh et al., 1983a;Richman et al., 1986;Talbot and Vance, 1982;Yoshimura and Ohnishi, 1984). Internalized viruses have been shown to be infectious , and fusion of SFV with endosome membranes has been observed morphologically (Helenius, 1984). Particularly convincing images of viruses fusing out of endosomes have been obtained for frog 3 viruses (Braunwald et al., 1985).
For alphaviruses, myxoviruses, rhabdoviruses, and many other viruses, endocytosis is essential for productive entry. The acidic conditions in endosomes trigger the reactions which lead to fusion of the viral and endosomal membranes. The precise time and location of penetration is determined by the pH dependence of the fusion activity. Viruses in which fusion is triggered at a pH close to neutral (e.g., SFV, pH 6.2) fuse in so-called early endosomes Schmid et al., 1988). Viruses that require more acidic conditions, such as SFV,,,-, (pH 5.3) and influenza X-31 (pH 5.3), fuse in so-called late endosomes (tll2 = 20-35 minutes) Schmid et al., 1988;Stegmann et al., 1987b). The differences reflect the gradual increase in acidity within the endocytic pathway.
In all cases studied, fusion appears to occur in endosomes prior to delivery to the lysosomes Marsh et al., 1983a;Stegmann et al., 1987b). Viruses, or viral components that are delivered to lysosomes, are rapidly inactivated and degraded (Marsh et al., 1983a), which may explain why lysosomes are usually not able to support viral penetration. One exception could be reoviruses, where specific proteolytic cleavages are needed for infectivity (Sturzenbecker et al., 1987).
The dependence on endocytosis and subsequent exposure to acid conditions presents a point at which penetration can be blocked. By raising the pH of endosomes and lysosomes, weak bases and carboxylic ionophores block the acid-induced changes essential for fusion and translocation of many viruses (Gollins and Porterfield, 198613;Helenius et al., 1982;Matlin et aE., 1982a,b;Richman et al., 1986;Yoshimura and Ohnishi, 19841." As a result, the viral genome is not released into the cytoplasm. The efficacy of weak bases is related to the pH threshold of fusion. Viruses that fuse at pH 6.0 (SFV, VSV) and 6.5 (West Nile) require higher concentrations of weak bases to be effectively neutralized than viruses that fuse at pH 5.3 (influenza X-31) .
Weak bases and carboxylic ionophores do not affect binding, endocytosis, and transport of prebound virions to endosomes, nor do they affect the fusion reaction directly. In some cases weak base-induced blocks to infection can be bypassed by briefly incubating cells, with bound virus, in low-pH medium. This treatment induces fusion of virions at the cell surface, and, even in the presence of the inhibitors, results in infection of the cell (Helenius et al., 1980b. The effects of acidotropic weak bases are, moreover, very sensitive to a lowering of the pH in the medium, which has to be controlled throughout the experiment Yoshimura et al., 1987).
D. What Aduantages Does Endocytosis Offer a Virus?
Internalization through the endocytic pathway usually results in the delivery of ligands to the hydrolytic lysosome compartment, an environment in which they are rapidly inactivated and degraded. Since entry via the endocytic pathway carries such a risk for viruses, it must offer significant balancing advantages. Some potential advantages * Note that the effects of weak bases and carboxylic ionophores on virus entry should not be confused with effects on other steps in replication. Weak bases and carboxylic ionophores affect all acidic compartments in the cell and are not specific to endocytic organelles. For example, transport of newly synthesized enveloped virus membrane proteins to the cell surface can be inhibited through the effects of these agents on the exocytic pathway (see Cassell et al., 1984;Mellman et al., 1986;Tartakoff, 1983). When using these agents as diagnostic inhibitors for acid-dependent entry, it is important to assay for early events such as uncoating or RNA replication rather than late events such as the production of progeny virus.
have been discussed previously Sommerfelt and Marsh, 1988). By requiring endocytosis, virions can, for instance, ensure that they enter metabolically active cells capable of supporting replication. For pH-dependent viruses, endocytosis is required to deliver the virions to an acidic environment.
Another likely advantage is based on the fact that viral genome may need to be delivered to a specific region of the cytoplasm, such as the perinuclear region, to initiate infection. Entry through the endocytic pathway provides a natural means for achieving such relocalization. Evidence suggesting that productive infection of some cell types only occurs efficiently from endocytic vesicles comes from experiments with pH-d%pendent enveloped viruses. As already mentioned, fusion can be induced at the plasma membrane by briefly acidifying the medium (Helenius et al., 1980b). In baby hamster kidney (BHK), fusion a t the cell surface results in infection (Helenius et al., 1980b;. In several other systems, however, including VSV and fowl plague virus (an avian influenza virus) in MDCK cells, West Nile virus on murine macrophagelike cells, and SFV on Chinese hamster ovary (CHO) cells, fusion at the cell surface does not cause infection (Gollins and Porterfield, 1986a;Matlin et al., 1982a,b;M. Marsh, unpublished observations), although it is clear that the nucleocapsids have been delivered into thc cytosol (Matlin et al., 1982a). The most likely reason is that the incoming nucleocapsids are trapped in the space between the plasma membrane and the underlying membrane cytoskeleton, and cannot move into the cytoplasm proper. The result implies that endocytosis in many cell types may be essential for transporting viruses through obstacles within the cytoplasmic compartment, and deliver the genome into the central perinuclear area. In neuronal cells, which have long cytoplasmic projections, penetration is likely to occur after transport of viruses to the cell body.
A. Fusion Proteins
The membrane fusion activity of enveloped animal viruses is catalyzed by the spike glycoproteins (Table 11). While different in their detailed structure, the fusion proteins so far identified in different virus families have several properties in common. They are, as a rule, homooligomeric or heterooligomeric transmembrane glycoproteins with a combined molecular weight in the 200K-400K range. The N termini are located in the external (ecto) domain. Since the trans- (1977) membrane domains and the C-terminal, cytoplasmic domains are relatively small, the bulk of the polypeptide is external and visible by high-resolution electron microscopy as spikelike protrusions on the surface of the virus envelope. Most viruses only need a single type of spike glycoproteins for fusion, but it has been suggested that others, such as paramyxoviruses, may need the cooperative action of two (see later). Each virus particle contains 80 or more spike glycoproteins. Whether more than one individual spike is required in each fusion event is unclear.
The fusion factors are invariably glycosylated, and many of them contain covalently bound palmitic acid. While addition of the N-linked sugars is often required for correct folding of.the nascent fusion proteins and transport to the surface (see Rose and Doms, 1988;Schlesinger and Schlesinger, 19861, the carbohydrate moiety has no direct role in fusion. The role of fatty acid acylation is less clear. Mutants and naturally occurring strains of VSV, which lack fatty acylation in their G protein, are not impaired in their fusion activity, showing that the fatty acid is dispensable (Rose et al., 1984). In the case of influenza HA it has been reported, however, that removal of palmitic acid using hydrolysis with hydroxylamine inhibits fusion activity, suggesting that it may play a role (Lambrecht and Schmidt, 1986). It is not clear in this case whether the treatment modified the spike trimers in other subtle ways. In VSV G protein the fatty acid is coupled by a thioester bond to a cysteine residue in the cytoplasmic tail (Rose et al., 1984). A similar location for the palmitic acid is likely for other acylated spike glycoproteins.
Activating proteolytic cleavages catalyzed by cellular proteases late in the secretory pathway are critical for the fusion activity of a t least orthomyxoviruses, paramyxoviruses, coronaviruses, and retroviruses (see Choppin and Compans, 1975;Sturman et al., 1985;White et al., 1983). The new N termini are frequently hydrophobic, and some have been shown to be important in fusion (Boulay et al., 1987a;Gething et al., 1978Gething et al., , 1986Harter et al., 1988). In influenza HA strains the "fusion sequence" at the newly created N terminus of HA2 is highly conserved (Wiley and Skehel, 1987). While the uncleaved precursor hemagglutinin, called HAO, can undergo an acid-activated conformational change similar to that observed for the mature HA, it does not expose a hydrophobic moiety with which it can interact with membranes (Boulay et al., 1987a). It is thus apparent that the hydrophobic "fusion sequence" must be present as a free N-terminal for fusion activity.
Proteolytic activation is not required for all viral fusion proteins. A precursor subunit in the spike glycoprotein of SFV (p62) undergoes a late cleavage t o the mature glycopolypeptides E2 and E3, very similar to those listed earlier, but this cleavage is not needed for fusion activity (Cutler and Garoff, 1986;Edwards et al., 1983). Rhabdovirus G proteins do not undergo any posttranslational cleavages, but they are still very efficient fusogens.
Aside from the general similarities, fusion factors constitute quite a heterogeneous group of proteins. Sequence homologies are seldom observed, and not all have N-terminal hydrophobic fusion sequences. Some fusion proteins have receptor-binding activities, others do not. While fusion proteins are generally oligomeric, some contain a single type of subunits; others are heterooligomeric. The mechanisms of fusion also display distinct differences. It seems clear that, as a group, viral fusion factors do not have their origin in a single ancestral gene. Perhaps they evolved from different cellular surface receptors or cell recognition proteins, and their initial functions may have been to provide efficient and specific attachment to cells. The fusion function may have evolved later. Since many different proteins display latent fusion activities, particularly when denatured by extremes of pH, the structural requirements for this activity may not be very stringent (Stegmann et al., 1988).
B. Influenza HA and the Mechanism of Fusion
Most biological membranes have the capacity to fuse with other membranes. During cell life, membrane fusion is a frequent activity in endocytosis, exocytosis, and numerous other cellular functions. Fusion events in biological systems occur between specific membranes, at specific times, and at defined locations. Since they are strictly regulated, it is usually assumed that proteins are involved.
So far, the best-studied fusion proteins are the viral spikes from influenza virus, Sendai virus, SFV, HIV-1, and VSV. Their properties and activities have been described in numerous papers, and have been the topic of recent reviews, by White et al. (1983), Wiley and Skehel (1987), Stegmann et al. (1989), and Wharton (19871, and by several other authors in books on membrane fusion (Hoekstra and Wilschut, 1988;Ohki et al., 1989). Here we will restrict the discussion to some general principles, focusing mainly on influenza HA.
Influenza virus HA constitutes one of the most efficient fusogens known. The X-ray structure of the HA ectodomain in its inactive neutral pH form is known (Wilson et al., 198l), and its biosynthesis and assembly have been extensively analyzed. It is a homotrimer (3 x 84 kDa) in which every protomer consists of two disulfide-linked glyco-polypeptides, HA1 and HA2, derived by proteolytic cleavage from a common precursor HAO. HA2 constitutes the transmembrane subunit and plays a key role in fusion (Gething et al., 1986;Harter et al., 1988;Wharton, 1987). HA1, which is entirely outside of the membrane, is responsible for binding the virus to sialic acid residues on the surface of the host cell.
HA is the only protein component required for fusion activity (White et al., 1982a). To be active it must be an integral component of one of the fusing membranes (White et al., 1982a) and (in exceptional cases) when present as a polyvalent, water-soluble rosette (Wharton, 1987). The lipid compositions of the HA-containing membrane and the target membrane are not very critical (Maeda et al., 1981;Stegmann et al., 1987a;. Thus, HA can be reconstituted in functional form into artificial membranes without loss of activity, and it mediates fusion with virtually any type of biological or artificial membrane (Kawasaki et al., 1983;Maeda et al., 1981;Stegmann et al., 1987a;. Only certain liposomes with nonbiological lipid compositions have been found to be poor targets (Stegmann, 19871, and drastic enzymatic modification of the viral lipids has been shown to be inhibitory (Huang and Uslu, 1986). The target membranes do not need to contain any sialic acid-carrying molecules or proteins (Maeda et al., 1981;. The fact that sialic acid-containing receptors such as gangliosides sometimes increase the efficiency of fusion may reflect improved attachment of the two membranes to each other rather than receptor involvement in fusion (Stegmann et al., 1988).
The fusion activity of HA is acid-activated and involves a major conformational change in the HA trimer. The conformational change, studied in several laboratories using a variety of techniques (see Doms et al., 1985;Stegmann et al., 1988;Wharton, 1987;White and Wilson, 1987;White et al., 1983), involves a partial dissociation of the trimer. The top domains of the HA1 chains dissociate from each other. A previously hidden hydrophobic moiety, which includes the N termini of the HA2, is exposed, and additional changes occur in the interface between the three HA2 subunits (Doms and Helenius, 1986). Figure 2 depicts schematically our interpretation of the data. The half-life of the irreversible change is about 15 seconds a t pH 5 and 37°C; at lower temperatures the change is slower and less complete. There are indications that the change occurs in two steps. The first involves a change at the tip of the molecule and the second a change further down. However, studies with isolated bromelain fragments, which comprise NEUTRAL FORM ACID FORM
FIG. 2. The acid-induced conformational change in influenza HA. It is generally
agreed that acid exposure leads to a partial dissociation of the ectodomain of the HA spike trimer, that a hydrophobic moiety involving the N termini of HA2 (here shown in black) is exposed. Though based on a variety of biochemical, genetic, immunochemical, and morphological data, the proposed structure of the acid HA shown is, however, still highly speculative. the trimeric ectodomain (and approximate the intact HA in their acid conversion), have indicated that exposure of the hydrophobic fusion peptide may also be an early event (White and Wilson, 1987). The conformational change in the HA trimer is cooperative (Boulay et al., 1987b), and we have indirect evidence that fusion may involve more than one spike trimer (Doms and Helenius, 1986). It has been suggested on the basis of radiation inactivation analysis that the fusionactive unit in influenza virus fusion with red blood cells is no larger than about 60 kDa (Bundu-Morita et al., 1987). The dose-response curve is complex, however, and other interpretations are possible.
Despite extensive work in analyzing the conformational change in HA, a single generally accepted model does not exist. For alternative interpretations of its nature and its relevance for fusion, see Doms and Helenius (1988), Landsberger and Schegal (1986), and Ruigrok et uZ. (1986). Reports suggesting that influenza can fuse at neutral pH have appeared in the literature (Haywood and Boyer, 1982;Stegmann et al., 1985Stegmann et al., ,1986). One of them was shown to be an artifact due to the use of cardiolipin vesicles as targets (Stegmann et al., 1986). The highly charged cardiolipin membranes are known to serve as efficient fusion vesicles for a large number of proteins including many with no known physiological fusion activity (see Stegmann et al., 1989). The ac-cumulation of positively charged ions (including protons) close to the membrane, according to the electrical double-layer theory of Gouy and Chapman, may a t least in part explain the effect. Too little quantitative data are available to evaluate properly another report of neutral pH fusion between influenza virus and ganglioside-containing vesicles (Haywood and Boyer, 1982).
The detailed mechanism of HA-mediated fusion remains elusive. The view that we favor is as follows: When exposed to a pH of 5.0-5.6 (depending on the virus strain) in the late endosomes, the HA undergoes the conformational change. A hydrophobic interaction between the spikes and the target membrane is established through insertion of the exposed hydrophobic moieties into the target membrane. This results in attachment of the virus to the target membrane and close apposition between the two membranes. The close proximity of the two membranes results in local dehydration of the membrane surface, which in turn leads to a lowering of the hydration force, which normally prevents close apposition of polar surfaces in aqueous solution (Rand, 1981). As a consequence of local dehydration and a change in the local lipid configurations induced by the inserted polypeptide, fusion of the outer bilayer leaflet of the viral membrane and the inner bilayer leaflet of the endosome occurs. The fusion with the outer leaflet of the endosome and the inner leaflet of the viral envelope membrane follows, resulting in the release of the nucleocapsid into the cytosolic compartment. Fusion between two membranes obviously needs to occur only at a single focal site. This model predicts that the HA molecules are associated-at least transiently-as integral proteins in both fusing membranes and that the insertion of protein into the target membrane and the effect on surface hydration brings about a change in local lipid orientation.
It should be emphasized that other mechanisms are conceivable and that it is unclear what exactly is needed to induce fusion of any type of biological membrane (Blumenthal et al., 1987;Rand, 1981). Other viruses and viral fusion factors seem to differ in their fusion properties in significant ways. Some depend on the presence of cholesterol in the target membrane (Kielian and Helenius, 1984;, some do not have distinct hydrophobic fusion sequences (Woodgett and Rose, 1986), and some may, unlike HA, undergo reversible conformational changes at acid pH (Doms et aZ., 1987). As already mentioned, several viral proteins differ from HA in that they do not require acid pH for fusion (Tables I1 and 111). Additional information on these matters will be presented next, when a selection of individual virus families are described. Superti et al. (1984Superti et al. ( , 1987a, Tsing and Superti (19841, White et al. (1983) (1985), Gonzalez-Scarano et al.
A . Enveloped Viruses 1 . Alphaviruses
The pathway of entry and penetration of toga(a1pha)viruses has been studied in considerable detail. The work on SFV now serves as a general paradigm for viruses with endocytic entry pathways. It was the first virus to be shown to have acid-activated penetration from endosomes and to fuse with liposomes and cell membranes in an acidtriggered fashion (Helenius et al., 1980b;Marsh et al., 1983a;. The major advantages of this virus as a model system is the extremely high pfu to particle ratio, the ease of radioactive labeling to very high specific activities, and the wide host cell specificity. Since the literature on alphavirus entry has been recently reviewed (Kielian and Helenius, 19861, and some of the material already described here, a relatively short description will suffice. Although restricted to a few host organisms in the wild, these arthropod-borne, positive-stranded RNA viruses infect a wide range of cells in tissue culture. Some have been shown to bind to MHC antigens and to other surface proteins (Table I), but the true physiological receptors are not known. Endocytic internalization occurs by receptormediated endocytosis through clathrin-coated pits and vesicles (Helenius et d , 1980b;Marsh and Helenius, 1980). This has not only been demonstrated at the morphological level, but injection of anticlathrin antibodies into the cytosol has also been shown to inhibit endocytosis and infectivity (Doxsey et al., 1987). Internalized virions are rapidly and efficiently delivered in intact form to early endosomes where the mildly acid conditions (pH <6.2) trigger an irreversible conformational change in the viral fusion proteins, thus inducing membrane fusion and penetration (Edwards et al., 1983;Helenius, 1984;Kielian and Helenius, 1985;Schmid et al., 1988;Talbot and Vance, 1982;. The overall pathway of SFV entry is shown in Fig. 1. It is important to note that up to 60% of cell-associated viruses are actually able to release their RNA into the cytosol Kielian and Helenius, 1984;Marsh et al., 1983a), and that the viral RNA replication appears to take place on the surface of the endosomal and lysosomal membranes (Froshauer et al., 1988). The endosomes and lysosomes are modified in the process to morphologically distinct organelles called cytopathic vacuole I (Froshauer et al., 1988;Grimley et al., 1972).
Alphavirus penetration is blocked by weak bases and carboxylic ionophores Marsh et al., 1982;Miller and Lenard, 1981;Talbot and Vance, 1982). Semliki Forest virus reaches the endosomes in the presence of these acidotropic agents, but fusion is inhibited. The virus-loaded endosomes thus obtained can be isolated, and the SFV nucleocapsid induced to penetrate out of them in uitro by adding ATP to drive the endosomal H+-ATPase and decrease the endosomal pH (M. Marsh, unpublished observations). This in uitro-penetration reaction is inhibited by carboxylic ionophores and other agents that prevent acidification of the isolated organelles. Like influenza HA, the spike glycoproteins can be extracted from the viral membrane with detergents and reconstituted into lipid vesicles in fusionactive form (Scheule, 1986(Scheule, , 1987. For successful fusion of SFV with target membranes, three conditions must be fulfilled. i.) A pH of less than 6.2 is required to trigger the conformational changes that render the envelope proteins fusogenic (Edwards et al., 1983;Young et al., 1983). ii.) The target membrane must contain cholesterol (Kielian and Helenius, 1984;Scheule, 1987;, without which fusion does not occur. Cholesterol is needed for the low-pHinduced conformational change in the E l glycoprotein . iii.) Finally, the cell must have a normal membrane potential . In depolarized cells, virions are bound and internalized, and undergo acid-induced conformational changes in endosomes, but they do not fuse. This requirement remains somewhat perplexing because no pH gradient, ion gradient, or membrane potential over the target membrane is needed for SFV fusion with liposomes (Kielian and Helenius, 1984).
Since the mode of acid-activated, endocytosis-dependent entry of alphaviruses presented here is not unanimously accepted, it may be important to discuss some of the objections that have been raised. Because infection by Sindbis virus is not inhibited by cytochalasin B, Coombs et al. (1981) concluded that endocytosis is not essential for infection. However, it is known that, while this microfilament inhibitor blocks phagocytosis, it leaves coated vesicle-mediated pinocytic processes essentially unaffected (Steinman et al., 1983). Marsh and Helenius (1980) suggested, in fact, that the lack of cytochalasin inhibition was consistent with a pinocytic coated vesicle-dependent pathway of entry. The notion that acid-activated fusion only occurs after pH neutralization (Edwards and Brown, 1986) is also incorrect, as shown by recent studies on cell-cell fusion (Kempf et al., 1987). The confusion may in this case have arisen from the light-microscopic fusion assay used by Edwards and Brown (1986); they did not score fusion as such but subsequent changes in cell morphology, which take time to develop and may be more efficient after returning to neutral pH. Taken together, the overwhelming evidence contradicts the view presented by Edwards et al. (1983) that low pH causes the expression of a fusion function which merely "mimics" some intrinsically pH-independent event which occurs during normal infection.
Flaviviruses
Flavivirus entry has been studied in most detail with West Nile virus. West Nile virus is internalized by coated vesicles and trans-ported to endosomes (Gollins and Porterfield, 1985). Infection is inhibited by weak bases and is therefore presumed to involve acid-induced fusion Porterfield, 1985, 1986b). The conclusion is supported by experiments in which viruses have been induced to fuse with liposomes or the plasma membrane of target cells (Gollins and Porterfield, 1986a). Penetration is pH-dependent, with the threshold for fusion being about pH 6.5. The mechanism of entry into macrophages is of particular interest, as it is mediated by antibodies and presumably occurs via the Fc receptor (Gollins and Porterfield, 1985;Peiris and Porterfield, 1979;Peiris et al., 1981). Gollins and Porterfield (1986~) have made an important observation which may be of value for future therapeutic strategies against flaviviruses. They found that among several neutralizing monoclonal antibodies to the West Nile virus, one was able to block penetration without affecting attachment or endocytic internalization. They concluded that the antibody specifically interacted with the acidic fusionactive form of the virus and prevented penetration in endosomes. A similar report has been made for rabies virus (Dietzschold et al., 1987).
Rhabdoviruses
The entry of VSV and rabies virus into tissue culture cells resembles that of SFV and influenza virus, but less is known about it. Cell surface-bound virions are internalized through coated pits and delivered to endosomes (Matlin et al., 198213). Fusion is pH-dependent, occurring at pH 6.0 and below, and entry is inhibited by weak bases (Blumenthal et al., 1987;Matlin et al., 1982b;Miller and Lenard, 1981;Tsing and Superti, 1984) and carboxylic ionophores (M. Marsh, unpublished observations). Vesicular stomatitis virus fuses rapidly and efficiently with a variety of natural and artificial target membranes (Blumenthal et al., 1987;Matlin et al., 1982b;Mifune et al., 1982;White et al., 1981;Yamada and Ohnishi, 1986). The G protein, which exists as a trimer in the viral membrane, has been successfully reconstituted into proteoliposomes (Metsikko et al., 1986). G Protein undergoes conformational changes at low pH which seem to be unique in being reversible when the pH is returned to neutrality (Doms et al., 1987). One of the effects of the conformational change in the G protein is the stabilization of the homotrimeric structure (Doms et al., 1987). However, the effects of these changes on the conformation of the G-protein trimers and the fusion properties of the G protein are still poorly understood. The sequence of G Protein is known (Rose and Gallione, 1981), but there is no obvious hydrophobic fusion sequence. It has been suggested that the N-terminal peptide may serve such a role (Schlegel and Wade, 1985), but this is unlikely in view of more recent in uitro mutation studies (Woodgett and Rose, 1986).
. Orthomyxoviruses
The entry of influenza virus A has already been discussed extensively. Considerable information exists regarding the structure, receptor specificity, endocytosis, and low-pH-dependent fusion of these viruses. Penetration occurs from late endosomes (Yoshimura et al., 1982) following uptake in coated and noncoated vesicles (Dourmashkin and Tyrell, 1974;Matlin et al., 1982a). The low endosomal pH triggers the fusion reaction (Stegmann et al., 1985;White et al., 1982b) and infection is blocked by acidotropic weak bases (Matlin et al., 1982a;Yoshimura et al., 1982).
Paramyxouiruses
Paramyxoviruses, such as Sendai and Newcastle disease virus, are well-known fusogenic viruses (see Choppin and Compans, 1975;Hayward, 1987;White et al., 1983). Unlike the fusion reactions already described for alphaviruses, rhabdoviruses, and orthomyxoviruses, fusion is pH-independent, occurring with almost equal efficiency throughout the relevant pH range. Consequently, paramyxovirus fusion can and does occur a t the cell surface. During infection of cells with Sendai virus, intact virions can also be seen in coated vesicles and are presumably taken into endosomes. It remains to be demonstrated whether both or only one of the pathways are infectious.
Fusion is mediated by the F protein, which has a hydrophobic "fusion sequence'' at the N terminus of the F1 subunit (Gething et al., 1978). Only the proteolytically cleaved protein is active (Homma and Ohuchi, 1973;Scheid and Choppin, 1976). Several recent reports indicate that for maximal fusion, a second glycoprotein, HN, must also be present (Ohki, 1988). On the other hand, fusion can take place without receptor molecules in the target membrane (Citovsky and Loyter, 1985). Cholesterol in the target membrane provides for enhanced fusion activity (Asano and Asano, 1985;Citovsky et al., 1988;Hus et al., 1983;Yoshimura et al., 1987). Even then, the fusion activity of Sendai and other paramyxoviruses is slow and inefficient compared to acidactivated viruses.
Retroviruses
The retroviruses are a diverse group of viruses that can induce tumors or immunodeficiency diseases in humans and animals. A common feature is that they contain a reverse transcriptase which, following penetration, converts the positive-sense RNA genome into a doublestranded DNA which can integrate into the host cell genome. Entry of the viruses occurs by membrane fusion, which is mediated by fusogenic envelope glycoproteins (see White et al., 1983). These proteins have a similar overall structure to the HA of influenza virus, although they differ from it in primary sequence. The proteins are synthesized as single-chain polyprotein precursors in the endoplasmic reticulum of infected cells. En route to the cell surface the proteins are trimerized (Hunter and Einfeld, 1988) and proteolytically cleaved. One of the resulting glycopolypeptides, the N-terminal gp120 of HIV for example, is not anchored in the bilayer and carries the receptor-binding site. It is analogous to the HA1 polypeptide. The other portion (gp41 for HIV) is a transmembrane protein and contains the newly created N terminus, which is highly hydrophobic. It is analogous to HA2. As with HA, proteolytic activation and the hydrophobic nature of the N terininus are essential for the fusion activity (Kowalski et al., 1987).
The fusion activities of retroviruses can be acid-dependent or pHindependent (Table 111). The acid-dependent viruses include the ecotropic murine leukemia viruses, which are internalized into host cells by endocytosis and inhibited by weak bases (Anderson and Nexg, 1983) and murine mammary tumor virus (MMTV), which fuses cells a t <pH 5.3 (Redmond et al., 1984). Many other retroviruses-in particular those that infect human cells-exhibit pH-independent fusion. This has been demonstrated most clearly for HIV-1 and HIV-2 (McClure et al., 1987;Stein et al., 1987). HIV fuses at the cell surface and entry is not affected by weak bases and carboxylic ionophores (McClure et al., 1987;Stein et al., 1987). Virus particles are also observed in endocytic coated pits and coated vesicles (Bauer et al., 1987;D. Pauze, personal communication). It remains to be demonstrated whether either one or both of these routes leads to productive infection.
The CD4 receptor for HIV-1 and HIV-2, normally expressed on Thelper/inducer cells, functions as a virus receptor when expressed in human CD4-negative HeLa cells. Murine NIH 3T3 cells expressing CD4, however, bind HIV-1 but remain resistant to infection (Maddon et al., 1986). An additional complication is suggested by experiments with VSV (HIV-1) pseudotype viruses: pseudotypes containing the VSV genome and the HIV envelope infect HeLa-CD4 cells but fail to replicate the VSV genome in NIH 3T3-CD4 cells. The block in these murine cells appears to be in entry, reflecting an important, and as yet unresolved, postbinding entry event.
Herpesviruses
Virions of the herpes group are considerably more complex than the viruses just discussed. They are believed to carry a t least five antigenically distinct glycoproteins. Penetration occurs by membrane fusion. The gB protein is essential for entry and appears to be involved in fusion but not in binding (Ali et al., 1987). A role in fusion has also been suggested for the gD protein (Ali et al., 1987;Campadelli-Fiume et al., 1988;Fuller and Spear, 1987;Sarnieto et al., 1979). Morphological studies indicate that virions are internalized, and that this may occur in coated vesicles. However, fusion can either occur at the plasma membrane or following endocytosis, and it remains unclear whether both routes give rise to productive infection (Campadelli -Fiume et al., 1988).
The receptor for EBV, the CR2 complement receptor, is expressed primarily on B lymphocytes. The virions are up to 250 nm in diameter and thus considerably larger than the average coated vesicle. Epstein-Barr virus is, indeed, observed to enter into noncoated vesicles, possibly by phagocytosis (Nemerow et al., 1985;Tanner et al., 1987). Cytomegalovirus is believed to require exposure to low pH, as infection can be blocked by weak bases (J. McKeating, personal communication).
B. Nonenveloped Viruses
Compared to enveloped viruses, less is known about the penetration and uncoating of nonenveloped viruses. Endocytic uptake of adenoviruses, reoviruses, polyomaviruses, picornaviruses, and other viruses has been observed both biochemically and morphologically (Dales, 1973(Dales, ,1978Griffiths and Consigli, 1986;Silverstein and Dales, 1968), and a requirement for exposure to acid pH is inferred for several nonenveloped viruses from experiments in which infection is inhibited by acidotropic weak bases or carboxylic ionophores (Madshus et al., 1984a-c;Pastan et al., 1986;Stuzenbecker et al., 1987). The general mechanism of penetration through a cellular membrane probably involves a conformational change in the viral particle which confers on it hydrophobic, membrane-binding properties. This change may allow the viruses to become embedded in the cellular membrane and provide a mechanism for releasing the genome through the membrane.
. Picormviruses
A deep "canyon" or pit on the surface picornaviruses has been proposed as the receptor binding site. The amino acids lining the cavity are more conserved between the various viruses than other surface residues (Rossman and Palmerberg, 1988). Endocytic uptake of various picornaviruses has been observed morphologically and analyzed biochemically (Dales, 1973;Mandel, 1967;Zeichardt et al., 1985). It seems to be required for infectious entry of poliovirus type 1 and rhinoviruses. Infectivity is blocked by acidotropic amines, carboxylic ionophores, and reagents that inhibit endocytosis or endosome acid-ification (2-deoxy-~-glucose, NaN,) (Madshus et al., 1984a-c). The block can be bypassed if cells with bound viruses are briefly incubated at low pH (Madshus et al., 1984a). In isolated virions low pH induces changes in the capsid. In many cases this leads to loss of the VP4 protein and RNA, as well as to an increase in hydrophobicity and a change in isoelectric point. When polioviruses are incubated with cells or isolated membranes a similar change occurs, resulting in subviral particles that have lost the smallest structural protein VP4 and the viral RNA. It is possible that the extrusion of VP4, which is myristylated and therefore rather hydrophobic, may be significant in the translocation and mechanism (Paul et al., 1987;Chow et al., 1987). The conformational change in the virus particles also leads to exposure of the amino terminus of VP1, which helps attach the virus to lipid vesicles (Hogle, 1988). The changes are observed during normal entry into cells, and they are inhibited by weak bases, suggesting that low pH is responsible for conformational changes in the virions during penetration (Madshus et al., 1984a,b). In addition to exposure to low pH, a pH gradient across the membrane appears to be required for penetration (Madshus et al., 1984a). Thus acetic acid-induced acidification of the cytoplasm can inhibit poliovirus infection.
It has been suggested that occupation of receptor binding sites on the surface of the picornavirus capsids might facilitate disruption of the pentamer-pentamer contacts in the capsid and provide a port by which VP4 and RNA can exit the virion, accompanied by a change in the isoelectric point of the virus (Rossmann et al., 1985). Experiments on the pH dependence of the conformational changes observed in poliovirus type l support this idea. The changes are influenced both by temperature and association of the virus with a cell. In free virus the conformational changes are half-maximal at pH 5.0, following binding to cells they are half-maximal at pH 6.1-6.5 (Madshus et al., 1984a,b). Thus receptor binding may assist in unlocking the capsid.
Although acidotropic reagents and carboxylic ionophores inhibit the entry of poliovirus, HRV type 2, and foot-and-mouth disease virus (Carrillo et al., 1984;Madshus et al., 1984b), they do not block the entry of all picornaviruses. Infection by murine encephalomyocarditis (EMC) virus is, in fact, enhanced by monensin (Madshus et al., 1984b).
Adenovirus
The endocytosis of adenovirus has been observed morphologically, and a low-pH-dependent step has been implicated in its penetration (Dales, 1973(Dales, , 1978Pastan et al., 1986). Morphological experiments show that virions are internalized in coated vesicles and delivered to endosomes (Dales, 1978;Pastan et al., 1986). Further studies at the morphological and biochemical level suggest that the viruses are released into the cytosol in intact form by lysis of endosomes (Dales, 1978;Pastan et al., 1986). It has, moreover, been suggested that the lytic reaction is catalyzed by a viral factor which is activated by low pH (Pastan et al., 1986). Cells treated with weak bases (chloroquine and methylamine) are protected from infection, and the virions do not undergo acid-induced changes which render the viral proteins more hydrophobic (Pastan et al., 1986). As already mentioned, morphological evidence suggests that, following penetration from endosomes, the virion releases its genome into the nucleus through nuclear pores (Dales, 1978).
Reovirus
Reoviruses are internalized by receptor-mediated endocytosis, through coated pits and coated vesicles, and delivered to endosomes and lysosomes (Silverstein and Dales, 1968;Sturzenbecker et al., 1987). A variation in the theme of pH-induced changes seems to occur. The viral outer capsids are proteolytically digested to produce partially uncoated subviral particles which are able to penetrate.
Entry of reovirus types 1 and 3 into mouse L cells in inhibited by NH4C1 (Sturzenbecker et al., 1987). The inhibitor acts early in entry, within 1 hour of virus addition, and does not influence the binding, endocytosis, or intracellular routing of the virus. It blocks the digestion of the outer capsid proteins. If virions are digested with protease prior to infection, subviral particles are produced that are infectious and insensitive to NH4C1. It is unknown whether the acid-dependent protease is viral or cellular, but it is clear that digestion is an essential step in entry.
The fact that low pH and proteolytic digestion are important for entry in tissue culture cells does not mean that the same process occurs in the intact animal. Infection appears to occur through specialized M cells of the small intestine, and in order to reach these cells the viruses must pass through the acid hydrolytic environment of the stomach. How this "pretreatment" affects the virus is unclear (Wolf et al., 1983).
VIII. CONCLUSION
Interest in early virus-cell interactions is rapidly growing. For enveloped viruses some well-characterized paradigms have now been established, and relatively clear-cut diagnostic tests are available to determine general properties of entry in other systems. The main chal-lenges for the future are the identification of cell surface receptors, the characterization of the virus-receptor interactions at the molecular level, the elucidation of translocation, membrane fusion, and uncoating mechanisms, and the analysis of early cytoplasmic and nuclear events. The membrane translocation mechanisms of nonenveloped viruses remain particularly enigmatic.
Most of the successes so far have been limited to tissue culture cells. The question of entry in the whole organism, and its role in determining cell tropism and pathogenesis, must now be addressed. More work should also be invested in therapeutic and prophylactic strategies based on our knowledge about early virus-cell interactions.
It will be important to keep abreast with progress in a wide field of adjoining disciplines such as cell biology, molecular and structural biology, membrane biochemistry, and physiology. These areas provide the basis for understanding early virus-cell interactions. The exchange of information with other disciplines goes both ways: the work on virus entry has already greatly enhanced our understanding of endocytosis, membrane fusion, and organelle acidification. While each virus will turn out to have special features in respect to entry, the overall principles will probably be common to many. We are not yet at a point where these principles are known, and continued parallel work with a variety of different viruses is needed. | 2018-04-03T05:33:51.217Z | 1989-12-31T00:00:00.000 | {
"year": 1989,
"sha1": "ccb06e40316f7987e3e124131147c2a3395cd46b",
"oa_license": null,
"oa_url": "https://doi.org/10.1016/s0065-3527(08)60583-7",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a0657f299fda3f04e21a262c181bab4d6c1feb4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
265684626 | pes2o/s2orc | v3-fos-license | THE INFLUENCE OF ECONOMIC UNCERTAINTY ON FOOD SECURITY AND THE MODERATING ROLE OF TRADE OPENNESS IN DEVELOPING COUNTRIES
This research examined the influence of economic uncertainty and the moderating role of trade openness on food security in 58 developing countries from 2012 to 2021. The dynamic panel data from the two-step System GMM was utilized to accomplish this. The findings of this research revealed that economic uncertainty did not exert a significant influence on food security in developing countries. Conversely, trade openness demonstrated a positive and significant effect in enhancing food security. Trade openness strengthened the adverse influence of economic uncertainty on food security in developing countries. The estimation results show trade openness has a significant positive effect of 0.0518, economic uncertainty has a positive but insignificant effect on food security, and Economic uncertainty when moderated by trade openness, shows a significant negative effect of -0.0533. The findings indicate that economic uncertainty does not significantly affect food security in developing countries. However, trade openness positively and significantly influences food security, suggesting that increased trade openness can enhance a country’s food security. The results reveal a significant negative effect when considering both trade openness and economic uncertainty. It implies that implementing policies that reduce trade openness can enhance food security in high economic uncertainty.
Introduction
Food security holds significant importance for individual countries and the global community, with one of the key objectives of the 2030 Agenda for Sustainable Development Goals (SDGs) being to eliminate hunger, improve nutrition, and achieve food security (UNDP, 2016).However, the current state of world food security in 2021 is worsening due to the influence of COVID-19.According to a report by the Food and Agriculture Organization of the United Nations (FAO), undernourishment (SDG Indicator 2.1.1)has continuously increased.In 2019, it stood at 8%, then rose to 9.3% in 2020 and further escalated to 9.8% in 2021.The report estimates that 828 million people worldwide will face hunger in 2021.
Additionally, the consequences of COVID-19 are compounded by the Russia-Ukraine conflict, which began in February 2022.This conflict involves two major food-exporting countries, Russia and Ukraine, accounting for 34% of global wheat exports and 73% of sunflower oil exports.As a result, the food price index 2022 is projected to increase by 20.8%.In March 2022, the FAO's Food Price Index reached its highest level since 1990, averaging 159.3 points (FAO, 2022b).
Efforts to enhance global food security have prompted the exploration of various strategies, including international trade policies.Food security was a prominent topic of discussion at the G20 Summit in Bali in 2022, recognizing the issue's urgency.During the G20 food security forum, Fred Kempe, the CEO of the Atlantic Council, emphasized the importance of maintaining open international food trade.He advised against protecting food exports by individual countries, highlighting that such protectionist measures during a crisis can expedite increases in food prices (Atlantic Council, 2022).Pascal Lamy, the former head of the World Trade Organization (WTO), supports this stance, asserting that the food crisis of 2011 was not caused by international trade.Lamy argues that international trade has consistently lowered food prices over the years through intense competition and increased consumer purchasing power.He emphasizes that international trade brings undeniable efficiency gains to agricultural production (Lamy, 2011).The WTO, G20, and Atlantic Council strongly advocate for the liberalization of food trade.They argue that government policies that protect domestic food production, such as quantitative restrictions and tariffs, are inefficient and contribute to higher food prices.
While it is widely acknowledged that open international trade contributes to increased food security and that food export protection policies can lead to higher food prices, it is essential to recognize that many countries opt for food export protection measures in times of heightened uncertainty.Examples of such policies include the restrictions implemented by 27 countries during the 2008 global crisis, 20 countries during the COVID-19 in 2020, and 24 countries during the Russia-Ukraine war (Laborde, 2022).These decisions highlight the complexities governments face during economic uncertainty, where future conditions are highly uncertain and risky.Such policies diverge from the recommendations of international organizations like WTO, FAO, and the World Bank, which advocate for trade openness even in times of uncertainty.Clapp (2017) highlighted specific circumstances where more open trade policies can heighten food security risks.During periods of economic uncertainty characterized by unstable and rising global food prices, declining export earnings due to exchange rates, and uncertainties in thin international markets, the potential for exacerbating food security risks is higher.Akter (2022) provided a summary of empirical evidence regarding the influence of limiting food exports on various aspects of domestic food systems.The results are diverse, indicating that the impact of export restrictions on domestic food prices and the welfare of food system actors in times of crisis varies from country to country.While some countries have successfully reduced domestic food prices by implementing export restrictions, others have experienced increased domestic prices.Su et al. (2023) analyzed the effect of economic policy uncertainty (EPU) on food security and the moderating effects of international trade dependence and food price volatility.The findings of this research indicate that EPU has an amplifying effect on the constraints on food security, particularly when coupled with fluctuations in food prices.In addition, the degree of dependence on foreign trade intensifies the adverse effects of the EPU on food security, further exacerbating the adverse outcomes.This research sheds light on the importance of considering the interplay between economic policy uncertainty, international trade dependence, and food price fluctuations when evaluating the implications for food security.The empirical results from Zhang et al. (2022) suggest that the effect of EPU on trade stability significantly undermines China's food security.Specifically, it negatively affects the stability of agricultural trade, which has implications for food security in China.
In contrast to the existing research that primarily examines the influence of economic policy uncertainty, the present research aims to investigate the dimensions of economic uncertainty.This research addresses the scarcity of studies focusing directly on economic uncertainty by utilizing the World Uncertainty Index (WUI) developed by Hites Ahir (International Monetary Fund), Nicholas Bloom (Stanford University), and Davide Furceri (International Monetary Fund).The WUI is a recent index that measures economic uncertainty through the frequency count of "uncertainty" (and its variance) in the quarterly reports of the Economist Intelligence Unit (EIU).
This research examined the moderating influence of trade openness and economic uncertainty on food security in developing countries.Developing countries were selected based on their heightened vulnerability to food security issues.Developing countries with less favorable macroeconomic conditions than developed countries are particularly vulnerable to shocks and food price fluctuations, where disruptions to supply chains or reduced economic access to food can lead to food conflicts (Erokhin & Gao, 2020).The research regarded data from 58 developing countries spanning 2012-2021.Investigating trade openness was driven by its relevance to the issue.Given the current state of food security, which is experiencing significant challenges, including the looming threat of economic uncertainty and a potential food crisis in 2023, it becomes imperative to examine the role of trade openness.The role of trade openness in times of heightened uncertainty has generated varying perspectives among policymakers and global organizations.Furthermore, there is a research gap in the literature regarding the specific focus on economic uncertainty and the influence of trade openness on food security.This research aims to bridge that gap by contributing new insights and understanding to the field.
Literature Review
Food security, as defined by The World Food Summit, refers to the condition where individuals, households, and communities at regional, national, and global levels have both the physical and economic means to obtain an adequate supply of safe, nutritious food and fulfills the requirements for an active and healthy lifestyle.This concept is supported by four key pillars of resilience: food availability, food access, food utilization, and food stabilization (FAO, 1996).
The theory of comparative advantage, developed by Ricardo (1917), posits that the benefits of international trade are based on relative rather than absolute advantages.It means that countries without absolute superiority in producing certain goods can still engage in trade.Trade can occur if each country has a comparative advantage in producing a particular commodity type.In cases where a country is less efficient in producing both commodities, it will specialize in producing the commodity with the most minor absolute loss.As a result, a country with a relative advantage will produce certain commodities relative to its trading partners and import commodities where the absolute disadvantage is more significant.In Ricardo's view, trade between countries is feasible as long as there are disparities in the relative price comparisons between the countries involved before engaging in trade.The theory of comparative advantage states that international trade arises from variations in labor productivity (explicitly stated factors of production) among countries.However, this theory needs to investigate the reasons behind these productivity differences.
In contrast, the modern International Trade Theory developed by Heckscher & Ohlin (1991) emphasizes that international trade is predominantly influenced by a country's factor endowments (natural resources, capital, and labor) and the prices of factors of production between countries.Based on the two discussed theories on international trade openness, it is evident that it improves trade efficiency.Consequently, this influences food security as it enhances the availability and stability of food through increased variety and the introduction of imported food.
According to research conducted by Marson et al. (2022) trade openness has a significant impact on reducing malnutrition rates in developing countries.Interestingly, these effects are primarily attributed to the import component rather than income, indicating a direct influence.Several other studies, such as those by Adamchick & Perez (2020); Dithmer & Abdulai (2017); Fusco et al. (2020); Shuaibu (2021); Sunge & Ngepah (2022) have also yielded similar results, demonstrating the positive influence of trade openness on food security.However, it is worth noting that not all studies reach the same conclusions.For instance, Sun & Zhang (2021) discovered a U-shaped relationship between trade openness and food security, where trade openness initially negatively influenced food security but later led to improvements.Other studies show that trade negatively affects food security.Zhu ( 2016) conducted research in China, which demonstrated that international trade increased reliance on food imports and affected food security.Bren D'Amour et al. ( 2020) and Luo & Tanaka (2021) show food imports lead to food supply instability.Mary (2019) found that an increase in food trade openness by 10% would increase the prevalence of undernourishment by about 6% in developing countries.Knight (1921) views economic uncertainty as a condition where multiple outcomes are possible, but the probability of each outcome is unknown.Unlike situations involving risk, where all potential outcomes and their respective probabilities can be determined, uncertainty lacks knowledge of the outcome probabilities.Keynes (1973) defines uncertainty as a state where the probability distribution of future events, such as investment choices, is unknown.Predicting outcomes becomes challenging in uncertain circumstances as they do not adhere to a predetermined probability distribution (Davidson, 1991).The theory of economic uncertainty suggests that it has a detrimental effect on the economy, particularly unemployment and price stability.In turn, it negatively influences food security as economic downturns and unstable prices can have adverse consequences.Food security relies on two essential pillars: access and food stability.Food access encompasses physical and economic access, which refers to the ability to afford food (FAO, 1996).Economic uncertainty can hinder access to food by affecting people's purchasing power.
Additionally, food stability is also affected as stable food prices are necessitated.Since economic uncertainty disrupts price stability, it consequently undermines the pillar of food stability.Wen et al. (2021) explore the impact of EPU on food prices in China.The results show that an increase in the EPU leads to a significant increase in food prices in the short term.Similarly, based on the analysis of 172 countries from 2000 to 2014, food security is positively correlated with political and economic stability.An increase in the stability of the economic and political environment will improve the country's food security (Jianming et al., 2020).
According to research conducted by Abay et al. (2023) there are better solutions than export restrictions, as they will increase global food prices.Rising global food prices threaten importing and low-income countries' food security.Aragie et al. (2020) found that export bans can help stabilize domestic food prices but are insufficient to address domestic food price spikes caused by external price shocks.Similarly, Fuje & Pullabhotla (2020) show that export bans can reduce domestic and relative prices in the short run.The short-term impact of export bans helps explain why policymakers use them.However, the welfare analysis shows that the long-term distortionary impact caused by the export ban outweighs its impact on welfare improvement and poverty reduction.These studies show that export restrictions affect food security negatively.
In a research conducted by Su et al. (2023), the influence of EPU on food security was analyzed in 25 countries, considering moderating variables such as international trade dependence (trade openness) and fluctuations in food prices.The findings indicate that EPU has an increasing constraining effect on food security, especially with fluctuations in food prices and foreign trade dependence.The research demonstrates that higher food dependency intensifies the negative influence of EPU on food security.International trade dependence was employed as a moderating variable.The researchers utilized Moderating Regression Analysis (MRA) and the fixed effect model for analysis.Similarly, Zhang et al. ( 2022) analyzed the effect of EPU on the stability component of food security in China.The empirical results indicate that economic policy uncertainty significantly erodes trade stability and negatively affects food security.
Akter (2022) revealed inconsistent findings regarding the influence of food export restrictions on various aspects of the domestic economy.Specifically, the research examines the effects on the welfare of food system actors and domestic food prices in exporting countries, including farmer welfare and economic efficiency.The analysis of 12 literature sources on the subject demonstrates diverse outcomes.Among the examined studies, two indicate an increase in domestic food prices, eight show a decrease in domestic food prices, and two suggest no significant influence on food prices.
Based on the previous theory and research, which suggests a positive effect of trade openness on food security, as well as the relationship between economic uncertainty, trade openness, and food security, the hypothesis of this research was formulated as follows: H1: Trade openness has a positive and significant influence on food security in developing countries.
H2: Economic uncertainty negatively and significantly influences food security in developing countries.
H3: The negative effect of economic uncertainty on food security will increase with the increasing trade openness in developing countries.
Data and Research Methods
Panel data from 58 developing countries within the timeframe of 2012-2021.We chose this period and 58 developing countries because it includes the effects of the 2011 food crisis and the effects of COVID-19, as well as due to data availability.This study uses secondary data retrieved from official organization websites such as FAO, World Bank, WUI (World Uncertainty Index), and the Economist Influence Global Food Security Index (GFSI) (economist.com)were analyzed.These sources provided reliable and comprehensive information to analyze the relationship between trade openness, economic uncertainty, and food security across the selected countries.The following data panel model was used in this research: In this research, the dependent variable is food security (FS), while the economic uncertainty index (WUI) and trade openness (TO) serve as independent variables.GDP (GDP per capita) and INF (food inflation) are also included as control variables.The standard error is denoted by "e."The data is collected for each country (i) over a specific period (t).The measurement of the food security variable is based on the Global Food Security Index (GFSI), which considers various factors such as food availability, quality, safety, affordability, sustainability, and adaptation.It is a dynamic model that combines both quantitative and qualitative data, incorporating 58 unique indicators that measure the driving factors of food security in developed and developing countries.The weighting of these indicators was determined by a panel of five experts in 2012 and subsequently normalized.The data for all series is transformed into a comparable value ranging from 0 to 100, where higher values indicate better food security in a country (The Economist Intelligence Unit, 2020).Economic uncertainty (WUI) is characterized by an unknown probability distribution of outcomes in decision-making.It arises during times of future uncertainty when predicting outcomes becomes challenging due to the absence of a predetermined probability distribution (Davidson, 1991).Various factors can contribute to economic uncertainty, including natural disasters like earthquakes or hurricanes, events like the COVID-19 pandemic, financial crises, and political instability.To measure economic uncertainty, WUI is employed, which is based on the frequency count of "uncertainty" and its variants in the quarterly reports of the Economist Intelligence Unit (EIU).This index captures both short-term and long-term uncertainties related to economic and political developments.The construction of the WUI involves counting the times uncertainties are mentioned in country reports within the EIU reports.The WUI scale is determined by calculating the number of uncertainties (and their variations) per thousand words and multiplying it by 1,000,000.Higher values on the WUI indicate higher uncertainties, while lower values indicate lower uncertainties (Ahir et al., 2022).
The variable of trade openness reflects the extent to which a government exerts control over the trade of goods and services, allowing for more significant international free trade.Trade openness is typically measured by the total value of a country's exports and imports as a percentage of its Gross Domestic Product (GDP).It indicates the degree to which a country is engaged in international trade.GDP per capita measures a nation's economic output per person (GDP/total population).Food inflation refers to the increase in food prices over one year.This variable is calculated using the base year of 2015 (FAO, 2022a): where, This research uses the panel data regression approach to analyze dynamic changes.This method offers the advantage of capturing more significant variability in the data while minimizing collinearity issues between the variables (Gujarati &Porter, 2010).To specify the dynamic panel data model, the research draws on the work of Baltagi (2005).
d is a scalar, x ' it matrix 1xk, b is matrix 1xk, and it n is assumed to follow the one-way component error: The equations indicate that ( , ) v n represents an independent error.In the dynamic panel data model, the dependent variable, yit , includes a lagged regressor, which introduces a correlation between yit 1 -and uit .This correlation can lead to endogeneity issues, making estimating the model using fixed or random effects inappropriate due to potential bias and inconsistency in the results.
The data analysis technique employed to examine the influence of economic uncertainty on food security while moderating trade openness is the Generalized Method of Moments (GMM).Regression analysis, explicitly utilizing the two-step system GMM method.The system GMM was selected for its efficiency and its ability to address endogeneity concerns by including lagged independent variables as regressors in the regression.The two-step GMM estimator incorporates corrected standard errors and t-tests, as outlined by Arrelano & Bond (1991).The two-step GMM estimator incorporates corrected standard errors and t-tests, as outlined by Windmeijer (2005).By simultaneously incorporating level and first difference equations, the System GMM estimator achieves greater efficiency than the difference GMM approach.The dynamic panel data regression model in this study: Two tests were conducted to assess the model specification of the GMM estimator.The consistency of the GMM estimator relies on the validity of the lagged value of the explanatory variable as an instrument.In this research, the Arellano and Bond tests for autocorrelation and the Hansen J-test for overidentifying restrictions are employed.These model specification tests ascertain whether the dynamic panel model estimated with GMM satisfies the criteria for valid and consistent instruments.
To assess the consistency of the estimation results, an autocorrelation test known as the Arellano-Bond (AB) test will be conducted.The decision criterion is based on the p-value.If the p-value is above the 1%,5%, or 10% significance level, the model is free from autocorrelation and can be considered valid.The Sargan test, also known as the Hansen test, was utilized to assess the validity of employing more instrumental variables than the estimated parameters, indicating an overidentifying restriction condition.The model is valid when the chi-square statistic or p-value is above the 1%,5%, or 10% significance level.Hypothesis testing uses partial tests (z-test) and the Wald Test.
Finding and Discussion
The description of the data used to analyze the effect of economic uncertainty and trade openness on food security in 58 developing countries in 2012-2021 is presented in the following table.The greater the standard deviation, the more significant the data heterogeneity.It is worth noting that economic uncertainty (WUI) reaches a minimum value of 0. At the same time, trade openness has a minimum value of 16, indicating that the developing countries under research engage in international trade to varying degrees without completely closing their trade activities.H1: Trade openness has a positive and significant influence on food security in developing countries.The estimation results show that trade openness has a significant positive effect of 0.0518 with a significance value 0.000 below 0.05.Therefore, H1 is accepted.H2: Economic uncertainty negatively and significantly affects food security in developing countries.The estimation results show that economic uncertainty has a positive but insignificant effect on food security with a significance value of 0.244, more significant than the 0.05 limit; H2 is rejected, and H3: The negative effect of economic uncertainty on food security is higher along with higher trade openness in developing countries.Economic uncertainty, moderated by trade openness, shows a negative effect of -0.0533 with a significance level of 0.001 (<0.05).Hence, H3 is accepted.The estimation results reveal contrasting influences of the GDP per capita variable and food inflation as the control variable on food security in developing countries.Specifically, the GDP per capita exhibits a positive and significant influence with a coefficient of 0.0007327 and a significance level of 0.000 (below the threshold of 0.05).On the other hand, the food inflation variable negatively affects food security in developing countries, indicated by a coefficient of -0.0071758.However, the significance level associated with this coefficient is 0.571, suggesting it is not statistically significant.The results of the estimation using the system GMM are presented as follows.In the sargan test, the decision criterion states that H0 should be rejected when the chisquare/p-value is below the 0.05 significance level.It is indicated that the model is considered invalid.Conversely, the model is valid when H0 is accepted, meaning the p-value is above the 0.05 significance level.In the case of the Sargan test, the result of 0.1010 suggests that the model being used is valid.The decision criterion is to reject the null hypothesis (H0) when the p-value is below the 0.05 significance level to assess the consistency of the estimation results.If the p-value is below this threshold, it indicates an autocorrelation problem.However, in the case of the AB test, the result of 0.8473 suggests that there is no autocorrelation problem, indicating a consistent model.
Simultaneous Test
The model undergoes a simultaneous test to determine if a set of independent variables collectively holds significance for the model.With a significance level of 0.0000, it indicates that all variables, including economic uncertainty, trade openness, GDP per capita, and food inflation, significantly influence food security in developing countries.
The Effects of Trade Openness and Economic Uncertainty on Food Security in Developing Countries
The findings of this research indicate that trade openness has a positive effect on food security.Specifically, a 1% increase in trade openness significantly increases 0.0518 units in the food security score.The theoretical foundation of this relationship is rooted in the modern international trade theory of Heckscher & Ohlin (1991).According to this theory, international trade enhances trade efficiency, resulting in lower food prices and a wider variety of food choices.In turn, it benefits food security as lower prices make food more accessible, while a greater variety of food options improves food stability by reducing reliance solely on domestic production.This study supports evidence from previous observations (Adamchick & Perez, 2020;Dithmer & Abdulai, 2017;Fusco et al., 2020;Marson et al., 2022;Shuaibu, 2021;Sunge & Ngepah, 2022), These studies found that trade openness enhances food security.Trade openness improves food security because international trade plays a crucial role in stabilizing agricultural production, not only at a global level but also within nations or regions.It facilitates food distribution from surplus areas to regions experiencing deficits, thus helping stabilize food prices.This stability is essential considering the unpredictable weather patterns in different countries (World Bank, 2012).
The findings of this study show that economic uncertainty has a positive effect on food security, but the effect is not significant.The effect of increasing food security during times of high economic uncertainty can be caused by government policies that prepare food supplies by stockpiling foodstuffs and not exporting food goods during times of high uncertainty, such as Indonesia's policy during the Russia-Ukraine war to limit food exports through Presidential Regulation (Perpres) No. 125 of 2022 stipulating 11 types of basic foodstuffs included in the CPP (government primary reserves) which will be controlled and managed by the government during the food price crisis, India also bans sugar and wheat exports in early 2022, and according to the Laborde (2022) 20 countries had restricted their food exports during the Russia-Ukraine war and all of them were developing countries.
The Moderating Influence of Economic Uncertainty and Trade Openness on Food Security in Developing Countries
The findings of this research indicate that economic uncertainty and trade openness significantly negatively affect food security in developing countries.Specifically, a 1% increase in economic uncertainty and trade openness decreased the food security score by 0.0533 units.Interestingly, the research suggests that an increase in trade openness strengthens the negative influence of economic uncertainty on food security when economic uncertainty increases.Keynes' Theory of Economic Uncertainty (Keynes, 1973), defines uncertainty as a situation where the probability distribution of decision outcomes, such as investment, is unknown.It becomes challenging to predict future outcomes during times of uncertainty since they do not follow a predetermined probability distribution (Davidson, 1991).The findings of this research indicate that trade openness has a significantly positive effect on food security when analyzed independently.However, this effect changes in the presence of economic uncertainty.Specifically, the results suggest that the interaction between trade openness and economic uncertainty significantly negatively influences a country's food security.
These results corroborate the ideas of Su et al. (2023) who analyzed the influence of EPU on food security in 25 countries using the Fixed Effect Model, considering moderating variables such as international trade dependence and fluctuations in food prices.Their findings revealed that economic policy uncertainty exacerbates the restraining effect on food security, particularly when coupled with fluctuations in food prices and higher levels of dependence on foreign trade.The research suggests that the higher the dependency on foreign trade, the greater the negative influence of economic policy uncertainty on food security.These situations typically arise when a country heavily relies on a staple food, such as rice, traded in a limited international market.In such cases, dependence on imports increases the vulnerability to price spikes caused by disruptions in supply.Additionally, countries with large populations and high demand for certain staple food commodities face increased risks.During periods of economic uncertainty characterized by unstable and rising global food prices, declining export earnings due to exchange rates, and uncertainties in thin international markets, the potential for exacerbating food security risks is higher (Clapp, 2017).
Conclusions
This research examined the relationship between economic uncertainty, trade openness, and food security in 58 developing countries.Partial and moderate analyses were conducted using the two-step System GMM dynamic panel data method proposed by Blundell & Bond (1998).The findings indicate that economic uncertainty does not significantly affect food security in developing countries.However, trade openness positively and significantly influences food security, suggesting that increased trade openness can enhance a country's food security.The results reveal a significant negative effect when considering both trade openness and economic uncertainty.It implies that in instances of high economic uncertainty, implementing policies that reduce trade openness can enhance food security, as suggested by the findings of this research.
Further research can be conducted to enhance our understanding of the relationship between economic uncertainty, trade openness, and food security.It can involve incorporating additional dependent variables aligned with the four pillars of food security: food availability, food access, food utilization, and food stabilization.By exploring the effects of each pillar individually, future studies can provide a more comprehensive analysis of the influence of economic uncertainty and trade openness on food security.Moreover, it would be valuable for future research to examine variations among essential groups of countries, such as leastdeveloped countries, export-oriented nations, and low-income food-deficit countries.
Declaration
Declaration includes Conflict of Interest, Availability of Data and Materials, Author's Contribution, Funding Sources, and Acknowledgments. | 2023-12-06T16:20:39.979Z | 2023-12-03T00:00:00.000 | {
"year": 2023,
"sha1": "bd970a3dd4e7156195b6b88ea2b6adfc637b72e5",
"oa_license": "CCBY",
"oa_url": "https://e-journal.unair.ac.id/JDE/article/download/47122/26773",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b967002d2df5b95de8b846f5ec0a72a7cc25846c",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": []
} |
212915861 | pes2o/s2orc | v3-fos-license | Microwave low mass-dimensional frequency standard on Hg- 199 ions
In the paper the design of a magnetic trap for low mass-dimensional microwave frequency standards is presented. The dependencies between the number of registered photons by PMT and magnetic field values are established. The new algorithm for optical processing is developed. The results comparison of experimental Allan deviation researches for different designs of microwave standard on 199Hg+ ions are performed.
In case of lowering construction size of a QFS for solving hard tasks connected with determining coordinates of moving objects the unavoidable outcome is decreasing short-term and long-term frequency stability. That is because QFS require a high thermostabilization order of the most operational blocks. Especially in cases of implementation laser or photodetection devices [26][27][28][29].
Besides that, QFS use magnetic fields in order to split energy levels of atoms into sublevels. While applying magnetic fields thermostabilization must be achieved [30][31][32][33]. It is necessary to note that most QFS models (for instance, those based on cesium-133 or rubidium-87) are not stable to high Gforce overloads. These overloads appear due to obtaining fast speed in short periods of time.
The held research by us has shown the microwave frequency standard on Hg-199 ions is the best solution in this situation. This standard has one major disadvantage to the other QFS modelsbig mass-dimensional parameters of magnetic system, operation and signal processing blocks. That is why for successful implementation of such a standard in moving objects it is necessary to decrease its size and mass, not affecting the precision characteristics sufficiently.
It is not a trivial task to solve, because all the blocks are interlinked with one another in a certain way. That is why the issue of standard modernization must be solved as a whole. Lower mass and size of the major blocks, develop new algorithms for automatic frequency control, magnetic field maintenance and photon registration systems. One of the solutions of this task is presented in the present work.
Microwave low mass-dimensional frequency standard on 199Hg+ ions
The main block of the microwave standard is a magnetic trap. That is why the directions of modernizations which lead to lowering its mass-dimensional parameters are primarily connected with reducing the size of the trap itself.
The alternating electro-magnetic field holds a required number of 199 Hg+ ions in the magnetic trap in order to make them interact with λ = 194.2 nm radiation and emit a certain number of photons which then has to be registered by a PMT (Photomultiplier tube). Long-term and short-term stabilities are straight out connected a quality of ions' trapping. Alongside with lowering mass and dimensions of the magnetic trap, electrode sizes for producing magnetic field are decreasing as well. That is why the requirements for parameter stabilization of the operating signal, used for forming magnetic field, are increasing vastly. In order to solve such a task we developed a new block for controlling electromagnetic field that provides a proper trapping of charged particles.
We have developed a new magnetic trap construction for the low mass-dimensional microwave frequency standard. A new automatic frequency control system has been developed for guiding the trap through managing the frequency of the quartz generator and the supply voltage of the power cascades.
Charged particles in the trap are held with an effective potential: (1) Trapping a particle with a single charge q shall be limited by mass m: at the upper bandby a depth of effective potential and at the lower bandby a mean duration of an ion trapping. The low edge of the operational frequency Ω is determined as: where nnumber of electrodes, r` -normed trap radius, Emkinetic energy of particles, r0trap radius, mmin and mmaxminimum and maximum particle mass This has allowed to fulfill an effective parameter control of the magnetic trap, judging by the analysis of the photon counter data, even in relatively rough conditions (for instance overpressure etc.). The circuit diagram for supplying the trap is show at the Figure 1: The generator 5 supplies the trap through the driver 6 and the transformer 7. Leaked photons from the trap are registered by the PMT that sends amplified signal to the Photon counter 3, programmed with VHDL language and set to precisely tract the number of leaked photons through different periods of time. The whole control loop is managed by the MCS-51 controller unit that receives signals from the Photon counter and produces control codes for the synthesizer, Automatic frequency control (AFC) generator, and the highly stable DC unit in order to operate values in real time. In other words, th negative feedback system is implemented.
Received experimental data has shown that a low mass-dimensional magnetic trap with the implemented control system ensures a solid operation with a number of registered photons from 10 4 to 5·10 5 and with the next parameters: frequency Ω shall be placed in the range: from 0.74 to 1.60 MHz, its amplitude V0 from 120 to 200 V and kinetic energy of particles Em from 0.2 to 10 eV.
The results of the experimental research
The held research has shown that spectrums of the driving voltages in the new magnetic trap construction have no sufficient perturbations which may affect the stability of operation. The The analysis of the acquired data on the fig. 1 shows that the implemented technical solutions have provided us the way to improve Allan deviance of 20 %. Long-term stability in the new low massdimensional construction did not experience a decrease.
Conclusion
The received experimental data has shown that while decreasing construction size of the standard in 3 times and mass by more than 60 % the suggested technical solutions allow us to save and improve the required precision characteristics in order to offer a solid operation of navigational devices and communication systems in mobile flying vehicles. | 2020-01-02T21:56:35.007Z | 2019-12-01T00:00:00.000 | {
"year": 2019,
"sha1": "6e28252a4d52e65d8b8589babdf4b95cdc8485d6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1410/1/012211",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "63ab200a55934313ed1a5aba8b33dea9cfa9ddef",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
146515786 | pes2o/s2orc | v3-fos-license | Shalom & Eirene: Fully Caring for the Afflicted Person
The biblical concepts and categories relating to the person portray the individual as a multidimensional relational totality. To be fully human is to be a constitutive part of the whole creation in an enhancing relationship with God, with others made in his image and likeness and with the rest of God’s creation. The ecosystem, culture-education, family and friends unavoidably influence the individual, as do the choices he or she has already made. Thus to be ‘a responsible partner’ with God in the creative process of becoming and making shalom, a person needs to be self-reconciled and then become reconciled one with another, in family, in community and in all other spheres of their immediate and cosmic context. It is this that proves a person to be truly reconciled with God and thus enabled him or her to become fully human.
The biblical concepts and categories relating to the person portray the individual as a multidimensional relational totality. To be fully human is to be a constitutive part of the whole creation in an enhancing relationship with God, with others made in his image and likeness, and with the rest of God's creation.
The ecosystem, cultureeducation, family, and friends unavoidably influence the individual, as do the choices he or she has already made. Thus to be "a responsible partner" with God in the creative process of becoming and making shalom, a person needs to be selfreconciled and then become reconciled one with another, in family, in community, and in all other spheres of their immediate and cosmic context. It is this that proves a person to be truly reconciled with God and thus enables him or her to become fully human.
The creation narratives give us an understanding of what God's ideal is for human wholeness in regard to physical and social health. In Genesis 1 and 2, God's creative project is pronounced good and finished, up to the stage at which humanity had been created. From there, male and female, as a human partnership, are commanded to subdue and rule the creation and the powers of creation with a view to the fullness of God's cosmic intention, even in the aftermath of the fall. 1 Yet, due to disobedience and rebellion, there is a dramatic failure and the task of establishing shalom ( ), health and wholeness remains far from being completed.
The good news is that God has not given up on his creation. In the Pentateuch (Torah), we gain understanding of how God chooses and calls from fallen humanity those with whom he will work. Moreover, God restores and saves, pre-pares, and involves this people in his ongoing quest to bring back his humanity and creation to completion.
Goldingay agrees with Juergen Moltmann and concludes that "Genesis does recognize that creation was the beginning of a project, not the end of one." In fact, he goes further, asserting that . . . the statement that God's relationship with the world involves "creation, conservation, and transformation" does not say quite enough. Even before it went wrong and needed restoration, it was a project still on the way . . . God would hardly have given humanity the task not merely of maintaining it but of subduing the world. So the renewed world is not merely a world restored to its Edenic state, but one taken to the destiny God intended when creating it. God's creation commission was that humanity should subdue the earth (Gen 1:28), win it to the internal harmony that was apparently not built into it even though it could be described as "good," and God is still committed to the fulfilling of that creation project. There will come a day [as Isaiah 11 and 65 envisions, in which there will be total harmony] . . . without human beings or animals eating one another [and that] is part of the dream vision of Genesis 1, of a world that reflects Yhvh's abode in heaven . . . a new creation in which the great limitation of the old, the reality of death, is overcome. 2 Paul Tillich calls this process "cosmic healing" and states that "when salvation has cosmic significance, healing is not only included in it, but salvation can be described as the act of cosmic healing." In a person, the cosmos converges and is united, and, therefore, in a person, it has to be reconciled, healed, saved, and subjected again. He says: [Not to see salvation and healing related] . . . implies a conscious or unconscious rejection of the idea of cosmic disease, the universal fall, and of cosmic healing, the universal redemption. It does not see that the eternal fulfilment is actual in the fragmentary fulfilment in time and space. Healing as well as salvation are temporal and, at the same time, are eternal. Healing acquires the significance of the eternal, and salvation the actuality of the temporal. 3 The New Testament Greek Lexicon and the Nuevo Diccionario Bíblico Español Certeza (NDBC) informs us that the meaning of healing is closely related to salvation, as the meaning of the Greek word soteria, sozo (to save, to heal) in the NT parallels the meaning of the Hebrew yasha´ (from which the words moshiah-messiah and Yeshua-Jesus derive) in the OT. 4 This Hebrew verb, according to The Old Testament Hebrew Lexicon, means to save, be saved, and be delivered. Salvation, known as the Greek word soteria, is merely a derivative noun from sozo, which is the verb form, and derives in turn from a primitive sos (contracted for the now obsolete word saos, "safe"). It has also been found that the root of the word "salvation" in many languages indicates healing, as Paul Tillich illustrates: Thus, the Greek word soteria is derived from saos (sic); the Latin word salvatio from salvus; the German word Heiland from heil, which is akin to the English word "healing." Saos, salvus, heil, mean whole, not yet split, not disrupted, not disintegrated, and therefore healthy and sane . . . the English translation of sesoken se ["has saved thee," referring to an act of healing by Jesus in Mat 9:12] reads: "made thee whole." Salvation is basically and essentially healing, the re-establishment of a whole that was broken, disrupted, disintegrated . 3 Both, in the Bible and in the mythological narrative of many non-biblical witnesses, Tillich finds a basis for saying that salvation, in the sense of making whole or healing, equally applies to the physical, psychological, and social dimensions of human life. He explains that . . . every specific state of health or salvation represents the cosmic wholeness in a being which is a fragment of the whole, and whose wholeness is, therefore, always conditioned, threatened, imperfect, and pointing beyond itself. 3 A vision of the kingdom of God is a vision of a creation brought to wholeness, that is, to a state of shalom ( ). It is also a vision of a society in which the values of justice, peace, and joy in relationship prevail without exclusion. Therefore, healing also "involves a struggle against injustice by making the necessary resources available to the poor," the agents of healing -themselves risking poverty. 5 This is the attitude in which the mission for salvation or healing must be forwarded in the world.
The journey towards shalom and wholeness includes pain and suffering. These are not realities that can be ignored. Health is not simply the absence of illness. If it were so, then the chronically sick, the disabled, and the frail elderly would be discounted from a healthy society, a reality which, though unacceptable and reproachable, is never far away in our Western lifestyle. Therefore, there is no guarantee that the healing agents will not be wounded in the process of fulfilling their mission. The recent experience of many health agents both secular and religious with Ebola epidemics in Liberia speaks a great deal about this. Sometimes, the healing that deals also with values and relationships includes breaking, wounding, and even permanent scars for those seeking healing and wholeness. As in the case of Jacob, after healing his relationship with his brother Esau, he walks away to a full life with a conquered blessing from God, with a dignifying new name, but with a permanent limp. Thus, for caring and healing, Tillich concludes: This is the function of reconciliation, to make whole the man who struggles against himself. It reaches the centre of personality, and unites man not only with his God and with himself, but also with other men and
nature. Reconciliation in the centre of personality results in reconciliation in all directions, and he who is reconciled is able to love. Salvation is the healing of the cosmic disease which prevents love. 3 Such are the powerful values and insights that the biblical shalom ( ) and eirene (εἰρήνη) shed for our health care concerns and practices of today. To recover them provides reasons to reject the "patent-evergreening" 6 free-market deformation of pharmaceutical, medical, and health developments 7 for commercial profit. 8 Conversely, it provides reasons to equally reject the abuse of users and beneficiaries of the blessings of health and care by running senseless risks, by carelessness, and by negligence that overload the health systems unnecessarily, revealing no personal or collective concern for those in real need. 9 It leads also to the rejection of all egotism in people, companies, and states that impose endeavours with negative impacts for nature, society, and life.
Anything less than an integrated approach to health and healing in the search for wholeness for the human person and creation will result in mere patchwork efforts and disillusionment. It can also wound and corrupt the healers. As those in mission, sent to heal in the comprehensive approach of God's intention, healers need to rediscover the biblical view of the person, as well as of health and healing. | 2019-05-07T14:20:43.491Z | 2014-11-06T00:00:00.000 | {
"year": 2014,
"sha1": "43bbdee109e36c67ea72d5d584ad88a7b7bed3de",
"oa_license": "CCBY",
"oa_url": "http://journal.cjgh.org/index.php/cjgh/article/download/39/148",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "861c7536d99f0f5957ca1b54142308c34a1af12b",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": [
"Sociology"
]
} |
244176149 | pes2o/s2orc | v3-fos-license | Study of Chemical Properties of Lycopene Containing Tomato Purees
A study on chemical properties of different treatments prepared using dried tomato (Lycopersicum esculentum), tomato pulp & water was carried out at School of Home Science, B.B.A.U, LUCKNOW during July 2020 to May 2021 to find out most appropriate treatment/puree having high content of lycopene and Vitamin-C, which can be used during off season for consumption, as a substitute to fresh tomatoes. Apart from lycopene & Vitamin-C; total soluble solids, acidity, ascorbic acid content, ash, moisture and pH of the samples drawn from different treatments were also studied during the investigation. Five different types of treatments viz. Dried tomato powder without food additives (T1), Mixture of tomato powder and water (ratio 1:10) without heating (T2), Mixture of tomato powder and water (ratio 1:10) heating at 60-70⁰C for 5 minutes (T3), Fresh tomato pulp (T4) & Tomato pulp cooked at 60-70⁰C for 35 minutes (T5) were used in the investigation. The effect of these treatments was discernible, as reflected on content of lycopene & Vitamin-C. The highest Lycopene content of 90.34±4.18 mg per 100 g was obtained from tomato pulp cooked at 60-70⁰C for 35 minutes followed by dried tomato powder without food additives (66.47±2.02 mg per 100 g). Similarly, highest content of Vitamin-C i.e. 109.03±6.68 mg per 100 g was obtained from dried tomato powder without food additives and lowest 19.43±0.95 mg per 100 g from mixture of tomato powder and water (ratio 1:10) heating at 60-70⁰C for 5 minutes. These results appeared highly promising considering the nature of powder & pulp.
Tomato is grown in India in abundance both
in summer and winters. Some of the Indian tomato varieties are Sankranti, Pusa-ruby, Arkaalok, Arka abha and Vaibhav [2].
Tomato though botanically a fruit is generally
considered as vegetable because of the way in which it is consumed.
1.5
Red tomato contains about 94% moisture, but, it is an excellent source of minerals and vitamins. Tomato contains large amounts of Vitamin-C, and A, providing 40% and 15% of the daily intake value, respectively.
1.6
As per Jayathunge et al., [3], lycopene found in tomatoes act as an antioxidant and neutralizes free radicals which can damage cells in the body, inhibit the lungs, breast, and endometrial cells and also cuts down the risk of developing prostate cancer by 45%.
1.7
Traditionally, fresh tomato and tomato based product such as sauce, puree, juice, and paste are the major dietary sources of lycopene.
Though, tomatoes have got a lot of benefits,
but some shortcomings also lie in it. As, tomato is a highly perishable in the fresh state, due to high moisture content leading to wastages and losses during harvesting and storage. Losses in tomato productions are also accrued to poor post harvest handling practices. Therefore, prevention of these losses and wastage is of paramount importance.
1.9
It is also important to emphasize the demand of dehydrated tomato products in domestic and in international markets, which is increasing rapidly and major portion of it being used for preparation of convenient foods [4].
1.10
Conversion of tomato into other forms such as tomato powder, souse, etc can be done on large scale to prevent losses occurring during harvesting and post harvest handling.
Investigations of better technologies that
can be used to reduce losses of antioxidant & Vitamins during processing; and technologies for reducing cost of processing, packaging, handling, and transportation of the products must also be done at faster pace by the research community.
1.12
The need of tomato processing usually arises to preserve the product for consumption during off season.
1.13
In thermal stability study using a pure lycopene standard, Mayeaux, et al., [5] have reported that, 50% of lycopene was degraded at 100⁰C after 60 min, at 125⁰C after 20 min and 150⁰C after less than 10 min. Only 64.1% and 51.5% lycopene was retained when the tomato slurry was baked at 177⁰C and 218 ᴼC for 15 min, respectively. At these temperatures, only 37.3% and 25.1% of lycopene was retained after baking for 45 min. In 1 min of the high power of microwave heating, 64.4% of lycopene still remained. However, more degradation of lycopene in the slurry was found in the frying study. Only 36.6% and 35.5% of lycopene was retained after frying at 145 and 165 ᴼC for 1 min, respectively.
1.14 Hence, this experiment was carried out at School of Home Science, Department of Food Science & Nutrition, B.B.A.U, LUCKNOW to find out most appropriate treatment/puree having high content of lycopene and Vitamin-C after processing, which can be used for consumption during off season, when the fresh tomatoes are not available in the markets of Lucknow, (U.P.), India and elsewhere. Apart from lycopene & Vitamin-C; total soluble solids, acidity, ascorbic acid content, ash, moisture and pH of the samples drawn from different treatments were also studied during the investigation.
Technique Used for Preparation of Raw and Cooked Tomato Puree
The amount of tomato (
Technique Used for Preparation of Pulp
Tomato pulp was obtained by passing the fresh fully red tomatoes through the fine pulping machine and seed & skin were separated following the protocols described by Dauthy [6]. The extracted pulp used as basic material from which other treatments were made. The recovery of pulp was 50% of tomatoes on weight basis. Pulp was concentrated in open kettle to evaporate the extra moisture present in it. Then pulp was filed into cans (temperature of filling 82⁰C to 88⁰C) and processed in boiling water for 20 minutes. The processed cane were cooled immediately by dipping them in cold water and stored in dry and cool place.
Technique used for Preparation of Tomato Powder
Tomato was cut into slices of uniform thickness to dry it quickly & placed on the tray of dehydrator in a single layer so that they can't stick with each other. The temperature of the dehydrator was kept at 50-60⁰C to dehydrate it. Tomato slices took 27 hours to dry in the dehydrator. Then, dehydrated slices used for pulverizing into powder, in a high powered blender.
Sampling Technique
Any experimental work has defined sampling procedure to conduct it, which has been followed in the present investigation. Sampling was done by selecting random samples from the treatments/purees prepared.
Step by step procedure was followed to find out sample from the whole material of treatment/puree. As already cited above, that five treatments were prepared using pulp, powder & water as planned under the experiment to substantiate the hypothesis and find out the best treatment having significantly higher amount of lycopene & Vitamin-C. Apart from lycopene & Vitamin-C; total soluble solids, acidity, ascorbic acid content, ash, moisture and pH of the samples drawn from different treatments were also studied during the investigation. To minimize the error/ bias three samples from each treatment were taken randomly from the treatment/puree prepared under this experiment and analyzed for chemical attributes.
The data collected during the investigation were compiled in tabular form and analyzed on the statistical method to find out means, standard deviation among the treatments, as per Gomez and Gomez [7] and to check the significance of the same, help of ANOVA table were taken.
Lycopene content
To estimate the lycopene content, 5 g sample from each treatment was drawn in triplicate and organic solvent was used to extract and solubilize lycopene contained in the samples, followed by chromatographic absorbance measurement at 504 nm of each sample, as per the method of Adsule et al. (1979).
Total soluble solids
To estimate the total soluble solids, 5 g sample from each treatment was drawn in triplicate & TSS content of each sample was determined by using hand refracto-meter of different ranges and value was expressed as percent of TSS [8].
Total acidity
To estimate the total acidity, 5 g sample from each treatment was drawn in triplicate, then the each sample was diluted with distilled water and the content was titrated against 0.1 N NaoH using phenolphthalein as indicator, separately. The total acidity was calculated in percentage of anhydrous citric acid present in the sample [8].
Ascorbic acid and vitamin C
To estimate the Ascorbic acid, 5 g sample from each treatment was drawn in triplicate & Ascorbic acid was determined by use of titration method, in which 2-6 di-chloro-phenol & indo-phenol solutions were used as described by Ranganna [8]. Estimation of vitamin C content in the samples was also determined by titration method, 2.6-diclorophenol & indo-phenol reagents were used for the purpose.
The method is based on colour change of the reagent, oxidation or reduction. The ionized form of 2,6-diclorophenol & indo-phenol gives red colour in acid and blue in basic medium. Dehydro-ascorbic acid is obtained through reaction with vitamin C, after reducing the identification reactive, 4-(hydroxi-phenol-amino acid)-2.6-dichlorophenol. This method is commonly used, due to the fact that it is easy to use and due to the reagent sensitivity.
Ash
To estimate the ash content, 5g sample from each treatment was drawn in triplicate & weighed accurately in separate dry crucible. The sample in the crucible was ignited with the flame of a suitable burner for about one hour. The ignition was done by keeping it in a muffle furnace at 250⁰C until a grayish black ash was formed. The dish was cooled in the desiccators and weighed [9]. The ash content was calculated as per the following formula:
RESULTS AND DISCUSSION
3.1 Chemical attributes such as lycopene (mg/ 100 g), vitamin C (mg/ 100 g), total soluble solids (%), acidity (%), ascorbic acid (mg/ 100 g), ash (%), moisture (%) and pH of the samples drawn from different treatments were studied during the investigation. The salient findings of the present study and brief discussions derived there are summarized hereunder:
Lycopene content of different treatments (mg/100 g)
It is pertinent from the data presented in Table 1 that the highest mean score of lycopene 90.34±4.18 mg/ 100 g was recorded in T5 (Tomato pulp cooked at 60-70⁰C for 35 minutes) & lowest 4.05±0.16 mg/ 100 g in T3 (Mixture of tomato powder and water in ratio 1:10, heating at 60-70⁰C for 5 minutes).
Lycopene content of different treatments varied due to mixing with water in different ratios, factors affecting their concentrations and its thermal stability. Mayeaux, et al., [5] have reported that in thermal stability study using a pure lycopene standard it was found that 50% of lycopene was degraded at 100⁰C after 60 min., hence, T3 which resulted in lowest lycopene content ie. 4.05±0.16 mg/100 g., was prepared from the mixture of tomato powder and water in ratio 1:10, heating at 60-70⁰C for 5 minutes, as the powder used for was obtained from tomato dried in oven at 50-60⁰C for 27 hours. Whereas, T5, was made of tomato pulp cooked at 60-70⁰C for 35 minutes resulted in very high mean score of lycopene content i.e. 90.34±4.18 mg per 100 g. due to very less thermal loss of the same. T1 had followed the T5 in mean lycopene content.
Vitamin C in different treatments (mg/100 g)
It is evident from the data presented in Table 2 that the highest mean score of Vitamin C content (mg/100g) recorded was 109.03±6.68 and lowest 19.43±0.95 in T1 (Dried tomato powder without food additives) and T3 (Mixture of tomato powder and water (ratio 1:10) heating at 60-70⁰C for 5 minutes ) respectively.
Chemical properties of different purees and powder prepared under the treatments varied due to mixing with water in different ratios, factors affecting chemical changes while processing/heating and their concentrations. As, T1 (Dried tomato powder without food additives) had resulted in very high mean score of Vitamin C i.e. 109.03 (mg/100 g) compared to other treatments, which may be due to high concentration of powder which consisted Vitamin C. T5 had followed the T1 in mean value of Vitamin-C.
Total Soluble Solids content of different treatments (%)
It is pertinent from the data presented in Table 3 that the highest mean score of total soluble solids (%) recorded was 12.23±1.13 and lowest 1.17±0.74 in T5 and T3, respectively.
Chemical properties of different treatments varied due to mixing with water in different ratios and factors affecting their concentrations. As, T5 had very high mean of total soluble solid i.e.
12.23±1.13% compared to other treatments, it was actually due to high concentration of puree which consisted more TSS in the treatment. T1 had followed the T5 in mean TSS content.
Acidity of different treatments
It is evident from the data presented in Table 4 that the highest mean score of acidity content (%) recorded was 4.20±0.04 and lowest 0.06±0.03 in T1 (Dried tomato powder without food additives) and T3 (Mixture of tomato powder and water ratio 1:10 heating at 60-70⁰C for 5 minutes), respectively.
As the chemical properties of different treatments varied due to difference in water content vis-a-vis factors affecting their concentrations, T1 had resulted in very high mean value of acidity i.e. 4.20±0.04% compared to other treatments, it was due to high concentration of powder & chemical changes took place in processing. T1 had followed the T5 in mean total acidity content.
Ascorbic Acid in different treatments (mg/100 g)
It is evident from the data presented in Table 5 that the highest mean score of ascorbic acid content (mg/100 g) recorded was 13.45±0.47 and lowest 2.04±0.21 in T4 (Fresh tomato pulp) and T3 (Mixture of tomato powder and water (ratio 1:10) heating at 60-70⁰C for 5 minutes ) respectively.
Chemical properties of different purees and powder prepared under the treatments varied due to mixing with water in different ratios, factors affecting chemical changes while processing/heating and their concentrations. As, T4 (Fresh tomato pulp) had resulted in very high mean of ascorbic acid content i.e. 13.45 (mg/100 g) compared to other treatments, which was due to high concentration of pulp which consisted it. T5 had followed the T4 in mean total ascorbic acid content.
Total Ash content of different treatments (%)
It is evident from the data presented in Table 6 that the highest mean score of ash content (%) recorded was 10.55±0.27 and lowest 4.43±0.54 in T4 (Fresh tomato pulp) and T5 (Tomato pulp cooked at 60-70 ᴼC for 35 minutes.) respectively.
Chemical properties of different purees and powder prepared under the treatments varied due to mixing with water in different ratios, factors affecting chemical changes while processing/heating and their concentrations.
Hence, T4 (Fresh tomato pulp) had resulted in very high mean of ash content i.e. 10.55 (%) compared to other treatments, it was due to high concentration of pulp which consisted ash. T1 had followed the T4 in mean ash content.
Moisture content of different treatments (%)
It is pertinent from the data presented in Table 7
pH of all treatments
It is pertinent from the data presented in Table 8 that the highest mean score of pH content (%) recorded 4.47±0.08 was and lowest 3.03±0.06 in T5 (Tomato pulp cooked at 60-70⁰C for 35 minutes) and T3 (Mixture of tomato powder and water (ratio 1:10) heating at 60-70⁰C for 5 minutes) respectively. | 2021-10-18T17:54:01.095Z | 2021-09-27T00:00:00.000 | {
"year": 2021,
"sha1": "5fcbcc63b32913a3cecc88c92545e7ff0a260b74",
"oa_license": null,
"oa_url": "https://journalajarr.com/index.php/AJARR/article/download/30405/57050",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cc5f7ccde7744ad4a9553b016768c0a0b674efef",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
262072642 | pes2o/s2orc | v3-fos-license | Feature recognition of English clauses based on particle swarm optimization algorithm
: Feature recognition of English clauses is a basic problem of syntactic analysis. It is the basis of English-Chinese machine translation. A feature recognition method of English clauses based on particle swarm optimization algorithm is proposed. This paper analyzes the characteristics of English clauses, delimits the boundary of clauses, and follows the current optimal particle in the solution space to search the best position through the cooperation and information sharing between particle swarm individuals. The feature set is selected, the crossover and mutation idea of genetic algorithm is introduced, and the crossover operation is carried out to complete the feature recognition of English clauses. The experimental results show that when the threshold P is 50, the recognition accuracy of this algorithm is consistent with that when p is 100, and the recognition accuracy is 93.45%. The accuracy of particle swarm optimization algorithm for English clause feature recognition is high, which remains at about 90%. Compared with the two literature methods, the convergence performance of particle swarm optimization algorithm is better.
Introduction
A clause is a grammatical unit that contains at least one subject and predicate and expresses a point of view [1].Clause recognition refers to the process of marking the level of clauses according to their grammatical structure.It belongs to the category of shallow syntactic analysis.Shallow layer the main task of syntactic analysis is the recognition and analysis of chunks, which simplifies the task of syntactic analysis to some extent.
It is also the basis for further analysis of sentences [2].In natural language processing, syntactic analysis research mostly focuses on single sentences and has achieved great success [3][4].Consider simplifying complex sentences before parsing them.For sentences with clauses.This simplified process can be realized by identifying English clauses first.Then analyze the subject sentence and clause respectively.The purpose of clause recognition is to reduce the complexity of in-depth analysis of sentences by defining the boundary of clauses.
Psycholinguists have found that when reading a sentence, people always find the boundary of the clause before dealing with the long sentence, and cut the long sentence into several sub sentence blocks for understanding respectively.This inspires researchers whether they can first divide the sentences of the source language into several sub sentences and simplify the sentence pattern of the source sentence in natural language processing, which is conducive to in-depth analysis.For the recognition of English clauses, researchers at home and abroad have put forward some research methods.Some researchers have proposed using machine learning algorithm to determine the boundary of clauses in text [5].Some researchers use hidden Markov model to identify the boundary of clauses.Some researchers use the connectives of clauses and other functional words to establish a dictionary or grammar rule library to segment long sentences; or based on corpus, using part of speech information, combining rules and statistics to identify the boundary of clauses.
From the perspective of machine learning, literature [6] introduces multiple features to improve the lack of information caused by Chinese small data sets.In addition, this study decomposes the character recognition problem into question and answer text matching problem, answer text implication problem and standard answer text implication problem, and continues to train on the short text score data set.The final result has some improvement, which proves the usability of the mechanism designed in this paper.Literature [7] proposed an ancient recognition neural network method based on multi-level convolution to help artists imagine the appearance of this ancient painting after restoration.If analysis is introduced to match the map by adding missing regions and nearest neighbor pixels to enhance the rough estimation of the complete image.Domain specific pyramid networks are used to capture various spatial context quantities.Reference [8] proposed the introduction of TM with integer weighting clause, i.e. integer weighting TM (IWTM), which solves the accuracy interpretation challenge in machine learning.The purpose is to improve the interpretability of TM by reducing the number of terms required for competitive performance.IWTM achieves this through weighted clauses so that a single clause can replace multiple repeated clauses.Since each TM clause is adaptively formed by tsetlin automata (TA) team, identifying effective weights has become a challenging online learning problem.This problem is solved by using another kind of automata to extend each TA team: Online random search (SSL) automata.
In this paper, an English clause feature recognition method based on the particle swarm optimization algorithm is proposed, which can effectively increase the diversity of the population by crossover and mutation of particles.
An analysis of the characteristics of English clauses
In terms of composition, compound sentences can be divided into the following two types: (1) Connect the clause with the main sentence with connectives (in some cases, connectives can be omitted).
If you're not good at figures, (clause) it is pointless to apply for a job in a bank (main sentence).
(2) Use verb infinitives or participle structures, which form part of a compound sentence rather than a simple sentence.
To get into university you have to pass a number of examinations.
Here, the second case is treated as a general phrase, mainly considering the recognition of clauses in the first case.
The identification of English clauses in this paper is mainly the delimitation of the boundary of clauses [9].
This paper is based on Penn tree bank corpus, which is an English corpus with various levels of annotation, such as part of speech level, phrase level, clause level and so on.21115 clauses are extracted, including 4936 recursive clauses.By analyzing the structure of these sentences, it is found that the composition of clauses, whether recursive or non-recursive, can be divided into the following three cases: (1) Guided by certain functional words, these functional words include who, which, when and other wh words, conjunctions such as before, after, if and so on: (2) Special verbs are followed by object clauses, such as say, think, etc. many adjectives describing personal feelings (such as afraid, glad, etc.) or adjectives expressing certainty (such as certain, sure, etc.) are followed by clauses [10]; (3) Special sentence structure, if a predicate verb is preceded by two consecutive BNPs (basic noun phrases), the predicate verb is most likely a clause verb, or if the two predicate verbs are together, one of them must belong to a clause verb.
Statistics show that the first category accounts for a large proportion, about 75%, and the third category of clauses accounts for a small proportion, only 710.According to the statistics of the training corpus, the priority of these three clear situations is decreasing.At the same time, in each case, they are arranged according to the probability from large to small.For example, the priority of wh word is higher than that of other conjunctions.In this way, in a complex sentence, if the above situations occur comprehensively, it can be determined according to the priority [11].In the recognition of the beginning of a clause, it is processed separately according to the different situations of the clause.Through the analysis of the clause in the corpus, it can be concluded that the position of the end of the clause is related to the following factors: the position of the clause in the sentence, the relationship between the clause and the main sentence, the position of the predicate verb of the clause in the sentence, as well as some punctuation marks, and so on.When recognizing the end of a sentence, process it according to the above information.
Basic principle of particle swarm optimization algorithm
When each element in Y is given an optimized weight [12], we can not only retain the number of features we need, but also highlight the role of useful elements in the features.Therefore, the particle dimension is equal to the feature dimension.Randomly initialize each particle of particle swarm Z (particle swarm size N ), set the initial position of the i particle as 12 , ,..., ( ) and the particle dimension as m .The speed is In order to prevent the target from crossing the boundary, premature or falling into local minimization, the position and velocity range of particles are limited 0.5 0.5,1 1 . The fitness function of the i particle is given by the following formula: Equation ( 1) is used as the fitness function because when each feature is multiplied by the corresponding optimization coefficient, the position with the smallest function value is the most optimal position [13].At this time, the correlation between each one-dimensional feature is the smallest and the discrimination between classes is the largest.At the same time, the function is a Gaussian function, which can reduce the absolute value of the result and improve the recognition efficiency.
For the i particle, after calculating the fitness function value in the t cycle, compare its fitness function value with its original historical optimal position () Pbest i .If it is better, update () Pbest i with the latest () i Zt , and then compare it with the global optimal position Gbest .If it is better, take it as the current Gbest .After each comparison, use equations ( 3) and ( 4) to update the position and speed.
( 1) ij vt indicates that the velocity of the i particle is the j dimensional element in the 1 t iteration, and ( 1) ij xt indicates that the position of the i particle is the j dimensional element in the 1 t iteration; w is called inertia weight, 12 is two normal numbers, called learning factor, and 1 2 , rr is two uniform random numbers between [0,1], [14].When the iteration termination conditions are met, the iteration process is ended.Experiments show that the number of iterations of termination condition selection reaches the predetermined number, and the recognition result is better.The final Gbest is the optimal solution
Feature constraints of clause recognition
In Clause feature recognition, the selection of feature set is a very important problem.The following will introduce the features used in Clause recognition.
Generally speaking, the features of sentence beginning recognition are divided into two categories: lexical features and sentence features.The lexical features adopt the sliding window method to obtain three types of features: word, part of speech tagging and phrase tagging.Second, sentence features can be divided into five aspects: sentence structure, function word information, verb information, punctuation information and special circumstances.
(1) Sentence structure ① Whether the current position is the beginning of a sentence is a feature; ② The part of speech string and phrase string of the left and right parts of the sentence are extracted, and only verb phrases, commas and related words are concerned in the phrase string [15][16].
(2) Functional word information ① When the current word is if / that / what / who / where / when / why / who / how / while, the feature is obtained by labeling the category with the word; ② When the current word is which, check whether its previous word is at / in / on, etc., and get the feature combined with its annotation category.
(3) Verb information Taking the current word as the boundary, take the number of VPS in the left and right parts of the sentence (including the case where the number is zero) as a feature.
(4) Punctuation information ① If there is only one comma in the sentence, it can be divided into the whole sentence; If there are multiple commas in the whole sentence, take whether there is VP between commas as a feature information [17][18]; ② When the current word is a colon or quotation mark, take the annotation of the word itself and the following word as a feature.
(5) Special circumstances ① When the current word is and or or, take whether there is VP around it as a feature; ② When the prototype of the current word is say, the word itself constitutes a feature with its annotation category.
The features used for sentence beginning and sentence end recognition are basically the same, only the latter also uses the estimation results of the former as a class of features.
For the annotation of complete clauses, this paper mainly follows Xavier's idea.First, judge whether the predicted clause head position is multiple clauses, then find out the possible clause candidate set for the starting position of each possible clause, and finally get the most appropriate clause annotation from the scoring function.In this part, in addition to the previously used features, the prediction results of the previous two steps are also introduced as known information.The phrase string features from the prediction start boundary to each prediction end position are added to the sentence features.
After feature selection, they need to be coded [19][20], and the features are represented by binary functions in Clause feature recognition: Here, a represents category, b represents environment information, and function () hb represents predicate.A feature consists of category and predicate.The value of the predicate is 1 / 0. In use, only those features whose value is equal to 1 are selected, that is, those features that only appear in training.
Recognition of English clause features by particle swarm optimization
In order to overcome the problem that the population can easily fall into local extremum, the crossover and mutation ideas of genetic algorithm are introduced.Before the end of each iteration, the particles in the population cross operate with a probability of 0.5 to generate a new generation of particles.After the crossover operation is completed, select those particles whose fitness value is worse than the average fitness value of the population [21][22], and mutate them.The variation process is as follows: , , Where m p is the variation probability.rand is a random number uniformly distributed on [0,1].In the late iteration of the algorithm, because the diversity of the population is enriched, the population can be prevented from falling into local extremum.
The feature recognition of English clauses based on particle swarm optimization algorithm is as follows: 1) Initialization of particles: all N training samples are numbered, and each training sample corresponds to a number in 1 N and does not repeat each other [23].Let the number of particles be l, take one particle randomly from N training samples, take the number of this training sample as the English clause feature ( 1, 2,..., ) of the particle, and initialize the particle speed to be equal to 0 2) Initialize particle fitness value: for particle l , according to the English clause feature l x of the particle, the sample corresponding to particle l can be uniquely found in N training samples [24].The correlation coefficient between the training sample and the test sample is recorded as i d , then i d can be used as the fitness value of particle l , that is: Repeat step 2 for all particles to obtain the initial fitness value.
3) Initialize individual optimal English clause features and global optimal English clause features: the initial individual optimal English clause features of each particle are the initial English clause features of the particle, namely: , 1, 2,..., The fitness value corresponding to the initial optimal English clause feature is the initial fitness value of the particle.For the particle with the largest initial fitness value, the English clause is characterized by the initial global optimal solution; 4) Update the speed of each particle and the characteristics of English clauses ; 5) Update the fitness value of each particle: method synchronization step 2; 6) Update the individual optimal English clause features and global optimal English clause features: for particle l , Set ( 1) l xt as the updated English clause features, then update the individual optimal English clause features and individual optimal fitness values.
7) The global optimal English clause feature and its fitness are worth updating.It is not only for a particle, but also needs to traverse the whole population.
8) Termination conditions: there are two main cases of iterative termination conditions: one is to find a good enough solution; Second, the loop reaches the maximum number of iteration steps.For the first case, this paper defines two termination conditions: a. () Among them, gd P represents the global optimal fitness value, and S is usually set to a value of 0.0-1.00.At this time, it is considered that particle swarm optimization algorithm can search the optimal English clause characteristics of particles.b. () () gd count P is an integer value indicating the number of times the global optimal particle of the whole population recognizes its English clause features in the iterative process.R is usually set to an integer value in the range of 15-30.When () gd count P R , it is considered that particle swarm optimization algorithm recognizes the optimal English clause features of particles.
8) Decision output: if the program does not meet the iteration termination conditions, return to step 4 to continue the iteration; If the program meets the iteration termination conditions, the training sample number with the largest correlation coefficient with the test sample is obtained.According to this number, the target category of the corresponding training sample can be found and determined as the test sample type.
So far, the recognition of English clause features based on particle swarm optimization has been completed.
Experiment
The experiment uses Penn tree bank corpus, in which wsj15-18 (211727 words) is used as the training set and wsj20 (47377 words) and wsj21 (40039 words) are used as the test set.Due to the uneven distribution of the two types of data in the training set, the characteristics of vocabulary becoming non sentence beginning (non sentence end) contribute much more than those becoming sentence beginning (sentence end).Therefore, in the experiment, instead of taking all words as the research object, the words at the phrase boundary are selected as candidates.Test the effect of particle swarm optimization algorithm in English clause feature recognition.
Experimental parameter setting
The experimental verification environment is amdathlon (th) 64 × 23600 + CPU, 1G memory, operating environment is Windows XP operating system, and verification platform is MATLAB 7.0.
In this experiment, the number of particles is obtained according to the following formula: Where, D is the dimension of particles, that is, the number of features.The round function is used for rounding operations.The maximum number of iterations max T is 100.The learning probability pc is set to 0.5.The coefficient of iBQPSO algorithm is updated according to the following formula: max max (0.95 0.55)* 0.55 Where, t is the current number of iterations.The experiment was conducted for 10 cycles.The optimal feature subset is obtained by particle swarm optimization algorithm.
Determination of filtering threshold
The setting of threshold P is very important because it determines the search space of particle swarm optimization algorithm.If the p value is set too large, the computational complexity of the algorithm will be increased, and the algorithm may not converge to the optimal solution.When the p value is set too small, the features that are helpful for recognition may be deleted, resulting in the reduction of the final recognition effect.We have conducted experiments on the three values of P taken as 50, 100, and 200 compared the algorithm in this paper with the methods in literature [6] and literature [7].The experimental results are shown in Table 1 1 that when the threshold P is 50, the recognition accuracy of the algorithm in this paper is consistent with that when p is 100, and the recognition accuracy is 93.45%.
The recognition accuracy of the methods in literature [6] and literature [7] is worse than that in the case of P 100.When p is 200, the recognition accuracy is reduced in all three methods.To sum up, we select the first 100 features as the primary feature subset.
Comparison of algorithm optimization performance
In order to verify the effectiveness of particle swarm optimization algorithm in this paper, the above two literature methods are used as comparison methods, and the accuracy of this algorithm is verified through this group of comparison experiments.See Figure 1 for details.As can be seen from Figure 1, the accuracy rate of particle swarm optimization algorithm for English clause feature recognition is high, which remains at about 90%, while the accuracy rate of the two literature methods is less than 80%, and the accuracy rate of literature [7] algorithm is the highest, only reaching 75%, and the recognition effect is poor.
Analysis of convergence performance of algorithm
In order to verify the convergence performance advantages of particle swarm optimization algorithm, the convergence performance of the above two literature methods are compared.See Figure 2 for details.As can be seen from Figure 2, the particle swarm optimization algorithm converges to the highest recognition accuracy after about 40 iterations.The two literature methods need more than 60 iterations to converge to their highest recognition accuracy.Compared with the two literature methods, particle swarm optimization algorithm has better convergence performance.It can be seen that the recognition effect of particle swarm optimization algorithm is better.This is mainly because particle swarm optimization algorithm introduces the crossover and mutation idea of genetic algorithm in order to overcome the problem that the population is easy to fall into local extreme points.To ensure that the recognition rate of English clause features is effectively improved, and then improve its effectiveness.
Conclusion
This chapter proposes a feature recognition method of English clauses based on particle swarm optimization algorithm.The identification of English clauses in this paper is mainly the delimitation of the boundary of clauses.In the recognition of the beginning of a clause, it is processed separately according to the different situations of the clause.The method of sliding window is used to get three kinds of features: word, part of speech tagging and phrase tagging.For the annotation of complete clauses, this paper mainly follows Xavier's idea.First, judge whether the predicted clause head position is multiple clauses, then find out the possible clause candidate set for the starting position of each possible clause, and finally get the most appropriate clause annotation from the scoring function.In order to overcome the problem that the population is easy to fall into local extreme points, the crossover and mutation ideas of genetic algorithm are introduced.The final experimental result is that the particle swarm optimization algorithm converges to the highest recognition accuracy after about 40 iterations.The two literature methods need more than 60 iterations to converge to their highest recognition accuracy.Compared with the two literature methods, particle swarm optimization algorithm has better convergence performance.It is proved that the particle swarm optimization algorithm is feasible for feature recognition and has better convergence performance.In the future work, we will try to effectively introduce more syntactic and semantic features of descriptive clause feature recognition to further improve the result of feature recognition.
Figure 2 :
Figure 2: Convergence of different methods
Table 1 :
: Comparison of recognition accuracy of different methods | 2023-09-20T15:12:36.142Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "1259482cbe8b5b3a4d797f3890504ac92e165c9e",
"oa_license": "CCBY",
"oa_url": "http://www.clausiuspress.com/assets/default/article/2023/09/06/article_1694014813.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "88d1d7c0f8d504720e5312c93ffeab65fff7ac25",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
6075152 | pes2o/s2orc | v3-fos-license | A method for merging nadir-sounding climate records, with an application to the global-mean stratospheric temperature data sets from SSU and AMSU
method for merging nadir-sounding climate records, with an application to the global-mean stratospheric temperature data
Introduction
Stratospheric cooling has long been regarded as a key indicator of two anthropogenic climate forcings (IPCC 2013;WMO, 2014): that from increasing abundances of CO 2 , and that from the ozone decline associated with the increased abundances of ozone-depleting substances (ODSs).The former has continued secularly, while the latter peaked in the late 1990s and has been slowly declining since then.Thus, the contrast between the early and more recent parts of the stratospheric temperature record is an important fingerprint of anthropogenic influence (Shepherd and Jonsson, 2008).In addition to the anthropogenic influences, stratospheric temperature is also strongly perturbed by the 11-year solar cycle and by volcanic eruptions.As a consequence, the anthropogenic cooling is considerably modulated in time.
In the stratosphere, global-mean temperature is, to a first approximation, unaffected by dynamics and is therefore close to radiative equilibrium (Fomichev, 2009).This makes it an ideal quantity for detection and attribution of anthropogenic influence (Shine et al., 2003).However, global averages are only obtainable from satellites, and the only longterm satellite record of stratospheric temperature is that from the operational nadir sounders, the Stratospheric Sounding Published by Copernicus Publications on behalf of the European Geosciences Union.
Unit (SSU)/Microwave Sounding Unit (MSU) and the Advanced Microwave Sounding Unit-A (henceforth AMSU) (Randel et al., 2009), which represent deep atmospheric layers.Note that the vertically resolved temperature data from global positioning system (GPS) radio occultation only begin in the current century (Wickert et al., 2001), and do not reach into the upper stratosphere, where the strongest cooling is found.The nadir-sounding measurements were never designed for climate monitoring, and homogenizing the data from different operational satellites, with rapidly drifting orbits, is a challenge (Wang et al., 2012;Zou et al., 2014;Nash and Saunders, 2015).
In the lower stratosphere, the relevant nadir record is provided by MSU channel 4 (and continued by AMSU channel 9; Christy et al., 2003;Mears and Wentz, 2009) and is supplemented by radiosondes and, since the early 2000s, by GPS radio occultation.The global-mean MSU4 record is considered fairly reliable and most attention has been focused on its latitudinal structure (Randel et al., 2009).
The middle and upper stratosphere is, however, a completely different story.There the nadir record is provided by three SSU channels which began in 1979 and ended in 2006, and by six AMSU channels which began in 2001 and are ongoing.Because the weighting functions of the SSU and AMSU channels are very different, the two records cannot be immediately combined.Moreover, confidence in the SSU record has been low, even for global-mean temperature, because of the lack of corroborative measurements, drift issues within the SSU record itself, and the striking differences identified by Thompson et al. (2012) between the two SSU products available at that time (from the National Oceanic and Atmospheric Administration (NOAA) and the Met Office) and between the measurements and chemistry-climate models.
Normally, differences between measurements and models would tend to cast suspicion on the models, not the measurements.However, because global-mean stratospheric temperature is radiatively controlled, its behaviour in the middle and upper stratosphere, where the radiative processes are well understood, should be reasonably well represented by chemistry-climate models.Indeed, Fig. 2 of Thompson et al. (2012) shows that for the SSU channels the differences in cooling between models and observations, and between the Met Office and NOAA products of the time, are in almost all cases much larger than the inter-model spread.One of the mysteries arising from Thompson et al. (2012) was the apparent lack of continued cooling in the SSU record during the early 2000s, in contrast to the models and in contradiction to physical expectations.Because the SSU record ended in 2005, this mystery was unresolved.
The large differences between the NOAA SSU results and models found by Thompson et al. motivated the development of a revised version of NOAA SSU (version 2), the results of which are published in Zou et al. (2014).The version 2 global-mean temperatures exhibit weaker long-term cool-ing trends than the version 1 temperatures that are shown in Thompson et al. (by ∼ 30 % for channels 1 and 3 and ∼ 17 % for channel 2).Although Zou et al. did not compare their results to models, visual inspection of their version 2 temperatures indicates much closer agreement with the model results shown in Thompson et al. (2012).There has also been a subsequent revision of the Met Office data set (Nash and Saunders, 2015).
In this paper, we propose a method for merging different nadir-sounding climate data records, and apply it to the NOAA SSU and AMSU global-mean stratospheric temperature records.Specifically we use the AMSU data to extend the three SSU channels forward in time, given the paradigmatic importance of that climate data record.We show that a purely statistical approach, using multiple linear regression, is unworkable for this particular application since the six AMSU channels are not sufficiently linearly independent.Instead, we propose a physically based method using limbsounding measurements, with much higher vertical resolution, to accurately represent the weighting functions of both SSU and AMSU and thereby act as a transfer function between the two nadir-sounding data sets.For this purpose we use temperature data from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS).It is important to emphasize that the merged data set can only be as good as the component data sets going in, and relies on the extensive efforts spent on homogenizing the SSU and AMSU data records themselves.
Since we are dealing with monthly mean, global-mean data, the data are highly averaged and the effect of random measurement errors is expected to be low.Characterization of the systematic errors in such highly averaged quantities in a bottom-up fashion would be extremely challenging (Hegglin et al., 2013).Instead, our approach is to compare the different data sets (after transformation via the weighting functions) over their overlap periods to see whether the differences between them can be characterized in terms of a constant offset (within some noise).If this is the case, then the merging can be done with confidence.Thus, the validity of the approach can be assessed a posteriori.This approach was followed by Hegglin et al. (2014) in constructing a merged stratospheric water vapour record.Solomon et al. (2010) also performed such an additive relative bias correction to merge the Halogen Occultation Experiment (HALOE) and Microwave Limb Sounder (MLS) stratospheric water vapour records.Thus there is ample precedent for such an approach in the literature.
The data sets used are described in Sect. 2. The merging methodology and the comparison between MIPAS and the two nadir-sounding records are provided in Sect.3.1.This comparison shows that the different global-mean data sets track each other very well, so additive relative biases can be identified with small uncertainties.Section 3.2 examines the (near) global-mean temperature trends, both over the recent record (as represented by the six AMSU channels) where we As stated in the Introduction, version 2 was developed primarily as a result of the large differences found between SSU version 1 (Wang et al., 2012) and a Met Office version of SSU, as well as between SSU version 1 and models, that were documented in Thompson et al. (2012).Differences from version 1 include improvements in the radiance calibration and in the adjustments for diurnal drift and intersatellite biases.Please refer to Zou et al. (2014) for an in-depth discussion of the differences.
AMSU
AMSU-A is a microwave radiometer on board a series of recent, current and future NOAA satellites.It has 11 channels, 6 of which (channels 9 to 13) provide coverage in the stratosphere.The instrument was first launched in 1998, although not all of the stratospheric channels were in operation until 2001.We use brightness temperatures analysed by NOAA STAR (Wang and Zou, 2014), which are available at ftp://ftp.star.nesdis.noaa.gov/pub/smcd/emb/mscat/data/AMSU_v1.0/monthly.The corresponding weighting functions for channels 9 to 14 were provided courtesy of Likun Wang of NOAA STAR.The temperature data for channels 9 to 13 start in January 1999; those for channel 14 start 2 years later.As with SSU, the AMSU data extend from ∼ 85 • S to 85 • N.
MIPAS
MIPAS is a limb sounder which measured infrared emission from which vertical profiles of temperature and atmospheric constituents are derived (Fischer et al., 2008).We use zonal and monthly mean gridded temperatures computed from versions V3o_T_10 and V5r_T220 for the periods 2002-2004 and 2005-2011, respectively.These data are available at http://www.esa-spin.org/index.php/spin-data-sets and are provided on a 5 • latitude grid from ∼ 75 • S to 75 • N with 28 pressure levels ranging from 300 to 0.1 hPa.The parent data were produced by the Institute for Meteorology and Climate Research at Karlsruhe Institute of Technology, in cooperation with the Institute of Astrophysics of Andalusia, from calibrated radiance spectra provided by the European Space Agency.The MIPAS temperature retrieval method is discussed in von Clarmann et al. (2003) for the high-spectralresolution measurement period until 2004 and in von Clarmann et al. (2009) for the reduced spectral resolution measurement period from 2005 onwards.MIPAS temperatures have been validated by Wang et al. (2005) and Stiller et al. (2012).
MLS
Aura MLS is a limb sounder that measures thermal microwave emission.It has provided a nearly continuous set of measurements of temperature and trace gases in the middle atmosphere since August 2004.The data extend near globally and from the middle troposphere to the lower thermosphere.We use version 3.3 temperature data (Livesey et al., 2011) through to the end of 2011.The temperature retrieval method and validation are discussed in Schwartz et al. (2008).
CMAM30
The CMAM30 data set, which extends from 1979 to 2011, is produced using a specified-dynamics version of the Canadian Middle Atmosphere Model (CMAM) that is driven by winds and temperatures from the interim version of the European Centre for Medium-Range Weather Forecasts Reanalysis (ERA Interim; Dee et al., 2011), where the global-mean temperatures have been adjusted in the upper stratosphere to remove temporal discontinuities in 1985 and 1998 that have arisen from the introduction of new satellite data in the assimilation process (McLandress et al., 2014).Here we use the monthly mean CMAM30 temperatures, which are available at http://www.cccma.ec.gc.ca/data/ cmam/output/CMAM/CMAM30-SD/mon/atmos/.
CMIP5
Coupled atmosphere-ocean models from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are also examined.Most of these models are not chemistry-climate models and do not have upper boundaries extending high into the stratosphere or above.The nine models that are used are listed in Table 1 two simulations for a given model are continuous and, thus, can be simply concatenated to produce a single time series.Since we use only the first few years of the RCP 4.5 simulation, differences between it and the three other RCP simulations (RCP 2.5, 6 and 8.5) are expected to be very small.Following Thompson et al. (2012) the SSU channels onto which the data are projected depend on the height of the top model data level: channel 1 (any model with data at 1 hPa), channels 1 and 2 (any model with data at pressure levels below 1 hPa), and channels 1-3 (any model with data at pressure levels below 0.1 hPa).
CCMVal2
Chemistry-climate model (CCM) simulations of the recent past from phase 2 of the Chemistry-Climate Model Validation (CCMVal2) project are used.These REF-B1 simulations use observed sea-surface temperatures and sea-ice distributions and observed forcings (volcanic aerosols, tropospheric concentrations of greenhouse gases, ozone-depleting substances, and solar variations).The data are available at the SPARC Data Center at http://www.sparc-climate.org/data-center/data-access/.The following 16 models, all with model tops above 1 hPa, were used: AMTRAC3, CCSRNIES, CMAM, CNRM-ACM, EMAC, EMAC-FUB, GEOSCCM (and hist-GEOSCCM), LMDZrepro, MRI, Niwa-SOCOL, SOCOL, ULAQ, UMETRAC, UMUKCA-METO, UMUKCA-UCAM and WACCM; these model acronyms are defined in Morgenstern et al. (2010).Two models (CAM3.5 and E39C) were excluded because their upper boundaries were at pressures above 1 hPa.A third model (UMSLIMCAT) was excluded because the file containing the zonal and monthly mean temperature data did not have a latitude array.For model data sets containing a missing data flag for points below ground, those points were filled using tem-peratures from the first good data point above.Since such points occur at high latitudes (Antarctica) and at pressure levels corresponding to altitudes far below the peak of the SSU weighting functions, their impact on the SSU-weighted near-global mean is negligible.The CCMVal2 models are described in Morgenstern et al. (2010).
Results
This section is divided into two parts.The first part (Sect. 3.1) pertains to the merging of the SSU and AMSU data sets.Since this is achieved using MIPAS data as a transfer function, we begin by demonstrating that MIPAS is in good agreement with SSU and AMSU.We then describe the algorithm used to merge SSU and AMSU, and present the merged results.The second part (Sect.3.2) is an analysis of temperature trends for the post-2000 time period when the AMSU, MIPAS and MLS data are all available, as well as a comparison of our "extended" SSU results to other longterm data sets, including models.All results presented here are for monthly and near-global means (75 • S to 75 • N).This particular latitude range is dictated by the use of the MIPAS data in merging the SSU and AMSU data sets.
Comparisons to MIPAS
In order to compare MIPAS to SSU and AMSU, the MIPAS temperatures must be averaged in the vertical using the SSU and AMSU weighting functions, which are shown in the left and right panels of Fig. 1, respectively (thick solid curves).
For simplicity we follow Thompson et al. (2012) in using fixed weighting functions, rather than attempting to account for possible state dependence.The three SSU weighting functions (channels 1-3) peak at approximately ∼ 30, 39 and 44 km.The six stratospheric AMSU weighting functions (channels 9-14) peak at ∼ 17, 20, 25, 30, 37 and 42 km.The other curves in the left panel of Fig. 1 will be discussed in due course.
The vertical averaging is performed on a log-pressure height grid, with the limits of integration being the corresponding height range of the MIPAS data: 300 hPa (∼ 8.4 km) and 0.1 hPa (∼ 64.5 km).The vertically averaged temperature for channel n (denoted T n ) is therefore given by where t is time in months and z is the log-pressure height [z = −H ln(p/p s ), with H = 7 km and p s = 1000 hPa], and z b and z t are the limits of integration, namely z(300 hPa) and z(0.1 hPa).Before computing the vertical average, the weighting functions are normalized so that their vertical integral from z b to z t equals 1.By excluding the lower troposphere and upper mesosphere in Eq. ( 1), the full vertical integrals of the weighting functions are approximated.This approximation is less accurate for SSU than it is for AMSU since the SSU weighting functions extend down lower and up higher than for AMSU (Fig. 1).To investigate the possible impact of this incomplete vertical averaging using the SSU weighting functions, we first filled the MIPAS temperature data below 300 hPa and above 0.1 hPa using the corresponding CMAM30 data, and then performed the integration using z b = 0 km to z t ∼ = 100 km.The resulting vertically averaged temperatures for the three SSU channels (not shown) are virtually indistinguishable from those obtained by averaging only over the MIPAS domain (8-65 km), leading us to conclude that the effect of the incomplete vertical sampling of the integral given by Eq. ( 1) is negligible.
Atmos
Figure 2 compares the SSU-weighted MIPAS temperatures to SSU for 2002-2007, the years when the two instruments overlap.The thick and thin lines denote, respectively, the results with and without the seasonal cycle included, where the seasonal cycle is given by the first three harmonics of the annual cycle.The MIPAS time series have each been offset by a constant amount with respect to SSU, with the offset being determined so that the mean difference between the deseasonalized MIPAS and SSU time series is zero over the 4-year overlap period.The offsets are small: ∼ −0.2 K for channels 1 and 3 and ∼ −0.7 K for channel 2. There is very good agreement between MIPAS and SSU for the seasonal cycle; however, as will be discussed later, the MIPAS data exhibit a larger trend than does SSU (see Fig. 9).
Figure 3 shows the corresponding results for AMSU and AMSU-weighted MIPAS.As in Fig. 2, the MIPAS results are offset with respect to AMSU, with the magnitude of the off- sets again all being less than 1 K.As seen with SSU, there is very good agreement between MIPAS and AMSU for the seasonal cycle, but with MIPAS exhibiting stronger cooling in the upper three channels (12-14).We will discuss this trend difference in Sect.3.2 when we compare the trends to MLS.
Algorithm for merging SSU and AMSU
Since the SSU and AMSU weighting functions differ in shape and height of the maxima, the two data sets must be combined by taking suitably weighted averages of the different channel temperatures.One way this might be done would be purely statistically, fitting the deseasonalized temperatures of instrument A to instrument B using multiple linear regression as follows: where T A n (with the hat) denotes the fitted deseasonalized temperature from channel n of instrument A, T B m denotes the actual deseasonalized temperature from channel m of instrument B, and the constants α m are the coefficients determined using a least-squares fit.However, this method, which we shall refer to as the temperature-fit method, is problematic because the time series used in computing the fit (T B m ) are highly linearly dependent, as is shown in Fig. 4 in the case where B = AMSU.The top panel shows the deseasonalized temperature anomalies for the six channels superimposed.Adjacent or near-adjacent channels are highly correlated.Given the overlap in the AMSU weighting functions (W ), some correlation is to be expected.For example, for the highest three channels, the overlap between W 13 and W 14 is ∼ 61 %, between W 12 and W 13 it is ∼ 60 % and between W 12 and W 14 it is ∼ 31 %.However, the fact that the correlations are actually close to unity for those pairs of channels, i.e. r(13,14) ∼ 0.96, r(12,13) ∼ 0.96 and r(12,14) ∼ 0.88, suggests that they also reflect strong vertical relationships in the variability of global-mean temperature.A similarly high correlation of 0.91 is found between channels 9 and 10, while channel 11 is highly correlated with both channel 10 (r ∼ 0.90) and channel 12 (r ∼ 0.87).Thus, there appear to be only two degrees of freedom among the six channels, representing the upper stratosphere and the lower stratosphere.Similarly high correlations are found, albeit with more noise, in the CMAM30 data shown in the bottom panel of Fig. 4, which is plotted over the 1979 to 2011 period.The high correlations between the different channel temperatures means that the system of equations defined by Eq. ( 2) is highly underconstrained, and that there are no unique values of the coefficients α m .This was verified in a calculation in which one of the α m 's was specified and the remaining ones were computed, which yielded an almost identical temperature time series yet with very different coefficients.For this reason the temperature-fit method will not be used.
An alternative method, which is the method we have adopted, is to determine the fit coefficients from the weighting functions.Such a method has also been examined by the Remote Sensing Systems group, which has processed and combined the SSU data (C.Mears, personal communication, 2014).Using the weighting functions to generate the temperature fit coefficients makes physical sense since it is the channels of instrument B that have weighting functions peaking closer to the peak of a given weighting function of instrument A that should be given the most weight in the fit.Another advantage of this method is that it does not require the two temperature data sets to overlap in time, as does the temperature-fit method.
The weighting function fit method proceeds as follows.We first express the channel-n weighting function of instrument A as a linear combination of the weighting functions of instrument B: where the hat denotes the fitted weighting function.The constants β m are computed using least squares and are normalized so that m 2 m=m 1 β m = 1.The deseasonalized temperatures for channel n of instrument A are then constructed as follows: where the constants c n represent an additive relative bias between the two measurements.
The dotted curves in the left panel of Fig. 1 are the fits to the three SSU weighting functions using the six AMSU weighting functions (m 1 = 9 and m 2 = 14), computed using Eq. ( 3), but before the β m 's are normalized.The values of the unnormalized β m 's are given in Table 2.The reason that they do not sum to unity is due to incomplete sampling of the target weighting function.As seen in Fig. 1 the fits to SSU channels 1 and 2 are excellent, with the only significant departures from the true weighting function occurring below ∼ 10 km and above ∼ 50 km, where the SSU weighting functions do not have much strength anyway.Not surprisingly, the fit is poorest for the upper SSU channel 3 since there are no AMSU weighting functions that peak above it.The corresponding fits using the normalized β m 's are given by the thin solid curves.
The reason for normalizing the β m 's becomes apparent by considering the case of a constant temperature T o profile with an assumption of no relative bias between instruments A and B, in which case it can be easily shown that (5) Since we have assumed no relative bias between the two instruments, c n should vanish.This will only occur if To compute c n we use temperatures from a third instrument (C), which overlaps in time with instruments A and B and is of high enough vertical resolution that a sufficiently accurate representation of the temperatures obtained from the weighting functions of both instruments A and B can be computed.In this case, instrument C provides a transfer function between instruments B and A, whereby c n can be expressed as the sum of three biases, namely
Atmos
where where the angle brackets denote a time average, and, as before, all temperatures are deseasonalized.For clarity, we have omitted the subscript n since it is common to all terms.The quantities T AC and T BC denote the temperatures of instrument C that have been averaged in the vertical using the weighting functions for instruments A and B, respectively.The first term (E A-C ) in Eq. ( 6) denotes the relative bias between the temperature of instrument A and the instrument A-weighted temperature of instrument C. The second term (E C-B ) is the same but for instrument B (with a minus sign), where the summation over m is required since we are computing the temperature bias for channel n of instrument A.
The third term (E W ) is the weighting function bias, which accounts for the error in the fits to the weighting functions; this term must be evaluated using the height-dependent temperatures from instrument C. If the period over which the time averages of the different terms in Eq. ( 6) are computed is the same, then in which case instrument C is not needed.The advantage of Eq. ( 6) over Eq. ( 10), however, lies in the fact that instrument C enables us to separate the relative biases into different components.Moreover, if there is a gap in time between instruments A and B, but instrument C still overlaps with instruments A and B, then Eq. ( 10) could not be used.
Merging SSU and AMSU using MIPAS
Here we consider only the case where we extend SSU forward in time, which means that A = SSU and B = AMSU in Eq. ( 4).While it is certainly possible to extend AMSU backward (i.e.A = AMSU and B = SSU), we do not do so because the weighting function bias terms (E W ) are substantially larger when fitting the three broad SSU weighting functions to the six narrower AMSU weighting functions.
Table 3 shows the different bias terms given in Eq. ( 6), which are used to compute c n in Eq. ( 4).The bottom row lists the sum of the three biases, which are the c n 's.The magnitudes of the individual bias terms are all less than 1.2 K, with some cancellation between the different terms.The E SSU-MIPAS term is identical to the offsets between SSU and SSU-weighted MIPAS shown in Fig. 2. The weighting function term E W is largest for channel 3 since the fit is the poorest (see Fig. 1). Figure 5 shows the difference between the deseasonalized SSU temperatures and the fitted tempera- .The horizontal lines are the constants c n used in Eq. ( 4).See text for more details.
tures computed using AMSU as a function of time, and indicates that the relative biases (whose means are the c n 's) are fairly stable in time.The standard deviations of the differences, which provide a conservative measure of the uncertainty of the fits, are 0.06, 0.09 and 0.09 K for channels 1, 2 and 3, respectively.These values are clearly much smaller than the dynamic range seen in Fig. 6, which shows the SSU data (black) and the corresponding extension derived from AMSU and MIPAS using Eqs.( 4) and ( 6) for the 1979-2012 time period.These fit uncertainties have been propagated into our trend uncertainties; the effects are small although The SSU and fitted SSU deseasonalized temperature time series can be combined into a single time series, which we shall refer to as the "extended SSU" time series T SSU n (denoted with a tilde), as follows: where T SSU n is the time series computed using Eqs.( 4) and ( 6), and the time-dependent coefficients α and β are given by
Stratospheric temperature trends
In this section we take a closer look at the temperature trends in the first decade of this century using not only the AMSU and MIPAS data but also MLS.We then take a step back and re-examine the long-term trends in the context of model simulations.
Figure 8 compares AMSU temperatures (black) to the AMSU-weighted results computed from MIPAS (blue) and MLS (red), with the latter two being offset with respect to AMSU for display purposes.The offsets are computed so that the time means in the overlap period are identical to those of AMSU.As remarked earlier, the AMSU-weighted MIPAS temperatures exhibit stronger cooling in the upper channels than do AMSU.MIPAS is known to have a drift due to time-dependent detector nonlinearity, which had not been considered for the calibration of radiance spectra used here (e.g.Eckert et al., 2014).A latitude-and altitude-dependent drift of MIPAS temperatures relative to MLS of the order of −1 K decade −1 has been identified for most parts of the stratosphere (Eckert, 2012), which is in agreement with the trend differences found here.A refined calibration, which takes the time dependence of the detector nonlinearity into account, is currently under investigation.The MLS results, however, do not show such an effect, and are in fact in better agreement with AMSU on a year-to-year basis.
The temperature trends from MIPAS and MLS computed from 2004 to 2012 are shown in Fig. 9 as a function of height.Two types of uncertainties are shown.The first assumes the data points are independent (thick error bars and dark shading); this is appropriate when comparing trends between different data sets over the same time period, where the differences will be mainly instrumental.The second takes into account serial correlation using the lag-1 autocorrelation coefficient to estimate the reduced number of degrees of freedom following Santer et al. (2000) (thin error bars and light shading).Since serial correlation is a property of the atmosphere, not of a particular instrument, the lag-1 autocorrelation coefficient computed from the MLS data is used in calculating the reduced number of degrees of freedom for the sparser MIPAS data.Although the time period is relatively short, global-mean temperature exhibits limited internal variability (since it is under radiative control) and so the uncertainties in the trends in the upper stratosphere are relatively low.Superimposed in Fig. 9 are the AMSU trends (black circles) and the AMSU-weighted MLS and MIPAS trends (black squares).The weighted trends are seen to lie along a vertically smoothed version of the profile trends.As was seen in Fig. 8, the agreement between MLS and AMSU is excellent (left panel), while MIPAS shows substantially stronger cooling trends in the upper stratosphere (right panel).The same conclusions can be inferred from the trends from extended SSU (red circles) and SSU-weighted MLS and MIPAS (red squares), computed for the 2004-2012 period, which are also shown in Fig. 9.
Although MLS uses as its a priori an analysis that has assimilated AMSU radiances, the impact of AMSU on the MLS temperatures is thought to be relatively small since the MLS retrievals are more susceptible to vertical variations much shorter than the widths of the AMSU weighting functions (M.Schwartz, personal communication, 2014).We therefore believe that the good agreement between MLS and AMSU is real and therefore an independent validation of the MLS data, while the strong cooling in the MIPAS data is attributed to its known drift.It is not clear whether the zig-zag vertical structure seen in the MLS profile trends is real, and we note that the model trends (cf.Fig. 10) do not exhibit such a structure.
We now return to Fig. 7, which shows the extended SSU temperature anomalies (with respect to 1979-1982) plotted from 1979 to 2012, along with those from the CMIP5 models.Near-global-mean model temperatures are constructed from monthly means and vertically averaged using the SSU weighting functions using Eq. ( 1 malizing the weighting functions to have a vertical integral of unity over the data height range.As explained in Sect.2.6 (see also Table 1), the CMIP5 models with poor vertical resolution in the stratosphere are not projected onto all three SSU channels, which explains why more grey curves are present in the bottom panel than in the top panel.The agreement between the CMIP5 multi-model mean (black) and extended SSU (red) is remarkably good.The good agreement from 1979 to 2006 has arisen, of course, because we are using version 2 of the NOAA SSU data.However, as noted earlier, Zou et al. (2014) did not compare SSU version 2 to models; here we do.After 2006 (the end of the SSU data record) the extended SSU temperatures also compare favourably with the CMIP5 models, with both exhibiting continued stratospheric cooling followed by warming starting in about 2009.
The cooling is due to a combination of the effects of increasing CO 2 and the declining phase of the previous solar cycle, while the warming is presumably due to the current solar cycle, which commenced in 2008.Note that the CMIP5 RCP simulations included a solar cycle by repeating the last solar cycle (1996)(1997)(1998)(1999)(2000)(2001)(2002)(2003)(2004)(2005)(2006)(2007)(2008) into the future.
Figure 10 compares the long-term temperature trends for extended SSU and the CMIP5 models (1980-2012; left) and for extended SSU and the CCMVal2 models (1980-2005; right).For 1980-2012 the trends for extended SSU are −0.63 ± 0.13, −0.71 ± 0.15 and −0.80 ± 0.17 K decade −1 for channels 1, 2 and 3, respectively.The 95 % uncertain-Atmos.Chem.Phys., 15, 9271-9284, 2015 www.atmos-chem-phys.net/15/9271/2015/ties, which are computed the same way as in Fig. 9, take into account serial correlation.The extended SSU cooling trends for 1980 to 2005 are ∼ 9 % larger than those for 1980-2012 for channel 1 and ∼ 15 % larger for channels 2 and 3.This reflects the much weaker cooling rate over the second half compared with the first half of the extended record.In all cases, the SSU-weighted model trends (squares) agree with the observed trends within the uncertainties (error bars).The cooling increases with increasing altitude for both the models and the observations.Although the channel 3 extended SSU trend is considerably weaker than the CCMVal2 trend profile at the altitudes where the weighting function peaks (∼ 44 km), the channel 3 CCMVal2 trend is entirely consistent with the extended SSU trend.This difference between the weighted and profile trend is due to the large curvature in the profile trend.This illustrates why nadir measurements should never be directly compared with profile measurements.For the CMIP5 models the curvature of the profile trend is much weaker than for the CCMVal2 models, which explains why the weighted and profile trends are in much closer agreement.The lack of strong cooling above ∼ 40 km in the CMIP5 models is presumably a result of coarser stratospheric resolution and lower upper boundaries than the CCMVal2 models, which also have more comprehensive physical parameterizations for the middle atmosphere.
Figure 11 shows near-global-mean temperature differences for extended SSU and the CCMVal2 multi-model means for the period of strong ozone depletion (1986-1995; left) and the start of ozone recovery (1995-2004; right).(Note that for these periods, the merging is irrelevant and the comparison is basically with the version 2 NOAA SSU record itself.)We prefer differences to linear trends for this purpose because of the highly nonlinear time evolution.To minimize the impact of solar variability, which clearly has a large modulating effect on the long-term cooling (e.g.Fig. 7), we compare the two recent decadal periods between solar minima.For extended SSU, distinct cooling of about −0.7 K is seen at all levels over 1986-1995, whereas negligible cooling is found over 1995-2004.This highlights the important role of ozone depletion in the observed stratospheric cooling up to the mid-1990s.A similar though somewhat less pronounced contrast between the two periods is seen in the temperature differences from the models.
Conclusions
We present a physically based method for merging nearglobal-mean brightness temperatures from SSU and AMSU using measurements from a third instrument, in this case MIPAS, which has high enough vertical resolution that it can sufficiently accurately simulate the vertically weighted temperatures of both SSU and AMSU.The SSU temperatures are expressed as a linear combination of AMSU tempera- 1986-1995 (left) and 1995-2004 (right).The CCMVal2 temperature difference profiles are given by the black curves, the SSU-weighted CCMVal2 differences by the black squares and the extended SSU differences by the open circles.The latter two are plotted at the heights of the maxima of the three SSU weighting functions, ranging from channel 3 at the top to channel 1 at the bottom; the symbols are offset slightly in the vertical for clarity.The error bars denote the 95 % confidence levels.The differences are computed from data that have been averaged over 2 years spanning each of the two end points.
tures, with the coefficients determined by fitting the AMSU weighting functions to the SSU weighting functions.The MIPAS data are used in matching the SSU temperatures and the AMSU-simulated SSU temperatures.Multiple linear regression does not work for merging the SSU and AMSU temperatures because the AMSU channels are not sufficiently linearly independent (in a statistical sense) and thus the determination of the regression coefficients is underconstrained.Part of the correlation between the channels arises from the overlap of the weighting func- The relative bias between SSU and the AMSU-simulated SSU channels is expressed as a sum of three relative biases: between SSU and MIPAS, between the SSU channels and the AMSU-simulated SSU channels (both applied to MIPAS data), and between MIPAS and AMSU.In this way, MIPAS is used as a transfer function between SSU and AMSU.
In this particular case, SSU and AMSU overlap in time and so a transfer function is not strictly required, but our method would be applicable in cases where the two data sets to be merged did not overlap in time, so long as there was a higher-resolution data set that bridged between them.Also, this method allows for quantification of the error incurred by the approximation of the SSU weighting functions by the AMSU weighting functions.
MIPAS was found to track the three SSU channels and the six AMSU channels very well in time, especially in their seasonal cycle.This provides well-defined relative biases between MIPAS and the two nadir instruments, allowing for the merging of the two nadir records to be performed with confidence.In particular, the standard deviation of the differences during the overlap period is less than 0.1 K for all three SSU channels, which is much less than the dynamic range of the time series.Thus, uncertainties in the merging make only a very small contribution to the uncertainties in the long-term changes.The relative bias that results from imperfect approximation of the SSU weighting functions by the AMSU weighting functions is a significant contributor to the overall relative bias for SSU channels 1 and 2, and the dominant contributor for channel 3.Although the relative bias for channel 3 seems stable over the overlap period (e.g. the correlation coefficient between SSU channel 3 and the AMSUsimulated channel 3 is 0.895), it does introduce a potential systematic uncertainty into the extension of SSU channel 3 into the future using AMSU.
The coefficients β m and relative biases c n developed here can be used to continuously extend the NOAA version 2 SSU record forward in time using AMSU, as the AMSU record lengthens.
The near-global-mean linear temperature trends for the extended SSU data set for 1980-2012 are −0.63 ± 0.13, −0.71 ± 0.15 and −0.80 ± 0.17 K decade −1 for channels 1, 2 and 3, respectively.These trends are in agreement with those from CMIP5 model simulations over this period.
Because global-mean temperature exhibits relatively little interannual variability, compared to the temperature in particular latitude bands, trends can be determined with confidence even over relatively short records.We analyse trends over the period 2004-2012 when data from a second vertically resolved temperature data set, Aura MLS, are available.While MLS temperature trends are essentially identical to those of AMSU, the current version of MIPAS data shows a cooling trend relative to AMSU, which is in agreement with preceding drift analyses (Eckert, 2012).This does not compromise the use of MIPAS as a transfer function between SSU and AMSU, because the relative biases are computed for a particular period, nor for the use of MIPAS data to examine seasonal cycles and interannual variability.However, this version of MIPAS temperature should not be used to determine long-term trends.On the other hand, the high level of agreement between MLS and AMSU provides confidence in both data sets for trend analysis.Over the 2004-2012 period these data show a statistically significant cooling ranging from ∼ 0.6 ± 0.3 K decade −1 for channel 14 to ∼ 0.3 ± 0.2 K decade −1 for channel 12, and no statistically significant change for the three lowest channels 9, 10 and 11.
It is worth noting that even the narrower weighting functions that characterize the AMSU channels, relative to the deeper weighting functions of the SSU channels, strongly smooth the vertical structure seen in the MLS trends.Thus, nadir measurements should never be compared with profile trends derived from higher-vertical-resolution instruments or models; the latter must always be first filtered through the weighting functions of the nadir measurements.
The long-term stratospheric near-global-mean temperature record since 1979, which is represented by the SSU channels, exhibits considerable temporal structure associated with cooling from increasing CO 2 and from ODS-induced ozone depletion, the effects of the solar cycle, and warming from volcanic eruptions.Version 2 of the NOAA SSU record is found to be consistent with the behaviour seen in model simulations.This is in contrast to the findings of Thompson et al. (2012), who examined version 1 of those data.In particular, the (extended) SSU record and the CCMVal2 models show the same contrast in cooling trends between the ozone depletion and recovery periods, with weak cooling over 1995-2004 compared with the large cooling seen in the period 1986-1995 of strong ozone depletion.The extended SSU data show a continued cooling beyond the end of the SSU record, with a small warming in the last few years (up to 2011) which is presumably associated with the solar cycle.Both features are consistent with the high-top CMIP5 models.Thus, the extended SSU global-mean temperature record constructed here, which covers 1979-2012, is consistent with physical expectations of the vertical structure and temporal variations in the rates of stratospheric cooling over this period.
Figure 1 .
Figure1.Vertical weighting functions (thick solid curves) for SSU (left) and AMSU (right).The thin solid and dotted curves in the left panel are, respectively, the normalized and unnormalized fits to the SSU weighting functions obtained using the AMSU weighting functions using Eq.(3); see text for details.
Figure 2 .
Figure2.SSU (red) and SSU-weighted MIPAS (blue) temperatures for channels 1-3.The thin curves are the deseasonalized temperatures.The weighted MIPAS temperatures are offset by a constant amount so that the mean difference between the deseasonalized SSU and MIPAS time series is zero; the value of this offset is labelled in each panel.In this and all other figures, monthly and nearglobal (75 • S to 75 • N) means are shown, and the tick marks directly above each year label on the horizontal axes are for January of that year.
Figure 4 .Figure 5 .
Figure 4. Top: deseasonalized AMSU temperature anomalies with respect to the 1999-2011 mean for channels 9 to 13 and the 2001-2011 mean for channel 14, with the variance of each channel normalized to 0.25 K 2 .Bottom: same but for AMSU-weighted CMAM30 for the 1979 to 2011 time period.The correlation coefficient between the different channels is labelled in each panel.
Figure 6 .Figure 7 .
Figure6.Deseasonalized temperatures for SSU channels 1-3 (black) and the fits computed from AMSU (red) using Eqs.(4) and (6).The insets show blow-ups of the time series in the overlap period (with the SSU time means subtracted off), along with the correlation coefficients (r) between each pair of curves.
t 2 and β = 1 − α, where t 1 = 2001.00and t 2 = 2006.25 are the start and end dates of the overlap period between SSU and AMSU channel 14.The extended SSU temperatures, expressed as anomalies with respect to the 1979-1982 mean, are shown in Fig. 7 (red curves).The other curves in this figure will be discussed in the next section.
Figure 8 .Figure 9 .
Figure8.Deseasonalized temperatures for AMSU channels 9-14 (black) and the corresponding AMSU-weighted temperatures computed from MIPAS (blue) and MLS (red).The constant offsets between MIPAS and AMSU and between MLS and AMSU are labelled in each panel.
Figure 10 .Figure 11 .
Figure 10.Temperature trends for extended SSU and the CMIP5 multi-model mean for 1980-2012 (left) and extended SSU and the CCMVal2 multi-model mean for1980-2005 (right).The trend profiles and weighted trends for the models are given by the lines and squares.The latter are plotted at the heights of the maxima of the three SSU weighting functions; for clarity the symbols for the models are offset slightly with respect to extended SSU.The error bars denote the 95 % confidence levels.
compare the MIPAS and AMSU trends to those from MLS on the Aura satellite, and over the extended SSU record.The extended SSU record is found to be in agreement with hightop coupled atmosphere-ocean models over the 1980-2012 period, including the continued cooling over the first decade of the 21st century.Conclusions are drawn in Sect. 4.
Table 3 .
The three bias terms in the expression for c n in Eq. (6) for n = 1-3 in the case where instrument A = SSU, B = AMSU and C = MIPAS.Units are K.The sum of the terms, which is listed in the bottom row, is the constant c n used in Eq. (4).See text for details. | 2017-11-18T21:08:12.552Z | 2015-08-20T00:00:00.000 | {
"year": 2015,
"sha1": "db1f712b7fc79ae919ff92c94e7aabd9c9e05fb6",
"oa_license": "CCBY",
"oa_url": "https://acp.copernicus.org/articles/15/9271/2015/acp-15-9271-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "743511f0265973f00fc2c9c74aeb4e6cbfa9b857",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
18271054 | pes2o/s2orc | v3-fos-license | Secretion of Novel SEL1L Endogenous Variants Is Promoted by ER Stress/UPR via Endosomes and Shed Vesicles in Human Cancer Cells
We describe here two novel endogenous variants of the human endoplasmic reticulum (ER) cargo receptor SEL1LA, designated p38 and p28. Biochemical and RNA interference studies in tumorigenic and non-tumorigenic cells indicate that p38 and p28 are N-terminal, ER-anchorless and more stable relative to the canonical transmembrane SEL1LA. P38 is expressed and constitutively secreted, with increase after ER stress, in the KMS11 myeloma line and in the breast cancer lines MCF7 and SKBr3, but not in the non-tumorigenic breast epithelial MCF10A line. P28 is detected only in the poorly differentiated SKBr3 cell line, where it is secreted after ER stress. Consistently with the presence of p38 and p28 in culture media, morphological studies of SKBr3 and KMS11 cells detect N-terminal SEL1L immunolabeling in secretory/degradative compartments and extracellularly-released membrane vesicles. Our findings suggest that the two new SEL1L variants are engaged in endosomal trafficking and secretion via vesicles, which could contribute to relieve ER stress in tumorigenic cells. P38 and p28 could therefore be relevant as diagnostic markers and/or therapeutic targets in cancer.
Introduction
Multiple homeostatic mechanisms that control protein folding and assembly and promote the disposal of defective proteins operate in distinct cellular compartments to afford protection from endogenous proteotoxic stress [1][2][3][4]. The endoplasmic reticulum (ER) is the folding and assembly site for resident structural proteins and enzymes, as well as for secretory and plasma membrane proteins [5]. This remarkable workload is managed by efficient and high-fidelity protein folding and misfold-correction systems, based on ATP-dependent chaperones and disulfide isomerases, in parallel with quality control mechanisms that allow Golgi transit only to properly folded proteins [6]. Furthermore, clearance of aberrant proteins retained in the ER is mediated through the ERassociated degradation (ERAD) pathway [7], a multi-step process which requires recognition of defective proteins, retro-translocation to the cytosolic side of the ER membrane, ubiquitination and degradation by the 26S proteasome [8].
Nonetheless, the cellular protein-folding capacity and the ERAD pathway may be impaired and/or overloaded by a variety of pathological conditions that perturb energy and calcium homeostasis, increase secretory protein synthesis and/or interfere with protein glycosylation and disulfide bond formation [6,9]. In such cases the intralumenal accumulation of unfolded/malfolded proteins determines ER stress, which in turn activates a complex cascade of survival signaling pathways, collectively termed unfolded protein response (UPR). This aims at relieving ER stress by attenuating the rate of protein synthesis and by up-regulating the protein folding enzymes, the ERAD machinery and the secretory capacity [6,10,11]. If homeostasis cannot be restored, UPR-activated machineries can trigger death/senescence programs [12].
It is increasingly evident that the UPR has a major role in cancer, where it is required to maintain the malignant phenotype and to develop resistance to chemotherapy [13]. In fact cancer cells must adapt to nutrient starvation and hypoxia, which affect cellular redox status and availability of energy from ATP hydrolysis. This is expected to compromise their protein folding capacities, predisposing to ER stress [14][15][16]. Hence, upregulation of the ERAD-UPR pathways may substantially contribute to the complex cellular adaptations needed for cancer progression [17,18]. In this regard it is known that many ERresident proteins are deregulated, post-translationally modified, abnormally secreted and/or cell surface re-localized in various cancer types [13,[19][20][21].
We report here the identification, characterization and subcellular localizations of two novel anchorless endogenous SEL1L variants, p38 and p28, studied in the breast cancer cell lines SKBr3 and MCF7, the multiple myeloma line KMS11 and the non-tumorigenic lines MCF10A (breast) and 293FT (embryo kidney). We found that: i. p38 and p28 are encoded by the 59 end of the SEL1L gene; ii. p38 is up-regulated and constitutively secreted in the cancer cells, differently from the non-tumorigenic MCF10A line; iii. p28 is expressed only in the poorly differentiated SKBr3 breast cancer line; iv. ER stress/UPR strongly enhance p38 secretion in the cancer cells; v. N-terminal SEL1L is present in secretory and degradative compartments of SKBr3 and KMS11 cells, and in vesicles released into the extracellular space. Overall, the biochemical and morphological evidence supports the view that SEL1L p38 and p28 are implicated in pathways linking ER stress/UPR to endosomal trafficking and to secretion via extracellularly-shed vesicles. Furthermore the expression of p38 and p28 and their release into the culture medium is upregulated in tumorigenic relatively to non-tumorigenic cells, suggesting cancer-related functions.
Constructs
Tagged TPD52 isoform 1 constructs were generated by cloning the full-length TPD52 isoform 1 coding sequence, fused with a myc or GFP tag at the 39 end, into pCDNA3.1myc-Hys(-)A or peGFPN3 vectors, respectively (Invitrogen, S. Giuliano M.se, Italy). The Myc-tagged SEL1LB constructs were previously described [26]. Western blotting, immunoprecipitation, analysis of culture supernatants, N-glycosidase F and endoglycosidase H digestion Monoclonal SEL1L antibody was raised against the N-terminal peptide of human SEL1L [34]. Affinity-purified polyclonal antibody to the SEL1L N-terminus was kindly provided by Dr. H.L. Ploegh [35]. Affinity-purified polyclonal C-terminal SEL1L antibody was raised against a bacterially-expressed recombinant fragment encoding amino acids 575-738 of human SEL1L (Primm, Milan, Italy). Monoclonal anti-vinculin and anti-myc antibodies were purchased from Sigma-Aldrich (Sigma-Aldrich, Milan, Italy). Polyclonal anti-TPD52 was kindly provided by Prof. J. Byrne (University of Sydney, Sydney, NSW, Australia). Cells were lysed in 10 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1% NP40, containing protease inhibitors (Pierce, Celbio, Pero, Italy). Protein concentrations were determined by the Bradford assay; samples were resolved on SDS-polyacrylamide gels, blotted onto PVDF membranes, probed with specific antibodies and developed with ECL (Genespin, Milan, Italy). For immunoprecipitation cell lysates were pre-cleared and incubated with antibodies immobilized on protein G-sepharose (Invitrogen S. Giuliano M.se, Italy). Immunoprecipitates were washed twice with 10 mM Tris-HCl pH 7.4, 150 mM NaCl, 0,25% NP40, once with 5 mM Tris-HCl, and eluted with sample buffer before gel electrophoresis. For analysis of supernatants cells were washed and incubated with OPTIMEM (Invitrogen, S. Giuliano M.se, Italy) for 16 hrs. Supernatants were recovered, centrifuged, precipitated with 10% TCA, re-suspended with 1 M Tris-HCl pH 7.4 and resolved on SDS polyacrylamide gels. All Western blots were performed using the X-BLOT Chamber (www.isenet.it).
RT-PCR
For digestion by N-glycosidase F (PNGase F) or endoglycosidase H (endo H), cell lysates were denatured by heating at 95uC in 0.05% SDS, 0.1% mercaptoethanol for 10 min and then incubated at 37uC for 1 h with or without PNGase F or endo H (New England Biolabs, Celbio, Pero, Italy), according to the supplier's indications.
Immunofluorescence microscopy
SKBr3 cells, grown on coverslips, untreated or treated with DTT as described above, were fixed with 4% paraformaldehyde in PBS for 30 min, washed in 0.1 M glycine for 20 min, and permeabilized in 0.1% Triton X-100 for additional 5 min. To investigate the subcellular localizations of SEL1L, the cells were incubated with the N-terminal anti-SEL1L monoclonal antibody [34] and with polyclonal antibodies against the ER marker calreticulin (Affinity Bioreagents, Breda, The Netherlands) and the Golgi marker giantin (Covance, Princeton, NJ, USA). The nuclei were stained with 4,6-diamido-2-phenylindole (DAPI, Sigma-Aldrich, Milan, Italy). Primary antibodies were visualized using fluorescein isothiocyanate-conjugated goat anti-mouse IgG (Cappel Research Products) or Texas-Red-conjugated goat anti-rabbit IgG (Jackson Immunoresearch Laboratories) for 30 min at room temperature. Cells were analyzed using an Apotome Axio Observer Z1 inverted microscope (Zeiss, Oberkochen, Germany), equipped with an AxioCam MRM Rev.3 at 40X magnification. Colocalization of fluorescence signals was analyzed with AxioVision 4.6.3 software. Image analysis was performed using Adobe Photoshop.
Cryoimmunoelectron microscopy
Cells processed for cryoimmunoelectron microscopy were grown as above and fixed in 2% paraformaldehyde, 0.2% glutaraldehyde in 0.1 M phosphate buffer, pH 7.4, for 2 hrs at 25uC. Cells were scraped off the coverslips, centrifuged, embedded into 10% gelatin (Sigma-Aldrich) in 0.1 M PBS, pH 7.4 and solidified on ice. After infusion in 2.3 M sucrose overnight at 4uC, cell blocks were mounted on aluminum pins and frozen in liquid nitrogen. Ultrathin cryosections (60 nm) were cut at 2120uC using an Ultracut EM FC6 cryoultramicrotome (Leica Microsystems, Vienna, Austria), collected with 1% methylcellulose in 1.15 M sucrose and single-or double-immunolabelled with primary antibodies. Bound antibodies were visualized using goat anti-mouse conjugated with 5-or 15-nm colloidal gold (British BioCell International, Cardiff, UK) or by protein-A conjugated with 10-nm colloidal gold (supplied from G.Posthuma and J. Slot, Utrecht, The Netherlands). Immunolabeling was performed with the following primary antibodies: monoclonal anti-SEL1L Nterminus [34], polyclonal anti-SEL1L N-terminus [35], polyclonal anti SEL1L-C terminus, polyclonal anti-calreticulin (Affinity Bioreagents), anti-CD63 monoclonal H5C6, developed by J.T. August and J.E.K. Hildreth, obtained from the Developmental Studies Hybridoma Bank developed under the auspices of the NICHD and maintained by The University of Iowa, Dept. of Biology (Iowa City, IA 52242), and monoclonal anti-c-Myc (Sigma-Aldrich) for SEL1LBmyc-transfected 293 FT cells [26]. Single and double immunolabeling were performed as described previously [23,36]. Cryosections were analyzed with a Philips CM10 transmission electron microscope.
Off-gel proteins fractionation
Semi-liquid phase isoelectrofocusing fractionation was performed on an Off-Gel 3100 apparatus (Agilent Technologies, Cernusco, IT) using a 24 cm strip with immobilized pH gradient gels in the 4-7 range. Total cell lysate samples (360 ml, corresponding to 1300 mg of proteins) were mixed with 3240 ml of isoelectric focusing buffer (Agilent Technologies) and loaded onto the apparatus. Protein fractionation was performed for 26 hrs with maximum 8000 Volt for a total of 60,000 V/hrs, according to the manufacturer's protocol. The resulting liquid fractions (150 ml) with 0.25 pH range were collected and aliquots of proteins further resolved on 10% acrylamide SDS-PAGE for Western blot probing.
Protein identification by MALDI-TOF mass spectrometry (MS) analysis
Bands of interest from SDS-PAGE were excised from gels, reduced, alkylated and digested overnight with bovine trypsin (Roche, Milan, Italy), as previously described [37]. One ml (1 ml) aliquots of the supernatant were used for mass analysis using the dried droplet technique and a-cyano-4-hydroxycinnamic acid as matrix. Mass spectra were obtained on a MALDI-TOF Voyager-DE STR mass spectrometer (Applied Biosystem, Foster City, CA). Alternatively, acidic and basic peptide extraction from gel pieces after tryptic digestion was performed and the resulting peptide mixtures subjected to a single desalting/concentration step before MS analysis over Zip-TipC18 (Millipore Corporation, Bedford, MA, USA). Spectra were internally calibrated using trypsin autolysis products and processed via Data Explorer software. Proteins were unambiguously identified by searching a comprehensive nonredundant protein database of the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov/) and the Mass Spectrometry protein sequence DataBase (MSDB, http:// msdn.microsoft.com/en-us/library/ms187112.aspx), selected by default using in house the software programs ProFound v4.10.5 and Mascot v1.9.00, respectively [38,39]. One missed cleavage per peptide was allowed, and an initial mass tolerance of 50 ppm was used in all searches.
Novel ER-anchorless SEL1L variants
A monoclonal antibody raised against the N-terminus of the SEL1L protein [34] was used to screen a panel of whole lysates from five human cell lines, including MCF7 and SKBr3 (breast cancer), KMS11 (Ig-K-secreting multiple myeloma), 293FT (embryo kidney) and the non-tumorigenic epithelial breast cell line MCF10A. Besides the well-known 95 KDa SEL1LA protein, two additive bands, designated as p38 and p28, were visualized at approximately 38 KDa and 28 KDa ( Figure 1A). While p28 was exclusively detected in the SKBr3 line, p38 was strongly expressed, at levels higher than SEL1LA, in all the cancer cell lines tested, with lower levels in MCF10A, where SEL1LA expression (protein and RNA) was also lower ( Figure 1A-B). A polyclonal antibody raised against the SEL1L C-terminus, which encompasses the SEL1LA tail anchor for insertion into the ER membrane, detected the 95 KDa SEL1LA protein, but not p38 and p28, although an additional band was evidenced at about 55 KDa, which could indicate the carboxy-terminal fragment resulting from cleavage of the native protein ( Figure S1A). Notably the p38 variant was strongly expressed also in other tested cancer cell lines, including Namalwa (lymphoma), MDAMB453 (breast cancer), HeLa (cervical cancer), and G144, G166, G179 (glioblastoma), all of which resulted negative for p28 ( Figure S1B). Analysis of SEL1L protein profile in the non-tumorigenic human fetal brain cell line CB660 compared to the three glioblastoma cell lines ( Figure S1B) provided further in vitro evidence that p38 was indeed expressed at higher levels in tumor cells.
When analyzed in 293FT cells by SDS-PAGE and immunoblot under reducing condition, p38 migrated as a monomer, while it appeared as a doublet under non-reducing conditions ( Figure 1C). Thus, while many proteins containing intramolecular disulphide bonds migrate more rapidly under non-reducing conditions due to the more compact native structure [40], p38 migrated faster in reduced than in oxidized state, a phenomenon suggesting that the more slowly migrating form could be engaged in intermolecular disulphide bonds [41][42][43]. The p28 signal appeared as a single band under both reducing and non-reducing conditions (data not shown).
To ascertain that p38 and p28 were bonafide endogenous SEL1Lencoded proteins, SKBr3 and 293FT cells were transfected with siRNAs targeting the SEL1L N-terminus. The levels of the three SEL1L signals were significantly lower in the cells treated with probed with monoclonal to SEL1L N-terminus. Vinculin was used as a loading control. In addition to SEL1LA (95 KDa), the N-terminal SEL1L antibody recognized two smaller encoded products, at approximately 38 and 28 KDa (designated p38 and p28). P38 was more abundant than SEL1LA and both were up-modulated in the cancer cell lines relative to MCF10A. P28 was detectable only in SKBr3. Bands above p38, probably corresponding to immature precursors or post-translationally modified products, were occasionally seen in the tested cell lines. The blot is representative of four independent experiments. B. RT-PCR analysis: RNAs from KMS11, MCF10A and SKBr3 cells were analyzed by RT-PCR using primers specific for SEL1LA. Signals shown were obtained with 27 cycles for SEL1LA. HPRT was used as a loading control. SEL1LA was up-modulated in the cancer cell lines relative to the non-tumorigenic MCF10A line. The image is representative of three different assays based on independent treatments. C. Intra/inter-molecular disulfide bonds analysis of p38. 293FT cell lysates were resolved by SDS-PAGE (10%) under reducing (R) and non-reducing (NR) conditions and blotted with monoclonal anti-SEL1L N-terminus. P38 migrated as a doublet under non-reducing conditions (the lanes comparing p38 migration under reducing and non-reducing conditions are from the same gel). D. Down-modulation of SEL1LA, p28 and p38 by SEL1L small interfering RNA (siRNA): Left panel: SKBr3 cells (3610 5 ) were treated with scrambled siRNA or siRNA specific to SEL1L (siRNA SEL1L) for 48 hrs, followed by a second siRNA treatment for further 48 hrs. Silencing efficiency was verified by Western blot. SEL1LA, p28 and p38 protein levels decreased close to 55%, 30% and 16% respectively compared to cells treated with scrambled siRNA. Vinculin was used as a loading control. Right panel: 293FT cells (6610 5 ) were treated with scrambled siRNA or siRNA-SEL1L for 48 hrs. SEL1LA and p38 protein levels decreased close to 45% and 23% respectively compared to cells treated with scrambled siRNA. Vinculin was used as a loading control. The histograms show values normalized relative to housekeeping signals and expressed as fold modulation relative to controls, densitometric analysis was determined using the Scion imaging program. (www.scioncorp. com). The data are the averages of three independent experiments, 6SD; Student's t-test was used to determine statistical significance *p,0.1; **p,0.05. E. Analysis of p38 and SEL1LA stability: 293 FT cells (6610 5 ) were treated for 3, 6 and 18 hours with cycloheximide (CHX, 200 mg/ml). Aliquots of lysates (50 mg) were resolved by SDS-PAGE (10%) and probed with monoclonal anti-SEL1L and anti-vinculin antibodies. Unlike SEL1LA, which progressively decreased during cycloheximide exposure up to about 90%, p38 levels did not change significantly. The histogram shows the densitometric quantifications obtained through Scion imaging program (www.scioncorp.com). Values were normalized relative to housekeeping signals and expressed as fold modulation relative to untreated samples. The data are averages of two independent experiments, 6SD. doi:10.1371/journal.pone.0017206.g001 SEL1L versus scrambled siRNAs, with decreases of 55% and 45% for SEL1LA and of 16% and 23% for p38 in SKBr3 and 293FT cells respectively, and of 30% for p28 in SKBr3 ( Figure 1D). In 293FT cells, inhibition of protein synthesis by cycloheximide for 3 and 6 hrs, time windows which did not result in cell death, decreased the SEL1LA level by about 50%, but had almost no effect on p38 ( Figure 1E). At 18 hrs, cycloheximide caused cell death, concomitant to a drastic depletion of SEL1LA (about 90%), but did not modify the p38 level. This indicates that p38 is more stable than SEL1LA, which may account for the modest p38 depletion by siRNAs to the SEL1L N-terminus.
Altogether, these data indicate that p38 and p28 are SEL1L protein variants encoded by the 59 end of the SEL1L gene. While p38 resulted expressed in all the tested cell lines, with higher levels in the cancer lines, p28 was detected only in the poorly differentiated metastatic breast cancer line SKBr3 [44].
The p38 and p28 variants display secretory properties
To explore the behavior and secretion of p38 and p28 in cells under ER stress/UPR, we analyzed by SDS-PAGE and immunoblot cell lysates and culture supernatants of MCF10A, SKBr3 and KMS11 cells under normal conditions and after treatment with DTT, which causes protein misfolding by reducing disulfide bonds [45] ( MG132, which determines ER stress through ERAD blockage by proteasome inhibition [46], was tested on the KMS11 myeloma line, being known that the breast cancer cell lines are quite resistant to such treatment [47]. An approximately fivefold increase of p38 in the KMS11 medium occurred after three hours of MG132 exposure, concomitant to a significant increase in CHOP and GADD45b expression, indicative of ER stress/UPR activation, even if in absence of XBP-1 splicing and BIP and ATF6 up-modulation (Figure 2C-C1; Figure S2C). MG132 treatment also increased the SEL1LA protein level ( Figure 2C, C2), which is consistent with reported evidence that SEL1LA is degraded via proteasome [22]. In contrast, the mRNA levels of SEL1LA and of the E3 ubiquitin ligase HRD1 decreased, suggesting that proteasomal blockage attenuated the transcription of these ERAD genes, whose products associate in the ER membraneembedded HRD1-SEL1L ubiquitin ligase complex ( Figure 2C1; Figure S2C). Prolonged MG132 treatment (22 hours) resulted in cell death, concomitant with up-modulation of BIP, CHOP and GADD45b, indicative of UPR activation ( Figure S2C).
Altogether these data indicate that p38 is constitutively secreted in at least two different cancer cell models, i.e., breast cancer and myeloma, and that secretion is up-regulated by ER stress/UPR. This does not occur in the non-tumorigenic MCF10A breast line. Secretion of p28 is restricted to the poorly differentiated breast cancer cell line SKBr3, which expresses this variant, and occurs only after ER stress/UPR.
Subcellular localizations of SEL1L products
To define the subcellular localizations of the endogenous SEL1L products, SKBr3 and KMS11 cells labeled with the monoclonal or polyclonal antibodies against SEL1L were analyzed by high-resolution immunoelectron microscopy (IEM). Ultrathin cryosections of untreated SKBr3 cells revealed that, in addition to the ER, the monoclonal and polyclonal antibodies against the SEL1L N-terminus [34,35] labeled peripheral cytoplasmic vesicles, often associated with structures morphologically consistent with late endosomes/multivesicular bodies (MVBs), organelles containing membrane vesicles that can be released extracellularly as exosomes [48] (Figure 3A, B). In agreement, immunofluorescence showed that the immunolabeling obtained with the monoclonal antibody to the SEL1L N-terminus co-localized with the ER marker calreticulin and, in few dots only, with the Golgi marker giantin, but was uniquely present in the peripheral cytoplasm ( Figure 3D-I). Double immunolabeling by IEM confirmed that SEL1L codistributed with calreticulin along the ER, while only SEL1L was present in endosomes/MVBs ( Figure S3G). In DTT-treated SKBr3 cells the IEM labeling was more evident along the plasma membrane (PM), particularly in association with microvilli, and in 80-200 nm vesicles apparently emerging from the PM and shed extracellularly ( Figure 4A-A1, C, F; Figure S3H-I), as observed for the exogenous myc-tagged SEL1LB protein in SEL1LBmyc-transfected 293 FT cells ( Figure S3J-K) [26].
Immunolabeling with CD63, a marker of lysosomes/MVBs [48], revealed that CD63 and SEL1L were similarly distributed along the microvilli and in membrane vesicles released from the PM ( Figure 4D, G, I; Figure S3J), in addition to the localization in lysosomes/MVBs ( Figure 4B). Furthermore double immunolabeling revealed that N-terminal SEL1L codistributed with CD63 in some extracellular vesicles ( Figure 4E, H; Figure S3K), while the C-terminal SEL1L antibody, which recognizes only the full-length SEL1LA protein, was detected in the ER ( Figure 4J). In agreement, immunofluorescence analysis evidenced that Nterminal SEL1L was uniquely localized in peripheral dots and along PM profiles ( Figure S3A-F), which were not labeled with the ER marker calreticulin.
Correspondingly, in MG132-treated KMS11 cells, IEM showed that SEL1L was present in vesicles dispersed in the peripheral cytoplasm or associated with late endosomes/MVBs ( Figure 5A, C). Furthermore, in areas where MVBs were adjacent to the PM, SEL1L-labeled vesicles appeared to emerge into the extracellular space ( Figure 5B).
Overall the morphological data indicate that N-terminal SEL1L localizes in endosomes/MVBs and in vesicles released into the extracellular space, consistently with the SDS-PAGE and immunoblot analysis of the SKBr3 and KMS11 culture supernatants.
Biochemical characterization of SEL1L variants
We next biochemically characterized the two new SEL1L variants through isoelectrofocusing off-gel fractionation, coupled to Western blot. The isoelectric points (pIs) of p38 and p28 ranged between 5.25 and 5.50 ( Figure S4A), resulting slightly more acidic than the pI of SEL1LA, estimated at 5.8 (http://www.ncbi.nlm.nih. gov/IEB/Research/Acembly). Such pI values are compatible with the ultrastructural localization of N-terminal SEL1L labeling in late endosomes/MVBs, which have pH values in the 5 to 6 range [49,50]. Unlike SEL1LA [23], both p38 and p28 were resistant to N-glycosidase F (PGNase F), which removes all types of N-linked carbohydrates, and to endoglycosidase H (Endo H), which removes high-mannose N-linked oligosaccharides ( Figure S4B).
To isolate p38 and p28, SKBr3 lysates were immunoprecipitated with monoclonal anti-SEL1L N-terminus, fractionated by SDS-PAGE and either immunoblotted ( Figure 6A, left panel) or stained with Coomassie Brillant Blue ( Figure 6A, right panel). As shown in Figure 6A, SEL1LA and p28 were immunoprecipitated with different stoichiometric ratios (left panel, lane 3, arrows), but p38, which yielded the most intensely recognized band by immunoblotting, was not recovered in the immunoprecipitates obtained using the same monoclonal antibody. The inability to immunoprecipitate p38 even at small level suggests epitope masking in the native protein, but not in the protein subjected to Figure 2. Analysis of p38 and p28 secretion in SKBr3, KMS11 and MCF10A cells exposed to chemical and pharmacological treatments. A. Western blot analysis of untreated and DTT-treated MCF10A cells: MCF10A cells were exposed to DTT (2 mM) for 3 hours and successively maintained for 24 hrs in OPTIMEM. Secreted protein (50 mg) extracted from the culture medium by TCA precipitation, and aliquots of cell lysates (50 mg) were resolved by SDS-PAGE (10%) and blotted with monoclonal anti-SEL1L and anti-vinculin antibodies. P38 was not detectable in the culture medium, both in presence and in absence of DTT. In addition to p38, MCF10A cells showed a higher band, probably corresponding to an immature precursor or post-translationally-modified product. The image is representative of two independent experiments. A1. RT-PCR analysis of untreated and DTT-treated MCF10A cells: RNA was extracted from the samples described in panel A and analyzed by RT-PCR for the UPR response. The histogram shows expression values normalized relative to housekeeping signals and expressed as fold modulation relative to the untreated samples; densitometric analysis was performed by Scion imaging program. UPR activation upon DTT treatment is indicated by up-modulation of BIP and CHOP and XBP-1 splicing, concomitantly SEL1LA is incremented (gray bar). The corresponding images are shown in Figure S2A. The data are the averages of two different assays based on independent treatments, 6SD. B. Western blot analysis of untreated and DTT-treated SKBr3 cells: Secretion of p38 and p28 was evaluated in untreated and DTT-treated SKBr3 cells. Cells exposed to DTT (2 mM) for 3 hrs or not exposed were maintained for 24 hrs in OPTIMEM. Secreted protein (50 mg) extracted from the culture medium by TCA precipitation, and aliquots of cell lysates (50 mg) were resolved by SDS-PAGE (12%) and blotted with anti-SEL1L and anti-vinculin antibodies. ER stress/UPR strongly promoted secretion of p38 and, to a lesser extent, p28 in the culture medium. The image is representative of five independent experiments. C. Western blot analysis of KMS11 cells treated with DTT and MG132: KMS11 cells were exposed to DTT (2 mM) or MG132 (10 mM) for 3 hrs and successively maintained for 24 hrs in OPTIMEM. Secreted protein (30 mg), extracted from the culture medium by TCA precipitation, and aliquots of cell lysates (50 mg) were resolved by SDS-PAGE (10%) and blotted with anti-SEL1L and anti-vinculin antibodies. Both treatments markedly induced p38 secretion in the culture medium. The image is representative of five independent experiments. C1. RT-PCR analysis of KMS11 cells treated with DTT and MG132: RNA was extracted from the same samples described in panel C and analyzed by RT-PCR for the UPR response. The histogram shows expression values normalized relative to housekeeping signals and expressed as fold modulation relative to untreated samples; densitometric analysis was determined by Scion imaging program. UPR activation upon DTT treatment is indicated by the up-modulation of ATF6, BIP, and CHOP and by XBP-1 splicing (see Figure S2C), concomitantly the expression of SEL1LA, HRD1 and GADD45b is incremented (gray bar). The corresponding images are shown in Figure S2C. MG132 treatment did not trigger UPR activation, as indicated by the absence of ATF6 and BIP un-modulation and lack of XBP-1 splicing (see Figure S2C), nevertheless, CHOP and GADD45b were markedly up-modulated and SEL1LA and HRD1 down-modulated (black bars). The data are averages of four different assays based on independent treatments, 6SD. C2. SEL1LA protein expression in KMS11 cells treated with DTT and MG132: The histogram shows SEL1LA protein expression values obtained from the samples described in panel C, normalized relative to housekeeping signals and expressed as fold modulation relative to the untreated sample; densitometric analysis was performed using the Scion imaging program. MG132 determined SEL1LA protein accumulation up to 2 times relative to the control level (black bar). The data are the averages of four independent experiments, 6SD. doi:10.1371/journal.pone.0017206.g002 SDS-PAGE, which could reflect: i. protein-protein interactions; ii. additive post-translational modifications occurring only in the p38 form; iii. nature of the p38 structure. Scaling-up the immunoprecipitations allowed to detect by Coomassie staining a 28 KDa protein band ( Figure 6A, right panel, lane 4, arrow), which was subjected to matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) analysis. Surprisingly, MALDI-TOF MS revealed the presence of peptides pertaining to TPD52 (Table 1), a secreted coiled-coil motif-bearing cancer-associated protein implicated in endosomal trafficking and in secretion via membrane-bound vesicles [51][52][53][54][55][56][57][58][59][60][61]. TPD52 was readily detectable in SKBr3 cells ( Figure S5A), as expected based on the increased gene copy number reported in this cell line [61]. Blast alignment between the coding sequences of SEL1L and of three alternatively-spliced TPD52 isoforms (accession numbers: P55327-1, P55327-2, P55327-3) did not show similarities (data not shown), the monoclonal anti-SEL1L N-terminus did not appear to recognize the myc/GFP-tagged TPD52 isoform 1 (Figure S5B-C) and, in SKBr3 cells, siRNA silencing of SEL1L did not affect TPD52 protein level ( Figure S5D). ER stress/UPR slightly promoted the release of TPD52 in the SKBr3 culture medium ( Figure S5E).
To investigate whether SEL1LA and/or p28 physically interacted with TPD52, SKBr3 lysates were immunoprecipitated with either anti-SEL1L N-terminus or anti-TPD52 antibodies and conversely analyzed by Western blot using anti-TPD52 or anti-SEL1L ( Figure 6B). TPD52 was immunoprecipitated using monoclonal anti-SEL1L (left panel, lane 3, arrow); reciprocally, in spite of the low immunoprecipitation efficiency, p28, but not SEL1LA, was recovered using anti-TPD52 (right panel, lane 3, arrows). This suggests that in SKBr3 cells p28 and TPD52 interact, with a stoichiometric imbalance that might reflect differences in expression level and/or immunoprecipitation efficiency.
Discussion
We report here two new anchorless endogenous SEL1L variants, p38 and p28, identified in lysates of different cell lines, including KMS11 (multiple myeloma), 293FT (embryonic kidney), MCF7, SKBr3 (breast cancer) and MCF10A (non-tumorigenic breast). In addition to the signal of the canonical ER-resident SEL1LA protein, we found distinct additive bands at approximately 38 KDa (p38) and 28 KDa (p28). While p28 was detectable only in the poorly differentiated breast cancer line SKBr3, p38 was expressed in all the cell lines tested, at levels higher than SEL1LA and with stronger signals in cancer cells. In this regard, recent studies of SEL1L expression in human colorectal tumors revealed higher p38 levels in adenomas compared to matched normal colonic mucosa, suggesting an association between upregulation of p38 and in vivo colonic tumorigenesis (Ashktorab et al., unpublished results).
Recognition by antibodies to the SEL1LA N-terminus, but not to the C-terminus, and RNA interference assays indicate that p38 and p28 are low molecular mass N-terminal SEL1L forms, that could originate either from splicing events at the 59 end of the SEL1L pre-mRNA transcript, as the recently reported SEL1LB and -C isoforms, cloned from RNA extracted from normal peripheral blood lymphocytes [26], or, more likely, from proteolytic cleavage of the ER-resident SEL1LA. In this regard it is relevant that bioinformatic analysis predicts several cleavage sites in the SEL1LA protein sequence (peptide cutter program, http://expasy.org/tools/peptidecutter). The hypothesis that p38 could originate from SEL1LA cleavage would be consistent with the evidence that DTT treatment upregulates SEL1LA mRNA, but not SEL1LA protein level, which could suggest either that DTT, by altering terminal folding, compromises SEL1LA stability, or that most of SEL1LA undergoes cleavage to p38, that is then secreted. In this case the band at about 55 KDa evidenced in SKBr3 cells using antibody to the SEL1L C-terminus could represent the carboxy-terminal fragment obtained after cleavage of p38. Furthermore, we recently observed that miR183 negatively regulates both SEL1LA and p38, a finding supporting the view that at least p38 results from a post-translational modification of the SEL1LA product (Biunno, unpublished results). Thus, while SEL1LB and -C are generated from alternatively-spliced mRNAs expressed at low levels and up-modulated in cancer cells ( Figure S6) and under ER stress [26], the two new soluble SEL1L forms, abundantly expressed in cancer cells, could likely originate from proteolytic cleavage of SEL1LA.
As SEL1LB and -C, p38 and p28 lack the C-terminal SEL1LA membrane-spanning region, but are predicted to retain several sel1- like tetratricopeptide repeats, known to serve as protein-protein interaction modules [26,[62][63][64]. Unlike SEL1LA [23], both p38 and p28 are PGNase F and Endo H resistant, which may reflect the lack of the N-linked glycosylation sites at the SEL1LA C-terminus [65,66], while the N-linked glycan identified in the SEL1LA Nterminus [67] could be proximal to or beyond the splicing or cleavage sites. The lack of asparagine-N-linked high-mannose-type carbohydrate chains implies major differences in the folding, oligomerization, sorting, and transport of p38 and p28 relative to SEL1LA [68]. The modest depletion of the two new forms, especially p38, after RNA interference or blockage of protein synthesis, points to their higher stability compared to SEL1LA.
Most interestingly, p38 is constitutively secreted in the culture media of the SKBr3 and KMS11 cancer cell lines, and secretion is strongly augmented by ER stress or proteasomal blockage. The p28 form is detectable in the SKBr3 culture medium only after ER stress. Importantly, no SEL1L immunoreactive bands are found in the MCF10A culture medium under normal and ER-stressed conditions, suggesting that, at least in cells of breast epithelial origin, secretion of the two soluble SEL1L forms is associated with the tumorigenic phenotype.
Overall, the structural and functional properties of endogenous p38 and p28 resemble those of the previously cloned exogenous SEL1LC and -B in isoelectric point, high stability and localization in endosomes/MVBs and secretory vesicles [26]. As SEL1LB and -C, also p38 and p28 are predicted to be structurally related to secreted bacterial virulence factors involved in pathogen-host interactions, such as the Legionella pneumophila LpnE, EnhC and LidL proteins and the Helicobacter pylori cysteine-rich protein A (HcpA) [26,63]. LpnE is implicated in the ability of L. pneumophila to establish infection and/or manipulate host cell trafficking events, and its sel-1 like repeats, that interact with proteins containing Ig-like domains, are necessary for host cell invasion [69]. HcpA is a b-lactamase with hydrolytic activity, implicated in drug resistance and proinflammatory/immune responses [70][71][72].
Morphological analyses indicate that in SKBr3 and KMS11 cells N-terminal SEL1L immunolabeling is detectable not only in association with the ER, but also in endosomes/MVBs, along the PM profiles and within peripheral cytoplasmic or extracellular vesicles. These diverse subcellular localizations were observed using two distinct antibodies to the SEL1L N-terminus, while an antibody to the SEL1L C-terminus, unique to the ER-resident SEL1LA, confirmed only the immunolabeling of the ER. However, the N-terminal SEL1L antibody cannot discriminate between p38 and p28, and the distribution of the N-terminal SEL1L immunoreactivity in the different subcellular compartments was similar in cell lines that express both p38 and p28, such as SKBr3, or only p38, such as KMS11. By IEM, the N-terminal SEL1L labeling in the vesicles shed by SKBr3 and KMS11 cells appears to increase after induction of ER stress, in agreement with the SDS-PAGE and immunoblot analysis of the culture supernatants. Furthermore, the co-immunoprecipitation data obtained in SKBr3 cells suggest a functional parallelism between p28 and the TPD52 family proteins, cancer markers that localize to endosomes/MVBs and act as regulators of membrane trafficking in exocytic pathways [51][52][53][54][55][56][57][58][59][60][61]. MVBs are endosome-derived multivesicular organelles containing hydrolases, which may evolve into lysosomes or into secretory organelles [73,74]. The localization of the N-terminal SEL1L immunolabeling in endosomes/MVBs is consistent with the slightly acid pIs of p38 and p28 [49,50]. In this regard, it is known that ER proteins that escape ERAD, as well as ERAD components, can be targeted to the endosomal pathway for lysosomal or basal autophagic degradation [74][75][76]. Alternatively, endosomes/MVBs can be involved in exocytosis, which may contribute to relieve ER stress through the expulsion of damaged proteins and membrane constituents [77][78][79]. The extracellularlyreleased vesicles containing N-terminal SEL1L products appear to be heterogeneous in origin, deriving from vesicles segregated within MVBs and discharged upon fusion with the plasma membrane (exosomes), and from small plasma membrane protrusions shed after fission of the stalk (shedding vesicles) [78][79][80]. Notably, in agreement with our data, a recent report includes SEL1L peptides among the proteins identified by mass spectrometry in purified Rab27b-secretory vesicles of MCF7 breast cancer cells [81]. Interestingly, Rab27b, a GTPase implicated in PM delivery and fusion of different secretory vesicle types, is reported to be present in CD63-containing multivesicular elements located adjacent to the PM and in exosomes [81,82].
Vesicles participate in plasma membrane traffic and in intercellular communication, enabling the horizontal transfer of membrane and/or cargo molecules, including proteins and mRNAs, from cell to cell or to the extracellular compartment, where they can dissolve, releasing their contents. In this regard shed vesicles provide platforms for integrated multisignaling, required for rapid phenotype adjustments in cell populations [80]. Being extracellularly released, p38 and p28 could be found in abnormal amounts in biological fluids, and could be potentially developed as tumor markers. In this regard, it is intriguing that bayesian network modelling of microarray and mass spectrometry data identified an N-terminal SEL1LA sequence as a putative serum biomarker of prostate cancer [83].
It could be speculated that p38 and p28, which seem to be secreted only in tumorigenic cells, might be involved in hydrolytic and proinflammatory processes associated with cancer-related autocrine/paracrine signalling induced by ER stress. In this respect, signaling via tumor-released vesicles is implicated in processes that facilitate metastasis, such as extracellular matrix remodelling, angiogenesis and migration [80]. Thus, p28 and p38 could represent new tumor markers and provide potential targets for cancer therapy. Figure S1 P38 and p28 are not identified by antibody against the SEL1L C-terminus and are detected in cancer cells of various origin by antibody against the SEL1L Nterminus. A. P38 and p28 are not recognized by polyclonal antibody against the SEL1L C-terminus: Lysates (50 mg) from 293FT (embryo kidney) and SKBr3 (breast cancer) cells were resolved by SDS-PAGE (10%) and probed with polyclonal anti-SEL1L C-terminus. Vinculin was used as a loading control. The polyclonal C-terminal SEL1L antibody recognized the ER-resident SEL1LA protein (95 KDa), but not the p38 and p28 forms. The blot is representative of three independent experiments. B. p38, detected with monoclonal antibody against the SEL1L N-terminus, is more evident in cancer cells of various origins relative to a normal human fetal brain cell line: Lysates (50 mg) from Namalwa (lymphoma), MDAMB453 (breast cancer), HeLa (cervical cancer), G144, G166 and G179 (glioblastoma) and CB660 (human fetal brain) cells were resolved by SDS-PAGE (10%) and probed with monoclonal anti-SEL1L N-terminus. Vinculin was used as a loading control. P38 was expressed at much higher levels in the tested cancer cell lines relative to CB660, while p28 was undetectable. A higher band of approximately 60 KDa may represent an additional SEL1L-related form expressed in glioblastoma cell lines.
Supporting Information
(TIF) Figure S2 UPR studies in MCF10A and SKBr3 cells treated with DTT and in KMS11 cells treated with DTT and MG132. A. RT-PCR analysis of DTT-treated MCF10A cells: UPR activation was analyzed by RT-PCR in the samples described in Figure 2A1. UPR activation was confirmed by XBP-1 splicing and up-modulation of BIP and CHOP. HPRT serves as internal control. The image is representative of two different assays based on independent treatments. B. RT-PCR analysis of DTTtreated SKBr3 cells: RNA was extracted from the samples described in Figure 2B and analyzed by RT-PCR for the UPR. UPR activation was confirmed by XBP-1 splicing and CHOP upmodulation, concomitantly with increase of SEL1LA. HPRT serves as internal control. The image is representative of five different assays based on independent treatments. C. RT-PCR analysis of DTT-and MG132-treated KMS11 cells: UPR activation was assessed by RT-PCR on KMS11 cells treated with DTT or with MG132. UPR activation upon DTT treatment was confirmed by XBP-1 splicing and CHOP, BIP and ATF6 up-modulation; concomitantly SEL1LA also increased. MG132 treatment for 3 hrs resulted in an increase of CHOP and GADD45b, but there was no evidence of XBP-1 splicing and BIP and ATF6 modulation. Concomitantly, SEL1LA and HRD1 decreased. After 22 hours of MG132 treatment, BIP, CHOP and GADD45b increased. HPRT serves as internal control. The image is representative of five different assays based on independent treatments. (TIF) Figure S3 Localizations of SEL1L in DTT-treated SKBr3 cells and of myc-tagged exogenous SEL1LB in transfected 293FT cells. Immunofluorescence shows that in DTT-treated SKBr3 cells N-terminal SEL1L (green) intensely labels peripheral areas negative for the endoplasmic reticulum marker calreticulin and for the Golgi marker giantin (panels A-F). Cryoimmunogold electron microscopy of DTT-treated SKBr3 cells shows N-terminal SEL1L labeling in multivesicular bodies (panel G, arrowhead), on endoplasmic reticulum profiles, identified by calreticulin (panel G arrows), and in vesicles released from the plasma membrane after fission of the stalk (panels H-I, arrows point to stalks). Similarly, in SEL1L-Bmyctransfected 293FT cells, exogenous myc-tagged SEL1LB labeling was detected along plasma membranes and in vesicles emerging from plasma membrane (panels J-K, arrows). Bars: 0.1 mm; er: endoplasmic reticulum; MVB: multivesicular body; PM: plasma membrane.
(TIF) Figure S4 Biochemical characterization of SEL1L variants. A. Off-gel electrophoresis and Western blot analysis: Off-gel electrophoresis coupled with Western blot analysis was used to analyze the pIs of p38 and p28. The proteins extracted from the medium of DTT-treated SKBr3 cells were fractionated according to their pI using an Off-gel 3100 fractionator (Agilent Technologies) and aliquots of these fractions were analyzed by Western blot with monoclonal anti-SEL1L antibody. Both p38 and p28 (arrows) were detected in the six th fraction, corresponding to the pI range of 5. Vinculin was used as a loading control. SKBr3 cells over-expressed TPD52. The blot is representative of three independent experiments. B-C. SEL1L antibody does not recognize TPD52. B: To exclude possible TPD52 protein recognition by anti-SEL1L antibody, lysates (50 mg) obtained from 293FT cells transfected with myc-tagged TPD52 isoform 1 or empty vector (mock) were resolved by SDS-PAGE (10%) and probed with anti-myc, anti-SEL1L and anti-TPD52 antibodies recognizing TPD52 isoforms 1 and 2. Exogenous tagged TPD52 isoform 1 acquired a molecular weight similar to that of endogenous p38 (arrows for exogenous TPD52 isoform 1 and asterisks for endogenous p38), interfering with the evaluation of SEL1L antibody cross-reactivity. However, no increase of reactivity was observed in cells transfected with myc-TPD52 isoform 1. Both myc and TPD52 antibodies selectively detected the exogenous protein (see arrows), confirming correct translation. C: Lysates (50 mg) from 293FT cells transfected with GFP-tagged TPD52 isoform 1 or empty vector (mock) were resolved by SDS-PAGE (10%) and probed with anti-GFP, anti-SEL1L and anti-TPD52 antibodies. Exogenous tagged TPD52 isoform 1 acquired a molecular weight of 58 KDa, well distinguishable from the endogenous SEL1L bands (see arrows for exogenous TPD52 isoform 1 and asterisks for endogenous p38 and SEL1LA). No or barely detectable reactivity with SEL1L antibody was observed in cells transfected with TPD52 isoform 1. Both GFP and TPD52 antibodies selectively detected the exogenous protein, confirming correct translation. D: TPD52 protein levels are unaffected by SEL1L small interfering RNA (siRNA): Lysates obtained from the same samples described in Figure 1 C were resolved by SDS-PAGE (12%) and blotted with anti-TPD52 antibody. While SEL1LA, p28 and p38 decreased close to 55%, 30% and 16% respectively, compared to cells treated with scrambled siRNA (see Figure 1C), TPD52 levels did not change. Vinculin was used as a loading control. E: TPD52 secretion is enhanced in SKBr3 cells exposed to DTT: Lysates and secreted proteins obtained from the samples described in Figure 2A were resolved by SDS-PAGE (12%) and blotted with anti-TPD52 and anti-vinculin antibodies. ER stress/UPR slightly promoted TPD52 secretion in the culture medium. The image is representative of two independent experiments. (TIF) Figure S6 SEL1LB and -C transcripts are up-modulated in cancer cell lines. RNAs extracted from KMS11, MCF10A and SKBr3 cells were analyzed by RT-PCR using primers specific for SEL1LB and -C. Signals shown here were obtained with 30 cycles for both isoforms. HPRT was used as a loading control. The SEL1LB and -C transcripts were up-modulated in the tested tumor cell lines relative to the non-tumorigenic MCF10A line. The image is representative of three different assays based on independent experiments. (TIF) | 2017-04-05T19:41:00.537Z | 2011-02-17T00:00:00.000 | {
"year": 2011,
"sha1": "8aa29020a6acba2c3adcc1c401eb2a81ad11808f",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0017206&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8aa29020a6acba2c3adcc1c401eb2a81ad11808f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
269920920 | pes2o/s2orc | v3-fos-license | SIMULATING PEDIGREES ASCERTAINED ON THE BASIS OF OBSERVED IBD SHARING
In large genotyping datasets, individuals often have thousands of distant cousins with whom they share detectable segments of DNA identically by descent (IBD). The ability to simulate these distant relationships is important for developing and testing methods, carrying out power analyses, and performing population genetic analyses. Because distant relatives are unlikely to share detectable IBD segments by chance, many simulation replicates are needed to sample IBD between any given pair of distant relatives. Exponentially more samples are needed to simulate observable segments of IBD simultaneously among multiple pairs of distant relatives in a single pedigree. Using existing pedigree simulation methods that do not condition on the event that IBD is observed among certain pairs of relatives, the chances of sampling shared IBD patterns that reflect those observed in real data ascertained from large genotyping datasets are vanishingly small, even for pedigrees of modest size. Here, we show how to sample recombination breakpoints on a fixed pedigree while conditioning on the event that specified pairs of individuals share at least one observed segment of IBD. The resulting simulator makes it possible to sample genotypes and IBD segments on pedigrees that reflect those ascertained from biobank scale data.
Introduction
Simulations of genetic transmission within fixed pedigrees are used for a many applications, such as generating training or testing sets to support method development 8,11,20,22,23 , performing power analyses 4,5,16,21 , studying biological phenomena or processes such as inbreeding 12,17 , and empirically obtaining probability distributions that are used for relationship inference 2,9 .
Because the probability that two distant relatives share tracts of DNA identically by descent (IBD) can be small, the probability of sampling observed tracts of IBD when simulating a pair of distant relatives can also be small.For example, fourth cousins have only a fifty percent chance of sharing a detectable segment of IBD 9,26 while sixth cousins share detectable IBD segments less than five percent of the time 9,26 , even when using realistic simulations that account for cross-over interference and sex-specific genetic maps 3 .Moreover, the probability of sharing IBD decreases exponentially quickly as the degree of the relationship increases 6,9,26 .
When multiple distant relationships are considered simultaneously within a pedigree, the probability of sampling detectable IBD among all relative pairs can be very low.For example, given that fourth cousins share detectable IBD less than half the time, the probability of sampling IBD in a pedigree with just ten fourth degree relative pairs is less than 1/2 10 ≈ 10 −4 if the pairs are approximately mutually independent.For ten sixth cousins, the probability is less than 1/20 10 ≈ 10 −13 .
Although the probability of IBD sharing through a particular pair of common ancestors is small, individuals have many distant relatives with whom they share many common ancestors.Thus, individuals in biobank-scale datasets typically have thousands of fourth and fifth cousins 9 with whom they share detectable amounts of IBD.The ability to simulate genetic transmission on pedigrees relating subsets of such distant relatives is important for testing methods that aim to infer these pedigrees 11,20,23 or which use the IBD among distant relatives for downstream analyses 8,19 .
In pedigrees in which relatives are closely-related, one can use an existing simulation method together with rejection sampling to obtain samples under the conditional distribution by rejecting any sample in which the required individuals share no IBD.However, when relatives are distantly related, rejection sampling becomes impractical because the fraction of rejected samples is close to one.Therefore, it is necessary to simulate using a method that allows pedigree sampling conditional on the event that particular pairs share IBD.Although many methods have been developed for simulating meioses within a pedigree structure 1,[3][4][5]13,15,16,18,21,24,25 , no existing method makes it possible to condition on the event that two or more individuals share IBD.
Here, we show how to sample meioses and transmitted IBD among pairs of individuals and within a pedigree, conditional on the event that two or more individuals share IBD with one another.We then demonstrate the utility of this simulator for two downstream applications: pedigree inference and the simulation of an individual together with their set of distant relatives.
Sampling conditional on any observed IBD
Suppose two individuals (i and j) are related through relationship R and suppose we know that they share some nonzero, but unspecified, amount of IBD I. We may simply know that I is greater than zero, or we may know that i and j share at least one segment longer than a minimum observable length threshold τ .We want to simulate all recombination events on the ancestral path connecting i and j, conditional on the event that I > 0 or conditional on the event that they share at least one segment longer than τ .We then want to scale this up to sample meioses in a pedigree in which arbitrary sets of individuals are known to share IBD with one another.
In this paper, we will use the notation of Ko and Nielsen 14 to specify a relationship R = (u, d, a) between i and j, where u is the number of meioses "up" from i to their common ancestor(s) with j and d is the number of meioses "down" from the common ancestor(s) to j.The number of common ancestors is a, which is either 1 or 2 in an outbred pedigree.Although inbreeding is common in many populations and although all human pedigrees contain loops when we consider a sufficient number of generations in the past, we will focus on outbred pedigrees here because it makes the bookkeeping simpler.The model can be extended to include consanguineous relationships by including considerably more bookkeeping, but the method is also slower in the case of inbreeding because of the need to sample larger groups of meioses at the same time to satisfy constraints.A pedigree of individuals (circles) with meioses between them (lines).Nodes i and j are related through common ancestors x and y.The number of meioses "up" from i to its common ancestors with j is u = 3.The number of meioses down from the common ancestors to j is d = 2.The number of common ancestors is a = |{x, y}| = 2.This yields the relationship R = (3, 2, 2) between i and j using the notation of Ko and Nielsen 14 .
We want to sample all u meioses up from i to its first common ancestor with j and all d meioses down from its first common ancestor to j (black meioses in Figure 1).If i and j have two common ancestors (a = 2) then we must sample two additional meioses (orange meioses in Figure 1) leading to their second shared common ancestor.
3.1.IBD propagation.In a pedigree, genetic material is passed only from ancestors to descendants.However, in developing the sampling framework presented in this paper, it is useful to conceptualize the transmission of tracts of IBD rather than genetic material itself.A tract of IBD is a contiguous region along a single chromosome that is shared identically by descent between two individuals.For example, in Figure 2, the blue regions depict the tracts of IBD shared between an individual and each of their ancestors of various degrees.
Although genetic material is passed down through a pedigree, Figure 2 shows that one can conceptualize IBD transmission as occurring upward.IBD transmission can occur either upward or downward.One can also conceptualize IBD transmission as occurring both upward and downward, for example between cousins.
3.2.
The sampling approach.To sample conditional on the event that i and j share at least one IBD segment of at least τ cM with one another, we make use of Gibbs sampling in which we sequentially sample each meiosis in turn, conditional on all the other meioses.We first consider the case of transmission for a single chromosome and then generalize to the whole genome.
For meiosis m, let ⃗ b m be the sorted list of recombination breakpoints among the two parental haplotypes undergoing recombination for a particular chromosome and let h m ∈ {0, 1} be the parental haplotype on which copying starts.In a diagram of the pedigree, we adopt the convention that are shared between an individual (not shown) and an ancestor along the full linear genome.The bottom row shows that the individual shares exactly one copy of their full linear genome (half their entire diploid genome) with a parent.The second line from the bottom shows that they share approximately half their linear genome (1/4 of their diploid genome) with a grandparent.Thin red lines show the regions between recombination breakpoints that are shared between adjacent ancestors (e.g., between a 3 rd great grandparent and a 4 th great grandparent).N GG = N th Great Grandparent.ybp = years before present, assuming 30 years per generation.
that h m = 0 corresponds to the haplotype of the "left" grandparent and h m = 1 corresponds to the haplotype of the "right" grandparent.For example, in the meiosis between child c and parent p in Figure 1, haplotype 0 in p is the one inherited from x, whereas haplotype 1 in p is the haplotype inherited from y.In general, the lexicographically first parent label corresponds to the "left" parent.
Our approach is to use Gibbs sampling to sample the list meioses between a pair of individuals by sampling each tuple (h m , ⃗ b m ) sequentially, one at a time, conditional on the breakpoints sampled at all other meioses.
For any given meiosis m, we use rejection sampling to sample (h m , ⃗ b m ).We sample h m from a Bernoulli random variable.We then sample the number n m = dim( ⃗ b m ) of breakpoints from a Poisson random variable with mean L/100 and we sample the positions of each of the n m breakpoints uniformly on the interval [0, L], where L is the length of the chromosome in cM.This approach is analogous to that of 24 for the related problem of sampling breakpoints conditional on observed genotype data; however, instead of sampling the descent graph locus-by-locus along the genome, we sample all breakpoints at once and instead of conditioning on the exact observed genotypes (of which there are generally none) we condition on the event that specified pairs of individuals share IBD.We first initialize the Gibbs sampler by sampling exactly one instance of (h m , ⃗ b m ) at each meiosis m without conditioning on observed IBD.We then proceed one meiosis at a time through the pedigree and we resample (h m , ⃗ b m ) at each meiosis m conditional on the values (h m ′ , ⃗ b m ′ ) at all other meioses m ′ .We sample (h m , ⃗ b m ) conditionally on the other meioses by proposing samples from the unconditional sampling distribution and rejecting proposals until we obtain a sample in which segments transmitted from i overlap by at least τ cM with segments transmitted from j. Once we achieve this result, we accept the proposed tuple (h m , ⃗ b m ) and move on to sample (h m+1 , ⃗ b m+1 ).
3.2.1.
Acceptance rules when a segment can be transmitted.To illustrate the acceptance rule, a single meiosis in shown in Figure 3.In this meiosis, a child haplotype c inherits from the two haplotypes in parent p corresponding to parents x and y of p.We are interested in sampling the starting haplotype h and the breakpoints ⃗ b conditional on the event that some IBD is transmitted between relatives i and j.
In Figure 3, the red region corresponds to genetic material in individual c that is shared with relative i.The blue region corresponds to genetic material in parent p that is shared with individual j.Note that i and j can share IBD with both haplotypes of p.A meiosis (h, ⃗ b) successfully transmits IBD between i and j if the red segments in c overlap with the blue segments in p by at least τ cM on a haplotype that is transmitted from p to c.These transmitted haplotypes are shown in purple in Figure 3.When the minimum length of a purple segments in Figure 3B is greater than τ , we accept the proposed tuple (h, ⃗ b).
A version of this sampling approach can also be used to sample from a genetic map.We discuss the genetic map approach in Section 3.4 and we discuss how to use the genetic map approach to sample from the full genome in Section 3.5.
3.2.2.
Acceptance rules when multiple meioses must be sampled together.In some cases, IBD between two individuals can be propagated along two different paths in the pedigree.A common example of this case is when two individuals share IBD through a pair of common ancestors.This scenario shown in Figure 1, in which i and j can share IBD either along the path through common ancestor x or common ancestor y, or through both.
In Figure 1, we cannot apply an acceptance criterion that requires IBD to be transmitted between i and j separately to each of the meioses (p, x) and (p, y).If we did so, we would require IBD to be transmitted along both paths, which is more restrictive than the criterion that IBD is transmitted at all.In such a case we must jointly sample both meioses (p, x) and (p, y) and accept a proposed sample as a success if either (p, x) or (p, y) is a success.
Moreover, it can dramatically increase the rate of acceptance if we jointly sample all meioses (c, p), (p, x), (p, y), (z, x), (z, y), and (j, z), which ensures that IBD from i and j is passed to the proper parental haplotypes in x and y.This kind of block Gibbs sampling is analogous to that implemented by Tong and Thompson 24 for a similar problem.If we did not include c and j in the sampling block, then we could accept meioses in which all segments from i are shared on the haplotype of p that comes from y and all segments from j are shared on the haplotype of z that comes from x.In such a case, we would not be able to sample meioses (p, x), (p, y), (z, x), and (z, y) in which segments from i are shared with j through p and z.
In general, sampling is most efficient if we jointly sample all meiosis pairs that form a block in which constraints must be simultaneously satisfied.Such blocks can be assembled by the following rules: (1) If meiosis (c, p) is in the block then all meioses involving full siblings of c must be in the block.
(2) If meiosis (c, p) is in the block then all meioses between p and children of p must be in the block.
More general forms of block sampling can be used to handle more complicated loops.In such blocks, we jointly sample all meioses along all divergent paths in the loop.In principle, this would allow us to sample from inbred pedigrees, although we do not investigate inbreeding here.The longer the path, the more unlikely it is that IBD will be transmitted through it, making the simultaneous sampling of multiple paths increasingly less practical as the paths become longer.
3.2.3.
Acceptance rules when no segment can be transmitted.Note that in sampling meiosis (c, p) between child c and parent p in Figure 1, the other meioses in the pedigree may have been sampled in such a way that no segments from i have been propagated between i and c and/or no segments from j have been propagated between p and j.In such a case, we must establish rules for accepting or rejecting a proposed tuple (h m , ⃗ b m ).
Rules for accepting proposed samples when IBD cannot be transmitted are helpful for propagating IBD through the pedigree before we have reached a valid state in which IBD is shared among all pairs who must share IBD.These rules help us to arrive more quickly at a valid state.
When it is not possible to transmit any segment of at least τ cM we can still accept a tuple (h m , ⃗ b m ) under the following conditions: (1) No segments are available either above or below to transmit; so transmission is impossible.
(2) At least one IBD segment longer than τ cM is available from below to transmit and at least one IBD segment longer than τ cM is transmitted up.(3) At least one IBD segment longer than τ cM is available from above to transmit and at least one IBD segment longer than τ cM is transmitted down.(4) At least one IBD segment longer than τ cM is available both above and below to transmit and at least one IBD segment longer than τ cM is transmitted both down and up.
This rule for accepting proposed samples when it is not possible to fully transmit a segment longer than τ cM has the purpose of speeding up the transmission of IBD through the pedigree until all required IBD constraints have been transmitted.Once IBD has been transmitted among all IBD-sharing pairs, we can begin to accept proposed samples (h m , ⃗ b m ) under the correct condition that at least one segment longer than τ cM is propagated.
In general, when sampling sets of meioses together, rather than one at a time, we apply the above criteria to the "receiving' and "transmitting" nodes in the group, rather than the child and parent nodes.For instance, if our group consists of nodes p, x, y and z in Figure 1, then node z is the node in the group that "transmits" segments from j and node p is the node that "receives" these segments and passes them on to i. Thus, we require the following to be true in order to accept a sample jointly from (p, x), (p, y), (z, x) and (z, y): (1) If nodes i and p share at least one IBD segment longer than τ cM, then at least one IBD segment longer than τ cM must be transmitted to z (for propagation efficiency).( 2) If nodes j and z share at least one IBD segment longer than τ cM, then at least one IBD segment longer than τ cM must be transmitted to p (for propagation efficiency).(3) If nodes i and p share at least one IBD segment longer than τ cM and if j and z share at least one IBD segment longer than τ cM and if at least one IBD segment longer than τ cM overlaps between these segments on a meiosis path (black or orange in 1), then at least one IBD segment of length τ cM of this overlap must be transmitted.Note that this criterion requires the overlap to occur on a feasible meiosis path (black or orange in Figure 1).If a segment in z physically overlaps a segment in p, but on opposite parental haplotypes, then the transmission is impossible and we must accept the proposed sample.
Data structures.
In order to establish constraints on each meiosis, we set up and update several data structures.The first data structure, ibd dict, records the segments propagated at each sampling step between each genotyped node in the pedigree and each haplotype of each other node in the pedigree.This dictionary is updated at each sampling step.Another data structure, constraint dict records the pairs of genotyped IDs that must transmit IBD through each haplotype of each other node in the pedigree, along with the direction of the transmission; for example, the dict specifies that IBD is passed from node i to node j "up" through haplotype 0 of node n.This dictionary is established before sampling begins and does not change.
Another two dictionaries, start hap dict and bpt pos list dict, store the starting haplotypes and breakpoint positions in each meiosis.These dictionaries are updated on each sampling step.Finally, the pedigree structure itself is stored in a dictionary, up dict of the form {c : {p 1 , p 2 }, ...} mapping each child node c to zero, one or two parent nodes.It is also convenient to store the reverse (down dict) of up dict mapping parents to their child nodes.
3.4.
Sampling with a genetic map.Let [(f i , g i )] N i=1 denote a genetic map from physical positions (denoted f i since we used p for parents) to genetic positions g i .The values f i are strictly increasing in i and the map from f to g is monotonically non-decreasing.To sample from such a map, we note that the genetic distance from f i to f i+1 is g i+1 −g i .Therefore, we sample the number of breakpoints in the region (f i , f i+1 ) from a Poisson distribution with mean (g i+1 − g i )/100.The positions of these breakpoints in the region are then uniformly distributed over the interval (f i , f i+1 ).We sequentially sample in each region from left to right along the chromosome.The starting haplotype is sampled according to a Bernoulli distribution as usual.
3.5.Sampling a full genome with any transmitted IBD.Sampling a full genome is more complicated than sampling each chromosome independently because some IBD is guaranteed to be transmitted using our method.Thus, applying it to each chromosome separately would yield IBD on each chromosome.A simple work-around to this problem is to sample from the whole genome all at once by modeling it as a single, long chromosome amounting to all chromosomes concatenated end-to-end with gaps between them.Over this long chromosome, we apply a mask to ignore sampled breakpoints within the regions between chromosomes.We also set up our genetic map so that the gaps between chromosomes correspond to long genetic distances, thereby allowing essentially free recombination in these regions (Figure 4).
Another more efficient approach is to sample all chromosomes at the same time in a meiosis (i.e., without concatenating and masking) and accept a set of proposed breakpoints if at least one IBD segment of at least τ cM is transmitted in the full genome.This approach has the advantage that we do not need to sample segments within masked regions.Ultimately, either approach is efficient since we ignore masked segments.We present masking here because it is useful for ignoring chromosomal regions, such as those with high levels of background IBD or "pile-up."3.6.Comparison with rejection sampling.To check that the conditional sampler is producing the correct distribution, we compared it with simple rejection sampling for scenarios in which rejection sampling was computationally tractable.Figure 5 shows a comparison between the Gibbs conditional sampler (orange) and the simple rejection sampler (blue) for several different relationships.The simulations are based on one chromosome of length 100 cM.Each panel shows a histogram based on 1,000 samples.For each sample, the Gibbs sampler made 10 passes, sampling each group of meioses ten times.The event on which we conditioned was τ > 0 cM: i.e., that any IBD was transmitted between the two relatives.
Figure 5 shows that the conditional Gibbs sampler produces the same segment count and length distributions as the simple rejection sampler, but that the relative number of Gibbs replicates becomes increasingly small relative to the number of simple rejection replicates as the degree of the relationship becomes larger.From Figure 6, it can be seen that the overall timing for the Gibbs sampler stays manageable even beyond 30 generations in the past, at which point the probability of observing IBD between two genealogical relatives becomes low.and genetic length 100CM are sampled by simulating one long chromosome comprised of the two chromosomes concatenated together with a gap between them (region shaded by the grey rectangle).We ignore all segments and transmissions in the gap and only accept transmissions that occur outside of it.Regardless of the physical length of the gap, the genetic length of the gap can be set to a large number to approximate free recombination between the two chromosomes.In the figure, the gap is 1Mb long in physical coordinates, but 1,000 cM long in genetic coordinates.
The approach can be extended to arbitrarily many chromosomes with the caveat that sampling in the masked regions requires compute time.d, a).1,000 replicates of each relationship were sampled using each method.Both samplers conditioned on the event that τ > 0 cM of IBD was transmitted between the two relatives.
Timing (seconds)
Rejection Gibbs Figure 6.Timing of the simple rejection sampler compared with timing of the Gibbs sampler.Simulations were performed on a pair of n th full cousins whose common ancestors lived g generations in the past.Each datapoint is the mean time over 100 replicates and error bars show one standard deviation above and below the mean.Simulations were performed only for g ≤ 13 for the simple rejection sampler, after which point rejection sampling became difficult.
Although the number of required replicates is higher for the Gibbs sampler when the degree of the relationship is small, it quickly becomes small relative to the number of replicates required for full rejection sampling.The relative computation time of the two samplers is shown in Figure 6.
3.6.1.Sampling on a pedigree.The Gibbs conditional sampling approach becomes especially useful when there is more than one pair of relatives who must share IBD with one another.In this case, the probability that all constrained pairs share IBD at once can become exceedingly small, making simple rejection sampling impractical even for small pedigrees.
Figure 7 shows a comparison of the simple rejection sampler and the Gibbs conditional sampler for the pedigree structure in Figure 7A.In contrast with the analysis shown in Figure 5, we now condition on the much more restrictive event that three different pairs of genotyped individuals share IBD with one another.Specifically, we condition on the event that i shares IBD with j, j with k, and i with k.
Figure 7C shows a comparison of the distributions of segment counts and lengths for individuals i and j.This is easier to visualize than the full joint distribution of segments shared between all pairs of genotyped nodes i, j and k.By comparing the segment count distribution for the relationship R = (2, 2, 2) in Figure 7C with the count distribution for this relationship in Figure 5, it can be seen that these are not the same distributions due to the additional constraint that and j must share IBD with k as well as with one another.By comparing Figure 7B with Figure 6, it can be seen that including the additional requirement that IBD is shared with a third individual increases the computational time of both the simple rejection sampler and the Gibbs conditional sampler, but that it affects the simple rejection sampler much more than the conditional sampler.In particular, the Gibbs sampler becomes computationally more efficient than the simple rejection sampler when the common ancestors are around = 4 generations in the past, compared with g = 7 generations in the past for the two-relative case shown in Figure 6.In the more highly constrained case of a three-person pedigree, the simple rejection sampler becomes unwieldy by g = 7 generations in the past, compared to g = 13 generations in the two-person case.In contrast to the rejection sampler, the Gibbs sampler can be used for quite distant relationships.
Although one can run rejection simulations in parallel, the expected amount of shared IBD decreases exponentially with each degree of separation between two IBD-sharing individuals or with the addition of IBD-sharing pairs.As a result, one cannot simply run jobs in parallel to overcome the computational slowness of rejection sampling.For instance, the probability that two individuals share any IBD when they are separated through a single common ancestor who lived 15 generations Adding in additional constrained pairs quickly makes the problem completely intractable.For instance, the lineage extending upward from 1 to 16 via nodes and −5 in Figure 9 is nearly independent of the lineage extending upward from nodes 1 to 10 through nodes −2 and −4.If we wish to jointly sample IBD sharing among nodes 1 and 16 and 1 and 10, we would need to satisfy these two nearly independent constraints, each of which has a probability 10 of approximately 10 −3 .From this, we see that the sampling problem quickly becomes computationally challenging after the addition of a few IBD-sharing pairs.4. Applications 4.1.Simulating IBD-sharing distributions for relationship inference.The conditional sampler described in Section 3 is essential for simulating IBD between distant relatives who are known to share some amount of IBD.An immediate application of this sampler is simulating IBD sharing distributions that can be used in relationship estimators.As we have noted, the unconditional distributions that are currently used for this purpose are inappropriate because they do not account for the fact that putative relatives are generally ascertained on the basis of IBD sharing with one another.
For relationships of increasingly distant degree, Figure 8A shows the mean total IBD between two relatives separated by various degrees.The mean IBD from the unconditional distribution is shown in blue.The IBD from the conditional distribution (requiring τ > 0) is shown in orange.From Figure 8A, it can be seen that the two means begin to diverge when relationships are approximately 8 degrees (approximately third cousins).For relationships ten degrees and greater (approximately fourth cousins) the distributions have largely diverged.The unconditional mean total IBD goes to zero quickly and cannot be shown on the log-scale plot in Figure 8A for degrees above approximately 20.
The discrepancy between the conditional and unconditional distributions has major implications for relationship inference.Figure 8B shows degree estimates from a very simple method of moments estimator.The estimator takes the total IBD observed for a pair of individuals and finds the degree for which the expected total length of IBD most closely matches the observed total IBD.This kind of estimator is similar to those used by several direct-to-consumer genetic testing companies and bias engendered by employing the unconditional distribution is found in all currently-used relationship estimators.
Figure 8B shows that the estimator based on the unconditional distribution is heavily biased and reaches an asymptote near ten degrees.This explains the fact that the most distant relationships inferred by existing estimators are around ten degrees.In contrast, the unconditional estimator makes it possible to infer much greater degrees of relationship, although the moment estimator is still biased.The implication of Figure 8 is that one must use conditional sampling to obtain empirical distributions that are used for relationship inference in the usual case in which putative relatives are ascertained on the basis of IBD sharing.
4.2.
Simulating the pedigrees of today.In datasets that contain many genotyped or sequenced individuals like the UK Biobank 7 and direct-to-consumer genetic testing databases 2,9 , genealogical relationships can be detected that extend tens of generations into the past.
There is a strong need to be able to simulate data for these kinds of pedigrees in order to support the development of new algorithms.The conditional sampler presented in this paper makes it possible to simulate IBD between an individual and a set of distant relatives.
Figure 9 shows one such pedigree, including an individual and twenty of their distant relatives, each separated from the focal individual by 20 degrees of relatedness.This pedigree was simulated in under a minute using the conditional sampler.A single lineage of this pedigree would require an average of 10 3 replicates to achieve an acceptance from the rejection sampler and all 20 lineages together would require approximately 10 60 replicates, rendering the simulation effectively impossible using existing simulation methods.
Discussion
Methods like the one presented here are necessary in order to simulate the kinds of pedigrees that appear in large genotyping datasets.As these databases increase in size, simulation methods will need to handle increasingly large and dense pedigrees.
Several aspects of the existing simulator can be improved in order to decrease the runtime.In particular, we have been somewhat cavalier about the rejection sampler that is still employed to sample each meiosis in the Gibbs sampler.This rejection sampler does not take into account the regions in which sampled breakpoints must or must not occur and therefore it can be fairly wasteful.For example, in the pedigree in Figure 9, we might find that the tract transmitted from node 6 (the node to the immediate left of node 1) is in close proximity to the tract transmitted from node 16 (the node to the immediate right of node 1).Since these tracts must lie on opposite haplotypes in ancestor −5 of node 1 (though potentially near each other on the same chromosome), we must achieve a recombination event between them.If the distance between these tracts is small, the probability of obtaining a recombination event in this region will be small.In order to speed up sampling, we can constrain the locations in which breakpoints must or must not occur.The existing sampler makes it possible to simulate from genetic maps including sex-specific genetic maps.However, it is important to account for additional phenomena like cross-over interference 3 .Future versions of this sampling method will be updated to include cross-over interference.
Using IBD sharing statistics simulated with this approach, it is possible to replace the biased distributions that are still used for relationship inference with distributions that are appropriate for the inference problem.Doing so has a considerable effect on the distant relationships inferred in large genotyping databases, moving them from five or six generations in the past to fifteen or twenty generations in the past or more.The variance in these estimates also increases substantially, reflecting true underlying uncertainty.
Figure 1 .
Figure1.A pedigree of individuals (circles) with meioses between them (lines).Nodes i and j are related through common ancestors x and y.The number of meioses "up" from i to its common ancestors with j is u = 3.The number of meioses down from the common ancestors to j is d = 2.The number of common ancestors is a = |{x, y}| = 2.This yields the relationship R = (3, 2, 2) between i and j using the notation of Ko and Nielsen 14 .
Figure 2 .
Figure 2. One unconditional simulation replicate showing the tracts of IBD transmitted between relatives of various degrees.Each row shows the tracts of IBD (blue)that are shared between an individual (not shown) and an ancestor along the full linear genome.The bottom row shows that the individual shares exactly one copy of their full linear genome (half their entire diploid genome) with a parent.The second line from the bottom shows that they share approximately half their linear genome (1/4 of their diploid genome) with a grandparent.Thin red lines show the regions between recombination breakpoints that are shared between adjacent ancestors (e.g., between a 3 rd great grandparent and a 4 th great grandparent).N GG = N th Great Grandparent.ybp = years before present, assuming 30 years per generation.
Figure 3 .
Figure 3. IBD propagation in a meiosis.An example of the meiosis between child node c and parent node p in Figure1is shown.The haplotype that c inherits from p is shown, along with the "left" (x) and "right" (y) haplotypes in individual p.The red region is the IBD tract that c shares with i and the blue regions are the tracts that p shares with j.Arrows indicate the haplotype that c copies from.From left to right, c begins copying on haplotype x of p (left-most arrow).It then switches to haplotype y (second arrow) and then back to x (third arrow).In the region between the vertical dashed lines a and b, individuals i and j both share IBD with haplotype x of p.In the region between vertical lines b and c, individual i shares with haplotype y and individual j shares with haplotype x. Between c and d, both i and j share IBD with haplotype y.
Figure 4 .
Figure 4. Masking.In the figure, two chromosomes, each of physical length 100MBand genetic length 100CM are sampled by simulating one long chromosome comprised of the two chromosomes concatenated together with a gap between them (region shaded by the grey rectangle).We ignore all segments and transmissions in the gap and only accept transmissions that occur outside of it.Regardless of the physical length of the gap, the genetic length of the gap can be set to a large number to approximate free recombination between the two chromosomes.In the figure, the gap is 1Mb long in physical coordinates, but 1,000 cM long in genetic coordinates.The approach can be extended to arbitrarily many chromosomes with the caveat that sampling in the masked regions requires compute time.
Figure 5 .
Figure 5.Comparison of the simple rejection sampler with the Gibbs sampler for different relationships R = (u,d, a).1,000 replicates of each relationship were sampled using each method.Both samplers conditioned on the event that τ > 0 cM of IBD was transmitted between the two relatives.
Figure 7 .
Figure 7.Comparison of rejection sampler and Gibbs sampler for a pedigree with multiple constrained pairs.(A) Pedigree structure.Individuals i, j and k are related such that i and j share two common ancestors and k shares a single common ancestor with each of i and j. (B) Timing for the rejection and Gibbs samplers for different values of u, u ′ , and d.In each pedigree, we constrained u ′ = 2 and u = d = g and we varied g from 1 to 10. (C) Comparison of the rejection and Gibbs sampling distributions.Two statistics, the number of segments and segment lengths, are shown between i and j for relationship types R = (2, 2, 2) (first cousins) and R = (5, 5, 2) (fourth cousins).
Figure 8 .
Figure 8.Comparison of the unconditional and conditional sampling distributions.(A)Mean total IBD as a function of degree between two individuals connected through a single common ancestor.(B) A simple method of moments estimator of degree applied to the total amount of IBD between relatives of different degrees.
Figure 9 .
Figure 9.An example of a pedigree containing an individual and twenty of their distant relatives that can be simulated using the conditional sampler.Each individual is denoted by a disc and each line represents a parent-child connection between two individuals.Genotyped nodes are indicated with positive numbers and ungenotyped nodees are indicated with negative numbers.In this topology, a focal individual (1) is connected to their distant relatives, each through a single common ancestor who lived ten generations in the past. | 2024-05-21T13:11:31.495Z | 2024-05-16T00:00:00.000 | {
"year": 2024,
"sha1": "c4eff2d33d917d6caf8aa9a055a0a73ab33998b0",
"oa_license": "CCBYNCND",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2024/05/16/2024.05.13.594012.full.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "59e6707653a95487a9d9fffb6955414009360e04",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
258256581 | pes2o/s2orc | v3-fos-license | Comparative analysis of clinical efficacy between laparoscopic and open pancreaticoduodenectomy
Laparoscopic pancreaticoduodenectomy (LPD) is a technically demanding procedure but is gradually gaining acceptance in clinical practice. This study was performed to compare the short-term outcomes of LPD with open pancreaticoduodenectomy (OPD). The perioperative data of the patients who underwent LPD (n = 25) and OPD (n = 40) from January 1, 2017 to December 31, 2021 at Zhangjiagang Hospital Affiliated to Soochow University were collected and retrospectively analyzed. All patients received R0 resection, and none of the patients died within the perioperative period. The preoperative data (gender, age, body mass index [BMI], and preoperative bilirubin), the intraoperative data (operative time, number of retrieved lymph nodes), and postoperative data (level 1 monitoring time, postoperative fluid diet time, postoperative fluid feeding time, and hospitalization cost) were comparable between the 2 groups (P > .05). The estimated blood loss, abdominal drainage tube removal time, postoperative hospital stay, catheter removal time, and analgesic drug use were significantly lesser in the LPD group, when compared to the OPD group (P < .05). LPD is safe and feasible. Compared to OPD, LPD has less surgical trauma, less intraoperative bleeding, and faster postoperative recovery.
Introduction
Pancreatic cancer is the seventh leading cause of cancer death. In 2020, there were about 466,000 deaths, accounting for about 4.7 percent of deaths from malignant tumors. About 47.1% of new cases and 48.1% of deaths globally occur in Asia. [1] For the past few decades, pancreaticoduodenectomy (PD) has been the primary treatment for pancreatic cancer.
Laparoscopic pancreaticoduodenectomy (LPD) is a technically difficult minimally invasive procedure, since the pancreatic head and duodenum are located in the retroperitoneum, and lie in close proximity to the major vessels. However, due to technological innovations and improvements in surgical techniques, LPD has gained increasing popularity and acceptance at several surgical centers worldwide. [2] Since Gagner and Pomp reported their first LPD experience in 1994, [3] LPD has gradually been accepted in clinical practice, and clinical studies that compared LPD and open pancreaticoduodenectomy (OPD) have been carried out at major medical centers in China and abroad. Previous studies have reported that LPD can be a promising alternative to OPD in selected patients, with good surgical and oncology outcomes. [4][5][6] However, some researchers consider that although LPD is safe and feasible, the overall complications and perioperative mortality are comparable to OPD. Hence, there is much debate on the merits and demerits of LPD. We present a case series that compared the short-term outcomes of the LPD and OPD performed at our center.
Clinical data
The data of patients, who underwent PD at Zhangjiagang Hospital Affiliated to Soochow University from January 1, 2017 to December 31, 2021, were retrospectively analyzed. According to the surgical technique, these patients were divided into 2 groups: LPD group and OPD group. All patients having periampullary tumors (including the ampulla itself, lower segment of common bile duct, duodenal papilla, and pancreatic head) without distant metastasis, and no other serious organ insufficiency of the heart, lungs, brain, kidneys, or other important organs were included. Patients with pancreatic head tumor diameter >4 cm were included in the OPD group. All surgeries were performed by surgeons with experience in open PD and minimally invasive surgery. Experience in LPD was in the early phase of the learning curve due to the small number of cases. In this early phase of the learning curve, the authors selected young patients (age <50 years, good cardiopulmonary function, body mass index [BMI] < 28) with good general condition having tumors in the lower segment of common bile duct or periampullary region and dilated common bile duct and pancreatic duct without vascular or pancreatic invasion and no history of complicated abdominal surgery for LPD. All patients provided written informed consent for the operation.
The reviewed data included the following: age, gender, BMI, surgical and postoperative recovery indicators, postoperative complications, postoperative pathological parameters, length of hospital stay, and cost of treatment. The surgical indicators included operative time and intraoperative blood loss, and the postoperative recovery indicators included postoperative intensive care unit (ICU) stay, total duration of drainage tube, postoperative urinary tube removal time, and postoperative fluid feeding time. The postoperative complications were classified as postoperative pancreatic fistula, biliary leakage, abdominal bleeding, delayed gastric emptying, ascites, and perioperative death. The postoperative pathological parameters were the R0 resection rate and number of dissected lymph nodes.
The diagnosis of postoperative pancreatic leakage was based on the criteria formulated by the International Organization of Pancreatic Surgery in 2016. [7] Other complications were graded according to the Clavien-Dindo classification system (higher than grade III was defined as a major complication). [8] Delayed gastric emptying were defined according to the established international consensus. [7]
Ethics
The present study was approved by the Ethics Committee of Zhangjiagang Hospital Affiliated to Soochow University (number:ZJGYYLL-2022-07-010). Written informed consent was obtained from each participant. The study was conducted in accordance to the Helsinki Declaration.
Operative methods
2.3..1. Preoperative treatment. All patients were routinely examined by contrast-enhanced computed tomography or magnetic resonance imaging before surgery, gastroduodenoscopy, contrast-enhanced ultrasound, and other examinations, as required. Patients with significantly elevated bilirubin (≥340 μmol/L) before surgery were treated with preoperative biliary drainage. All patients were selected for laparoscopic or open surgery, based on the informed consent of the patient and their families.
2.3..2.
Laparoscopic pancreaticoduodenectomy. Pneumoperitoneum was established using the Veress needle or open Hasson technique, and a 10 mm port was placed below the umbilicus. Then, a 12 mm port was placed in the subcostal region in the left axillary line, and a 5 mm port was placed in the midclavicular line on the left and right sides. Classical PD was performed, which included the distal stomach, common bile duct, uncinate process of the pancreas, horizontal part of the duodenum, and surrounding lymph nodes. The gallbladder was temporarily retained for traction during the operation. A 3 cm incision was made below the umbilicus to remove the specimen.
Reconstruction method: The choice of reconstruction was duct-to-mucosa pancreatojejunostomy. A supporting tube was placed in the pancreatic duct, and the posterior wall of the pancreas was continuously sutured to the jejunum using a 3-0 prolene suture. After opening the jejunum at an appropriate position, the supporting tube was placed in the jejunal opening, and the pancreatic duct and small intestine were sutured with 5-0 prolene using 3 stitches. The cut margin of the pancreas was sutured to the jejunum using 3-0 prolene, and this was continuously stitched posteriorly and anteriorly. Approximately 5 cm away from the pancreaticojejunal anastomosis, a jejunal opening of 1 cm was made. The cut end of the common bile duct was continuously anastomosed to the jejunum in an endto-side fashion using 4-0 prolene sutures. Then, the gall bladder was removed, and the transverse mesocolic opening was closed. Next, the small intestine and stomach were anastomosed at 40 cm from the mesangial foramen. Initially, gastrojejunal anastomosis was performed using a long articulating endoscopic linear cutter. Then, the common opening was closed by intermittent sutures.
2.3..3. Open pancreaticoduodenectomy. The abdomen was opened in layers, and the peritoneal cavity was explored for metastasis. The second part of the duodenum and the head of the pancreas were fully mobilized using Kocher technique. Then, cholecystectomy was performed, and the common bile duct was divided just below the hepatic hilum. Afterwards, the distal end of the stomach was divided using a cutting stapler. Next, the pancreas was cut above the superior mesenteric vein. Then, the uncinate process and head of the pancreas were separated, and the surrounding lymph nodes were dissected. Afterwards, the jejunum was divided at approximately 10 cm from the ligament of Treitz.
Reconstruction method: A 10F thin tube was inserted into the pancreatic duct. Then, duct-to-mucosa pancreaticojejunostomy was performed in single layer using non-absorbable sutures. Afterwards, end-to-side hepaticojejunal anastomosis was performed using the 1-layer method at approximately 10 cm away from the pancreaticojejunostomy using 5-0 prolene sutures. Subsequently, end-to-side manual gastrojejunostomy was performed at approximately 45 cm away from the pancreaticojejunostomy using a long articulating endoscopic linear cutter.
Postoperative management
In the early postoperative period, the patients were kept fasting. Medications included proton pump inhibitors, intravenous antibiotics, analgesics, fluids and liver protection drugs. Blood routine and biochemical tests were conducted regularly. Patients were started on enteral nutrition through jejunostomy tube on third or fourth postoperative day. Color and volume of the drainage fluid were observed, and amylase content of drainage fluid was detected. The timing of drainage tube removal was determined by the color of the drainage fluid, drain output and postoperative abdominal computed tomography findings.
Statistical analysis
SPSS version 25.0 was used to analyze the perioperative data of patients included in the present study. The quantitative data were expressed as mean ± standard deviation, and compared using t test. The qualitative data were compared using χ 2 -test. P < .05 was considered statistically significant.
Results
The present study included 25 and 40 patients in the LPD and OPD groups, respectively. R0 resection was achieved for all patients. None of these patients required conversion from LPD to OPD. The preoperative characteristics of patients in the 2 groups are presented in Table 1. The preoperative data (gender, age, BMI) of patients in the 2 groups were similar (P > .05). There was no significant difference in tumor size, American Society of Anesthesiologists grade, number of diabetes mellitus patients, number of preoperative biliary drainage and preoperative bilirubin between the 2 groups.
The intraoperative and postoperative parameters of these patients are presented in Table 2. The number of removed lymph nodes in the 2 groups were similar. The intraoperative blood loss was significantly lesser and the operative time was significantly longer in the LPD group, when compared to the OPD group (P < .05). In terms of postoperative recovery, the duration of ICU stays, and duration of postoperative liquid diet of patients in the 2 groups were similar (P > .05). The abdominal drainage tube removal time in LPD group was less than that in OPD group (P < .05). The urinary catheter removal time and number of analgesic drugs used were lesser in the LPD group, when compared to the OPD group (P < .05). None of the patients died during the perioperative period.
The postoperative complications are presented in Table 2. There were 9 cases in the LPD group, with a complication rate of 36% (9/25). Among these 9 cases, there were 4 cases of pancreatic leakage (all grade B pancreatic leakage), which were treated with abdominal irrigation, fluid rehydration and antimicrobials. One case of abdominal bleeding was treated with embolization of the splenic artery by interventional angiography, and 3 case of delayed gastric emptying (grade 2)was treated by fasting, nasogastric decompression, nutritional support, and gastrointestinal motility drugs. There was 1 case of bile leakage managed conservatively. Furthermore, fifteen patients in the OPD group developed complications, and the complication rate was 37.5% (15/40). Major complications (CD grade ≥ II) occurred in both groups (LPD, 8 vs OPD,14) had no significant difference. There were 8 patients with pancreatic leakage, and all cases were grade B pancreatic leakage. Furthermore, 4 patients had delayed gastric emptying (grade 2). These patients improved after fasting, nasogastric decompression, nutritional support, and gastrointestinal motility stimulants. Moreover, 2 patients with abdominal bleeding were treated with reoperation and blood transfusion. There was 1 case of bile leakage treated conservatively.
The hospitalization indicators are presented in Table 3. The duration of postoperative hospitalization days was significantly lesser in the LPD group, when compared to the OPD group (P < .05). However, the hospitalization expenses between these 2 groups were similar (P > .05).
Discussion
Undeniably, the concept of precise and minimally invasive surgery has gradually gained popularity among people. LPD has the advantages of small wound, less intraoperative blood loss, more precise operation, less postoperative pain, and short recovery time. [9] At present, a number of studies have confirmed [3,4,10,11] that LPD has advantages over OPD, in terms of reducing surgical bleeding, relieving postoperative pain, and shortening the postoperative recovery time. The results of the present study confirmed the benefits of LPD over OPD. It was considered that with the continuous development of laparoscopic technology and improvements in surgical expertise, the advantages of laparoscopic technology would be further reflected.
A number of studies [2,12] have reported that LPD requires a longer operation time, which means prolonged anesthesia and pneumoperitoneum, thereby increasing the risk of perioperative cardiopulmonary complications. Similar results were found in the present study. However, the operative time of LPD at our center gradually decreased from 460 minutes in the initial period to 300 minutes in the later period of our surgical experience. In the LPD group, the operative time of the first case was the longest due to the lack of experience and limited skills of our surgical team in doing pancreatojejunostomy and hepaticojejunostomy laparoscopically. However, with the increase in the number of surgical cases, the laparoscopic skills improved, especially the time to perform laparoscopic pancreaticojejunostomy and biliojejunostomy leading to gradual shortening of the operation time. Inorder to reduce the operation time, we also performed gastrojejunal anastomosis through the small epigastric incision made to retrieve the specimen. However, it did not shorten the operation time significantly. Hence, these findings indicate that LPD has a steep learning curve, and that the operative time gradually shortens with the increase in experience of the surgical team.
Cost is another obstacle in the popularization of LPD. High cost associated with the use of laparoscopic instruments and staplers may make it difficult for patients to accept LPD. However, in the present study results, there was no statistical difference in hospitalization cost between LPD and OPD. In addition, Gerber et al [13] reported that the cost of LPD and OPD are similar, while the total nursing cost was even lower than OPD. This means that LPD may be more acceptable, in terms of total cost, when compared to OPD, in the future.
In terms of surgical resection, both OPD and LPD achieved R0 resection, and the number of removed lymph nodes in both groups were similar, suggesting that LPD was not inferior to OPD in achieving R0 resection and adequate lymphadenectomy. These findings are consistent with that of existing studies.
PD is associated with a high incidence of postoperative complications. A meta-analysis conducted by Professor Boggi [10] revealed that the total incidence of postoperative complications after LPD was 41.2% (252/611). Furthermore, a multi-center retrospective study conducted in China reported postoperative complications in almost half of LPD patients. [11] In the present study, postoperative complications occurred in 36% and 37.5% of cases after LPD and OPD, respectively, which were lower than those previously reported. However, LPD has no advantage over OPD in reducing complications.
One of the major complications of PD is anastomotic leakage, especially pancreaticoenteric anastomosis. [14] Pancreaticojejunostomy is the most preferred anastomotic technique for the pancreatic stump. In the present study, there were 4 cases of postoperative pancreatic leakage (16%) in the LPD group and 8 cases of postoperative pancreatic leakage (20%) in the OPD group. In our hospital, end-to-side pancreaticojejunostomy anastomosis has always been used for LPD. A meta-analysis conducted by Hua J [15] revealed that catheter-to-mucosal pancreaticojejunostomy had no advantage in preventing pancreatic leakage, but this could reduce anastomotic bleeding, and may be beneficial for anastomotic healing. Therefore, the investigators intend to perform catheter-to-mucosal pancreaticojejunostomy in future LPDs. In addition, a number of hospitals in China perform pancreatogastrostomy. A domestic meta-analysis [16] suggested that pancreatogastrostomy can produce good results, in terms of pancreatic leakage. Furthermore, Fernandez-Cruz et al [17] reported a lower incidence of postoperative pancreatic fistula in pancreatogastrostomy, when compared to pancreaticojejunostomy, through a randomized controlled trial (4% vs 22%, P < .01). However, pancreatogastrostomy requires intraoperative gastric partitioning, which is quite complicated, especially in the case of laparoscopy, and there are higher requirements for the surgeon. Therefore, the author considered that pancreatogastrostomy is still too early to be popularized.
There were some limitations in the present study. First, the present study was a single-center study with a small sample size. Second, merely the short-term outcomes were investigated. Third, the postoperative ICU stay and duration of hospital stay were prolonged in this study due to institutional policy of gradual removal of endotracheal tube, close monitoring in ICU, delayed enteral feeding and prolonged observation in hospital due to safety concerns by the treating physicians and the family members. However, in future we intend to reduce the postoperative hospital stay of the patients by adopting enhanced recovery protocols. Fourth, the steep learning curve of LPD contributed significantly to its long operative time.
Briefly, the present study revealed that LPD has advantages, in terms of lesser intraoperative blood loss and faster postoperative recovery indicators. Furthermore, the operation time was longer, when compared to OPD. However, the operation time gradually decreased with increasing experience. Therefore, with the advancements in the laparoscopic equipment and the standardization of the surgical technique of LPD, the popularity of LPD would be accelerated due to its advantages over OPD. | 2023-04-22T05:05:58.155Z | 2023-04-21T00:00:00.000 | {
"year": 2023,
"sha1": "98517c7e50f425870f80405fc25d7023c9c94e10",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "98517c7e50f425870f80405fc25d7023c9c94e10",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
9018675 | pes2o/s2orc | v3-fos-license | Bone Marrow Metastasis Is an Early Stage of Bone Metastasis in Breast Cancer Detected Clinically by F18-FDG-PET/CT Imaging
Objective To determine the value of 18F-FDG PET/CT in detection of bone marrow (BM) metastasis in breast cancer which is considered an early stage of bone metastasis. Patients and Methods Retrospectively, breast cancer patients with bone metastasis were included. BM metastasis was considered if the lesion was PET positive/CT occult while bone metastasis was considered if the lesion was PET positive/ CT positive. BM metastases were observed sequentially on F18-FDG PET/CT. Results We included 35 patients. Eighteen patients (51%) had BM metastases in addition to other bone metastases. BM metastases comprised 24% of all lesions. Posttreatment scan was performed on 26/35 patients. Twenty-three percent of BM metastases had resolved completely without causing bone destruction after treatment. Sixty-five percent of BM metastases had converted into bone metastases after treatment. Twelve percent of BM metastases had persisted after treatment. Conclusion This retrospective study showed clinically by 18F-FDG PET/CT imaging that BM metastasis is an early stage of bone metastasis in breast cancer. Interestingly, 18F-FDG-PET/CT showed that early eradication of individual BM metastasis by systemic treatment precluded development of bone metastasis. However, more research is needed to study the impact of an early diagnosis of BM metastases on treatment outcome.
Introduction
A small number of breast cancer cells (BCCs) could exit the primary tumor site and enter the bone marrow (BM) during the early phase of tumor development. BCCs show a preference for the bone marrow. Immediately after BCCs migration and invasion into the BM, they interact with mesenchymal stem cells which protect BCCs from immunosurveillance. Moreover, these cells become dormant as they can remain in cycling quiescence close to the endosteum area in the BM. Quiescence of breast cancer cells (dormant cells) makes it increasingly difficult to target the dormant cancer cells by chemotherapy [1].
Anatomically, BCCs frequently metastasize to axial bone skeleton, that is, the spine, ribs, girdles, and bony pelvis. In adults, axial bone skeleton contains the red marrow, which provides vital factors for the BCCs creating what is called "bone metastatic niche." These factors include ample cells, extracellular matrix, nutrition, and signaling molecules [2]. Bone marrow macrometastasis appears once the dormant BM metastatic cells outgrow (proliferate). Conventional bone marrow biopsies indicate that about 26-40% of patients with metastatic breast cancer have bone marrow involvements [3,4].
In osteolytic bone metastasis, a complicated molecular interaction (called "vicious cycle" of molecular crosstalk) takes place between metastatic BCCs and bone metastatic niche. During this interaction, a variety of cytokines and growth factors are produced by metastatic BCCs which directly stimulate the osteoclast maturation or indirectly promote osteoclast differentiation. The latter is usually accomplished by stimulating the BM osteoblasts to produce Interlukin-6 (IL-6) and receptor activator of nuclear factor-kB ligand (RANKL) [1]. The survival and proliferation of metastatic BCCs in osteolytic bone metastasis are in turn promoted by several factors released by bone matrix resorption caused by osteoclast activation. These factors include transforming growth factor-(TGF-) and insulinlike growth factor-1 (IGF1) [2].
2 BioMed Research International F18-FDG-PET/CT is a sensitive molecular imaging modality capable of diagnosing bone marrow metastases by means of increased FDG uptake in growing metastatic cancer cells [5]. In addition, F18-FDG-PET/CT is sensitive in detecting metastatic bone lesions, particularly osteolytic and mixed lesions [6]. However, until now, few studies have evaluated the BM metastases by F18-FDG-PET/CT in breast cancer. Our study aims to determine the value of F18-FDG-PET/CT in the diagnosis of bone marrow (BM) metastasis in breast cancer patients which is considered an early stage of bone metastasis.
Patients and Methods
The medical records of breast cancer patients with metastatic disease were reviewed retrospectively from January 2012 to June 2015. The study was approved by hospital IRB. We included in our study the patients who had metastatic bone disease proven by either staging or follow-up F18-FDG-PET/CT.
Bone marrow metastasis was considered if there was PET positive/CT occult lesion, that is, focal F18-FDG uptake on PET images overlying intact bone on CT images. This is in contrast to bone metastasis which is focal F18-FDG uptake on PET images overlying destructive bone lesion (osteolytic, osteoblastic, or mixed) on CT images [5,6]. Bone marrow lesions were observed sequentially in the patients who had undergone sequential F18-FDG-PET/CT, that is, pretreatment and posttreatment F18-FDG-PET/CT scans. Posttreatment assessment was categorized as responsive, progressive, or stable based on FDG focal uptake. Disappearance of FDG focal uptake posttreatment was considered responsive, increasing FDG focal uptake (in terms of intensity and/or number), posttreatment was considered progressive and stable disease was considered if no change was noted in posttreatment FDG focal uptake.
Included patients in this study were referred for F18-FDG-PET/CT for staging of breast cancer, for follow-up of breast cancer, and/or for posttreatment evaluation of metastatic disease. The patients received systemic chemotherapy and/or hormonal therapy according to current international guidelines.
2.1. Imaging. F18-FDG-PET/CT imaging was acquired utilizing an integrated PET/CT device (Discovery 600; GE Medical Systems, Milwaukee, Wis). The whole-body mode (from the base of the skull down to upper thighs) was implemented as the standard software. Before the PET/CT acquisition, the patients fasted for at least 6 hours. All patients were tested to confirm that their glucose level was not more than 200 mg/dL before F18-FDG administration. Before PET, unenhanced CT was performed according to a standardized protocol performed with the following settings: transverse 2.5 mm section thickness, 120 kVp, and 80-180 mA according to local body thickness. PET scans were obtained 40-90 minutes after an intravenous administration of mean 296 Mbq (8 mCi) F18-FDG. The acquisition time was 2-3 minutes per bed position in the two-dimensional mode. Images were reconstructed with attenuation-weighted ordered-subset expectation maximization with and without attenuation correction.
Results
Thirty-five patients, with an average age of 48.1 y (ranging between 27 and 80 years), were included in our study. Twenty patients were newly diagnosed with breast cancer metastasized to bone while 15 patients had developed bone metastasis several years after diagnosis of breast cancer (Table 1). Eighteen patients (51%) had BM metastases (ranging between 2 and 70 lesions with average of 23 lesions) ( Figure 1) in addition to other structural (destructive) bone metastatic lesions (ranging between 1 and 110 lesions with average of 33) ( Table 1). BM metastases comprised 24% of all metastatic lesions noted on pretreatment F18-FDG-PET/CT according to the following formula: BM lesions/(BM lesions + bone lesions) ( Table 2).
Twenty-six out of 35 patients had undergone 3-10 months' posttreatment F18-FDG-PET/CT. Two out of 35 patients had been lost to follow-up at our hospital. Seven out of 35 patients were severely ill secondary to development of disseminated metastatic disease (BM, bone, liver, lung, and lymph nodes) several years after breast cancer diagnosis. Accordingly, they died within 2-3 months afterward, so they had no follow-up. Of note, they had had large number of BM lesions (Table 1).
In 26 patients who had follow-up, only 4% of metastatic lesions were BM lesions and 96% of the lesions were bone metastases ( Table 2). Eighteen out of 26 patients (69%) had complete response after treatment as all BM lesions had disappeared and all bone metastatic lesions had become PET negative. Five out of 26 patients (19%) had no response to treatment with disease progression as PET positive lesions had increased in number including BM lesions and/or bone metastatic lesions. Three out of 26 patients (12%) had stable disease or had partial response to treatment as PET positive lesions had been stable or had decreased in number, respectively (Table 1).
BM lesions had totally disappeared in responsive patients. In 5 progressive patients, 2 patients had new BM lesions, 1 patient had her BM lesions increased in number, 1 patient had her BM lesions decreased in number as they had progressed into bone metastases, and 1 patient had partial response as her BM lesions had partially disappeared (Table 1).
Twenty-three percent of BM metastases (17 out of 75 lesions) had resolved completely without causing bone destruction after treatment as noted on posttreatment 18F-FDG-PET/CT (Figure 1). Sixty-five percent of BM metastases (49 out of 75 lesions) had converted into structural destructive bone metastatic lesions in those patients who underwent posttreatment 18F-FDG-PET/CT. The structural destructive bone metastatic lesions were mostly osteolytic/mixed lesions ( Figure 2) and less frequently osteoblastic ( Figure 3). Twelve percent of BM metastases (9 out of 75 lesions) had persisted on posttreatment F18-FDG-PET/CT (Table 1).
Discussion
Bone marrow and bone receive a high volume of blood flow and both are rich in growth factors. It is well known that BCCs spread hematogenously. Ninety percent of bone metastases in breast cancer patients start as intramedullary BCCs deposits in the red marrow [3,4]. The deposited BCCs in the red marrow usually go in quiescence escaping immunosurveillance and chemotherapy. Months or years later, the quiescent BCCs overgrow and become BM macrometastasis. The latter will interact with BM microenvironment. Such interaction eventually results in the formation of bone metastasis and it is regulated by various growth factors among the tumor, osteoclasts, and osteoblasts. Interestingly, bone metastasis was observed in more than 70% of patients with advanced breast cancer as bone microenvironment is suitable for the growth of metastatic BCCs [7]. Bone marrow carcinosis is not always associated with radiographic abnormality [3]. F18-FDG-PET/CT is highly sensitive in detecting bone marrow metastasis and osteolytic bone metastases [5,6]. On the other hand, CT is not able to detect early BM metastasis (CT occult lesions) even when utilizing the optimal CT window width and level [4]. Bone marrow metastases were noted clinically on F18-FDG-PET/CT in about half of our patients who presented BioMed Research International with new bone metastases. Our data showed that bone marrow metastases played a significant role in metastatic bone disease pathogenesis as noted clinically in pre-and posttreatment F18-FDG-PET/CT. This study showed that molecular imaging (FDG-PET scanning) but not CT scanning has a capability to detect BM metastasis. This study showed by clinical molecular imaging that osteolytic and osteoblastic metastatic bone lesions were preceded by bone marrow metastases. In other words, we showed clinically that BM metastasis is an early stage of bone metastasis that is detected by F18-FDG-PET but not by CT. Interestingly, we showed that early successful eradication of bone marrow metastatic lesion by the systemic treatment precluded the development of destructive metastatic bone lesion as 24% of the observed BM lesions in our patients had disappeared without causing bone destructive lesions on follow-up. However, it is not necessary to find concomitant BM metastasis with newly diagnosed bone metastasis in every breast cancer patient by F18-FDG-PET/CT as half of our patients had no BM metastasis. This is justifiable because BM metastasis detection by molecular imaging is time dependant. The probability of detecting BM metastasis by molecular imaging is high if the patient is imaged at the beginning of bone metastatic process. More time elapsed after the beginning of bone metastatic process means that few or no BM metastatic lesions are detected by molecular imaging.
In one study utilizing F18-FDG-PET/CT, 17 breast cancer patients were found on restaging F18-FDG-PET/CT to have bone marrow metastases concomitant with bone metastases causing 57% stage upgrade. The early identification of BM metastases in this study had a direct consequence on the choice of the therapeutic approach. They showed that one patient with bone marrow metastases had better prognosis due to the early beginning of the systemic therapy [5].
Another case report showed biopsy proven diffuse bone marrow carcinosis in recurrent breast cancer patient by F18-FDG-PET/CT. Interestingly, this patient had the phenomenon of SuperScan on bone scintigraphy, which is thought to be secondary to high bone turnover stimulated by diffuse bone marrow carcinosis which was reversible after aggressive treatment [4].
To the best of our knowledge, this is the first clinical report demonstrating the role of bone marrow in the pathogenesis of bone metastases in breast cancer by tracking bone marrow metastases clinically on sequential F18-FDG-PET/CT on several patients. This study confirms clinically by molecular imaging the relation between BM metastases and bone metastases. This in turn leads to the fact that F18-FDG-PET/CT could be helpful in early diagnosis of bone metastasis particularly in high-risk breast cancer patients (i.e., young patients, locally advanced disease, and inflammatory breast cancer). Accordingly, treatment can be started early leading to potentially better outcome.
It is important to differentiate between diffuse bone marrow carcinosis and focal bone marrow metastases. Anemia was the most frequent symptom at presentation in patients with diffuse bone marrow carcinosis which is associated with poor prognosis according to one study [8]. Median survival time after the diagnosis of apparent diffuse bone marrow carcinosis was found to be 6.43 months in that study [8]. In contrast, the estimated median overall survival from the time of diagnosis of diffuse bone marrow carcinosis was 19 months in another study, thereby emphasizing the fact that diagnosis of diffuse bone marrow carcinosis should not be regarded as a poor prognostic indicator with possible achievement of long-lasting disease control by systemic treatment [9]. In our study, 7 patients had disseminated BM metastasis as part of disseminated visceral, nodal, and BM/bone metastasis. Those patients had bad prognosis as they died within 2-3 months after presentation. Those patients presented with disseminated disease relapse several years after breast cancer diagnosis (range of 1-6 years, average 3.3 years).
This study is limited by being retrospective. This is true because bone marrow metastasis is a temporary state of bone metastasis. In other words, detection of BM metastasis is affected by when to do the molecular scan. We admit that most patients who were found to have bone marrow metastases by molecular imaging already had bone metastases somewhere else in their bony skeleton. Within the limitation of this retrospective study, we do not know for sure whether early detection of bone marrow metastases by molecular imaging will have a significant impact on patients' management/prognosis or not.
Conclusion
This retrospective study showed clinically by F18-FDG-PET/CT imaging that bone marrow metastasis is an early stage of bone metastasis in breast cancer preceding osteolytic and osteoblastic metastatic bone lesions. Interestingly, F18-FDG-PET/CT showed that early eradication of individual bone marrow metastasis by systemic treatment precluded development of destructive bone metastasis. However, more research is needed to study the impact of an early diagnosis of bone marrow metastases by molecular imaging on breast cancer treatment outcome.
Conflicts of Interest
The author declares no conflicts of interest. | 2018-04-03T04:24:22.825Z | 2017-08-13T00:00:00.000 | {
"year": 2017,
"sha1": "a9233a9d2598f24c252db5732c7cd9428dd60585",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/bmri/2017/9852632.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76f490ecbab1044e2fb350c1621c53e823a7aae1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4645905 | pes2o/s2orc | v3-fos-license | Preview : Published ahead of advance online publication Body mass index trajectories in childhood is predictive of cardiovascular risk : Results from the 23-year longitudinal georgia stress and heart study
This is a PDF file of an unedited peer-reviewed manuscript that has been accepted for publication. NPG are providing this early version of the manuscript as a service to our customers. The manuscript will undergo copyediting, typesetting and a proof review before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers apply.
INTRODUCTION
Obesity has been well established as a major risk factor for cardiovascular disease (CVD), 1 which are the leading causes of death and disability worldwide. 2 Obesity have their beginnings in childhood and then track overtime, and childhood obesity has reached epidemic levels in developed as well as in developing countries. 3,4 Body mass index (BMI) is widely used to define obesity. Previous studies indicated that beside BMI levels, more rapid gains in BMI were also associated with an increased risk of CVD, and the BMI gain during childhood might be more critical. 5,6 Several studies have demonstrated that varied BMI trajectory patterns exist in population. 7,8 For example, Li et al. found that life-course BMI trajectories (between 7-43 years old) were associated with blood pressure at 43 years old. 8 However, little is known about the BMI trajectory patterns during childhood and their effects on CVD risk in later life. Therefore, we aimed to identify subgroups of individuals with differential trajectories in BMI during childhood (5-18 years), and to determine the associations of childhood BMI trajectory patterns with the risk of subclinical CVD in young adulthood (mean age: 24 years), indexed by intima-media thickness (IMT) and left ventricular mass index (LVMI). The results of our study could shed some light on the prevention strategy for early identification of high CVD risk population.
METHODS
The participants were from the Georgia Stress and Heart (GSH) study, an ongoing longitudinal study designed to evaluate the development of CV risk factors in youth and young adults. Recruitment and evaluation of participants have been described in detail elsewhere. 9 The participants who had at least 3 BMI measurements prior to 18 years old, regardless of whether they had CVD subclinical measurements in young adults, were used to identify subgroups with similar underlying BMI trajectory patterns. The Institutional Review Board of the Medical College of Georgia have approved the study and informed consent of each participant was obtained. Participants' height and weight were measured with a Healthometer medical scale that was calibrated daily. Body surface area (BSA) was calculated as SQRT ([height (cm) * weight (kg)]/ 3600). 10 IMT and LVM were measured up to 3 times in 501 participants (a total of 1115 measurements were eligible for analysis) and 496 participants (a total of 1118 measurements were eligible for analysis) respectively at visit 12, 14 and 15. Hewlett-Packard Sonos 5500 (Andover, MA) equipped with a 7.5 MHz linear array probe was used to measure the common carotid artery IMT. Sector-guided Mmode echocardiograms were performed with a Hewlett Packard Sonos 1500 echocardiograph to measure the LVM. LVM index was calculated using the necropsyvalidated formula of Devereux et al and normalized to BSA to obtain LVMI. 11,12 Latent class modeling was used to identify subgroups that share a similar underlying trajectory in BMI. 13 A Stata plugin program (Traj) was used for estimating group-based trajectory model using maximum likelihood method. 14 Briefly, this method is designed to identify clusters of individuals following a similar developmental trajectory on an outcome of interest, based on a semipara metric, group-based approach. 13,15 Selection of the bestfitting trajectory model was assessed using the Bayesian Information Criterion. Linear and quadratic terms of age were considered and evaluated based on their significance level. Once the optimal latent class model was estimated, posterior probabilities would be calculated from the recruitment probabilities (which are the conditional probabilities of class membership for a given response pattern), then individuals were assigned to latent class with the highest posterior probability. 13 The baseline characteristics of participants, including age, race, sex, father's education level, systolic and diastolic BP, among BMI trajectories were compared using ANOVA test for continuous variables or Chi-squared test for categorical variables. The associations of the trajectory groups with IMT and LVMI were examined by using mixed linear regression model with unstructured covariance, meanwhile trend test was performed by treating BMI trajectories as a continuous variable. A two-sided P < 0.05 was considered significant. All data analyses were performed using Stata software version 12.1 (STATA Corp., TX, US).
RESULTS
The trajectory patterns in BMI were examined among 626 participants aged 5-18 (a total of 4146 measurements, up to 12 visits). Three trajectory groups in BMI were identified. As shown in the Figure, 387 (61.8%) participants started with a low level and maintained a low increase in BMI (considered as normal group); 171 (27.3%) participants started with a moderate level and experienced a moderate increase in BMI (moderate-increasing [MI] group); and 68 (10.9%) participants started with a high level and had relatively fast increase in BMI level during childhood (high-increasing [HI] group). The participants in the HI group tended to be male, black, higher systolic blood pressure level at the baseline, and more likely to have a father with a lower educational level (P < 0.05).
The mean IMT and LVMI in the HI and MI group were higher than in the normal group (0.55, 0.52 and 0.50 mm for IMT; 71.32, 71.60 and 68.52g/m 2 for LVMI). Increased rate of growth in BMI during childhood was significantly associated with increased LVMI in young adulthood (P for trend <0.05). Elevated BMI trajectory groups were independently associated with LVMI after adjustment for age, race, sex, blood pressure, father's education. Compared to the normal group, individuals in the MI and HI groups showed higher IMT (β=0.014, P=0.043 for the MI group; β=0.034, P =0.001 for the HI group) and LVMI (β=4.148, P<0.001 for the MI group; β=3.079, P =0.100 for the HI group), respectively. The associations for LVMI were not virtually changed after adjustment for BMI at baseline or in young adulthood, but the associations for IMT were not significant after adjustment for BMI at baseline (Table).
DISCUSSIONS
Three trajectory groups in BMI during childhood were identified. We for the first time found that childhood trajectories of BMI were significantly associated with mean IMT and LVMI in young adulthood, which are two accepted subclinical markers for cardiovascular disease. 16,17 The associations with LVMI were independent of the BMI at baseline or in young adulthood, while the associations with IMT were not significant after adjustment for baseline BMI.
Several studies have incorporated intercepts and slopes, or latent trajectory methods to identify heterogeneity in the development of body fatness and obesity across the life course, and found that BMI trajectories associated with cardio-metabolic risk factors. In the National Longitudinal Study of Adolescent Health study, Attard et al. found that for adults at equivalent BMI, odds of diabetes, hypertension and inflammation differed according to BMI trajectories from adolescence to adulthood. 5 Thompson et al. found that those with higher, steeper weight gains were more likely to have elevated high-sensitivity C-reactive protein regardless of initial weight across sex and age strata. The authors also found that steeper weight gains at younger ages (aged 18 to 30) may be riskier for the development of inflammation than weight gain at older ages (aged 31 to 66) in men. 6 Four BMI trajectory patterns from ages 18 to 49 were identified in the National Longitudinal Survey of Youth 1979 Cohort, in which higher BMI trajectories were associated with elevated risk of adverse health outcomes, measured through a self-rated health survey. 18 Consistent to our results, previous studies also found that participants being blacks, with lower socio-economic status and higher blood pressure had higher odds of being in the groups with faster increase of BMI, 18,19 suggesting that the identified BMI trajectories to some extent do capture the population with varied cardiovascular risk exposures. The Amsterdam Growth and Health Longitudinal Study demonstrated that the "progressively overweight" BMI trajectory, measured from adolescence into adulthood, was associated with higher arterial pressure and lower HDL cholesterol at age 42. 20 In this study, we confirmed that the differences of the BMI trajectories could start in early life, and these differences were associated with the increased risk of subclinical CVD, partially independent of the baseline level of BMI. More interestingly, individuals in the moderate increasing group had significantly increased risk of higher LVMI, indicating that regular screening and monitoring trajectories of BMI in children with relatively normal weight may assist in a more accurate identification of individuals with higher cardiovascular risk in early life. A major strength of the present study is that it involves up to 12 BMI measurements (median: 6, range: 3-12) in childhood, gives us a unique opportunity to examine the childhood trajectory patters of BMI. However, several limitations need to be noted. First, our cohort only includes European and African Americans. The trajectory groups identified may not be generalizable to other populations. In addition, not all participants who were used to identify the BMI trajectories have IMT and LVMI information available. However, we repeated the analysis using the participants with available IMT or LVMI data only. We identified similar childhood BMI trajectories, and the same associations of BMI trajectories with IMT/LVMI. More studies with large sample sizes should be performed to confirm our results.
In conclusion, our study confirmed that subgroups with different BMI trajectories exist in youth population, and found that trajectories of BMI in childhood were significant predictors of subclinical CVD. Our data suggested that regular screening and monitoring trajectories of BMI from childhood may assist in a more accurate identification of individuals with higher cardiovascular risk in early life.
Figure.
Trajectory groups identified for body mass index in childhood Their patterns by age, the number and percentage were shown for each group. The mean levels of IMT and LVMI in adulthood were also shown for each group. Dash lines are 95% confidence interval lines. IMT = intima-media thickness, LVMI = left ventricular mass index | 2018-05-31T17:40:47.468Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "567023e101150a5f00152c0a2614de083453cab6",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc5886821?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "fe64629527ee554087ca7ef82a218ef45b6267ab",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
216500791 | pes2o/s2orc | v3-fos-license | Learning to Rank Music Tracks Using Triplet Loss
Most music streaming services rely on automatic recommendation algorithms to exploit their large music catalogs. These algorithms aim at retrieving a ranked list of music tracks based on their similarity with a target music track. In this work, we propose a method for direct recommendation based on the audio content without explicitly tagging the music tracks. To that aim, we propose several strategies to perform triplet mining from ranked lists. We train a Convolutional Neural Network to learn the similarity via triplet loss. These different strategies are compared and validated on a large-scale experiment against an auto-tagging based approach. The results obtained highlight the efficiency of our system, especially when associated with an Auto-pooling layer.
INTRODUCTION
Many domains, such as music streaming services, make use of large music catalogs. To organize these tracks, it is necessary to provide efficient retrieval mechanisms. While browsing by tags (genre, mood, instrumentation) can be efficient for small-scale catalogs, it does not provide an efficient retrieval mechanism at scale. This is why most music streaming services rely on music recommendation. For this purpose, an algorithm is used to retrieve a ranked list of music tracks based on their similarity with a target music track. Provided a similarity metric is defined, the music ranking problem reduces to a music similarity problem. This task has been the subject of many publications (see [1] for an overview) and can be tackled in different ways.
In Collaborative filtering recommendation, two music tracks can be considered similar if they are listened to by the same audience [2] [3]. Obviously, if no one has ever listen to a music track (for example a new track in the catalog), it can not be recommended. This is known as the cold start problem [4]. However, when applicable, collaborative filtering has proved to provide very good results for recommendation.
In Tag-based recommendation, a tag-based similarity measure can be designed to compare music tracks based on their respective tags. Tags can be manually annotated (such as in Pandora [5]), crowd-sourced (such as in Last.fm), or automatically inferred from the audio content (auto-tagging case [6]).
In Direct recommendation, it is possible to compute directly a distance between two music tracks based on their audio similarity. For example, in one of the pioneer works [7], track MFCCs were represented by a Gaussian Mixture Model and a Kullback-Leibler divergence was used to compare two tracks. Since these methods use a costly pairwise comparison, they cannot scale to large catalogs.
In this work, we propose a method for direct recommendation based on the audio content. To this aim, we suppose we have access to a large music catalog professionally annotated with tags. We also assume we are given a function S which allows to compute a similarity score between two sets of tags. For a given target track, S allows us to retrieve similar tracks based on their tags. Our goal is to reproduce the similarity ranking given by S without explicitly tagging the tracks. We denote our approximation byŜ. For a given target track,Ŝ allows us to retrieve similar tracks based on audio directly. This allows to skip the long and expensive manual tagging step. In this work, we tackle this task using deep learning. We train a Convolutional Neural Network (CNN) with triplet loss to learn a projection of the audio signal such that the proximity between two projected music tracks accounts for their similarity S. We compare this similarity to the one obtained by a CNN trained to estimate automatically the related tags which are then used in S.
Paper contributions. The three main contributions of our work are the following. First, we propose several strategies to perform triplet mining from ranked lists. This allows to apply a triplet loss to the relative music similarity problem. Second, we compare and validate these different strategies on a large-scale experiment against the auto-tagging based approach. Third, we demonstrate the efficacy of the recently proposed Auto-pooling layer [8] for a music task.
Paper organization. In section 2, we review works related to ours. In section 3, we describe our proposed method to mine triplets from ranked lists. In section 4, we evaluate the proposed approach, along with an auto-tagger baseline, on a music retrieval task. In section 5, we draw the conclusions of our results.
Music auto-tagging
In Music Information Retrieval (MIR), we call music auto-tagging the task of predicting tags directly from audio signals. Tags are keywords that describe a music track in terms of genre, mood, instrumentation or any other high-level attributes.
Traditional approaches for music auto-tagging rely on the extraction of handcrafted features to feed a classifier (linear or nonlinear) [9] [10]. Recently, deep learning approaches have allowed to learn the features directly from the data (waveforms or spectrograms), leading to improved performances [11] [12] [13] [6]. Some of these systems have proven their capacity to learn useful information from audio [11] [14]. Therefore, we take inspiration from those for our CNN architecture, but we use this CNN to learn music similarity instead of tags.
Learning to rank
The task of retrieving items in a collection and to sort them by relevance arises in a variety of domains. In early works, Weston [15] and Usunier [16] proposed loss functions capable of optimizing the top of ranked lists in matrix factorization and information retrieval contexts. Another common approach is to learn a similarity notion from classes. In this case, it is assumed that the similarity of items within a same class should be higher than the similarity of items from different classes. As a result, the model's recommendations are associated to a binary relevance score: the recommended item does or does not belong to the expected class. Such problems have been studied for example by Weinberger and Saul [17] and Hoffer and Ailon [18], who employed CNNs with innovative loss functions to perform higher-accuracy image classification. Today, a widely used loss for similarity learning is the triplet loss, as introduced by [17] and used by Schroff and Philbin [19] for face recognition. In their work, they use the triplet loss to force the CNN to learn a projection of the image data such that the projections of images of the same person will be pulled together, and the ones from different persons will be pushed apart. The distance employed to compare the projections of the data is usually the Euclidean distance, squared Euclidean distance, or the cosine distance [20].
In our problem (learning a similarity from ranked lists), there are however no classes, nor binary relevance labels for each querydocument pair. Such a problem has already been addressed outside the music case. For example, Mcfee and Lanckriet [21] propose to use a listwise loss function to learn text recommendation from ranked lists. Wang et al. [22] propose to use the triplet loss to learn a ranking of similar images. Our proposal takes inspiration from the work of [22] but for the music case.
Music similarity
The literature on music similarity is vast (see Wolff and Weyde [23] for an overview). For example, Slaney et al. [24] propose a set of linear transforms to embed tracks into an Euclidean metric space and evaluate them on a nearest neighbor task for album, artist and blog recognition. Tag-based approaches to metric learning include a method by Weston et. al. [25] to project both audio features and music tags into a shared embedding space. Wolf and Weyde [23] insist on modeling relative music similarity (rankings) rather than absolute similarity ratings, in order to avoid consistency issues due to subjective user ratings. Following this idea, Lu et al. [26] employ a CNN with an improved triplet loss to predict relative music similarity. However [26] do not propose any mining strategies from ranked lists 1 . Our work takes inspiration from [26] but proposes a mining strategy from ranked lists.
Problem definition
Let D = {t1, . . . , tN } be a set of N tracks annotated with a taxonomy of m tags. The problem addressed in this study is the following: given a query track t, compute a ranked list of tracks from the dataset D ordered by descending similarity to t. Let S be an oracle similarity function that, given two sets of tag likelihoods t1 ∈ [0, 1] m and t2 ∈ [0, 1] m , returns a similarity score S(t1, t2) ∈ R+.
For any t ∈ D, let R S (t) = [r1(t), . . . , rN−1(t)] be the ordered list of the other tracks of D ranked by decreasing similarity according to the function S. This is the ground truth, against which our system will be evaluated. For any t ∈ D, let RŜ (t) = [r1(t), . . . ,rN−1(t)] be the estimated list of recommended tracks made by the system for the target track t.
In this paper, we formulate the music ranking problem as a nearest neighbor search problem in a d-dimensional Euclidean space. A model is trained to define a specific embedding space (a projection of the data) in which the Euclidean distance allows to retrieve tracks with the same ranking as with the oracle similarity function S.
Mining triplets from ranked lists
We denote by f (t) ∈ R d the embedding of the track t. f is obtained by training a CNN using a triplet loss.
This loss takes as input a triplet of tracks that consists of an anchor a, a positive example p and a negative example n. The CNN outputs the embedding vectors of those, f(a), f(p) and f(n) respectively. The triplet is created such that the positive is more similar to the anchor than the negative according to the ground truth S. The triplet loss then compares the squared Euclidean distances between the three embedding vectors and ensures that the same condition is respected in the embedding space: In this expression, α ∈ R * + is a margin parameter that enforces a minimal distance between the positive and negative pairs.
To train such a system, it is necessary to prepare the data in the form of triplets (a, p, n). Let t be a target track and R S (t) = [r1(t), . . . , rN−1(t)] the associated reference ranking. We define training triplets by using the track t as the anchor and by mining a positive and a negative element from R S (t). A triplet is considered valid if the index of the positive element is lower than the one of the negative element: (a, p, n) = (t, ri(t), rj(t)) ∀i < j.
In practice, for large datasets, it is infeasible to use all valid triplets for training, because their number grows cubically with N . Additionally, all triplets may not be useful for training. Figure 1 (top) shows an illustration of the average similarity scores [S(t, r1(t)), . . . , S(t, rN−1(t))] for each track t ∈ D. The curve shows that a few first tracks in the ranking are very similar to their target, while most tracks in the dataset are actually irrelevant: their similarity score is lower than 50%. Therefore, after a certain rank in R S (t), mining positive samples does not make sense.
Given these observations, we limit the set of possible positives per anchor to the Np first elements of the reference ranking: p ∈ [r1(t), . . . , rN p (t)]. We also limit the overall number of negatives per anchor-positive pair to Nn. Then, to select the Nn negative examples for a given anchor-positive pair (t, ri(t)), three different strategies are tested (see Figure 1 + 1)). • Distance-based (inspired by [28] and [22]): The negatives are sampled among the full R S (t) list after the positive with a probability that is proportional to its similarity with the anchor: n ∈ {rj(t)}j>i s.t. P (n) ∝ S(t, n). This way, in all three variants, the set of triplets can be preselected offline and we do not mine triplets during training. The resulting total number of triplets is N × Np × Nn.
Auto-pooling
In usual CNNs applied to audio signals, the time dimension of the audio signal is progressively discarded by a succession of max- pooling layers. This implies a strong assumption related to how information over time is processed: we only keep the maximum activation over successive time frames. Other choices have been made in the past such as the combination of Mean, Max and L2 Pooling [29]. A very elegant formulation has been proposed by McFee et al. [8] with the Auto-Pooling layer, which allows to interpolate between several pooling operators (such as min-, max-, and average-pooling) via a learned parameter. Auto-pooling has provided very good results for an audio event detection task. To our knowledge, it has not been used for music-related tasks. We do this here for the task of music similarity.
System architectures
In the following, the input representation used for all our architectures is the Constant-Q transform (CQT). For each track, we compute a CQT of 96 bins (12 bins/octave) with fmin=32.70 Hz, and a hop size of 1024 at 44.1 kHz. We then convert it to power amplitude and log-scale the magnitudes. The input of our CNN is a patch of 512 CQT frames (11.88s). This duration was chosen as a good compromise between memory efficiency and sufficient musical context. Since the annotations of our dataset are at the track level (see Section 4.1), we randomly select several of these (96×512) patches to represent a given track and assume that the annotations apply to each patch.
At test time, each track is represented by 8 randomly selected patches. When the network is used for auto-tagging, we pass each of them through our network to obtain the estimated likelihood vector. We then simply use the average vector over the 8 tag estimated likelihood vectors. When the network is used for embedding, we compute the average embedding vector over the 8 embedding vectors.
All models have been implemented using Keras with an Adam optimizer, a batch size of 42 patches and early stopping. proach is therefore to automatically estimate these tag likelihoods from the audio and then apply S directly to the estimated likelihood vectors. Auto-tagging is a multi-label classification problem (output activations are sigmoids, loss defined as the sum of binary cross-entropies). Preliminary experiments have shown that VGGlike architectures [6] were more suited to our dataset than musicallymotivated architectures [13]. Thus, we reproduce the FCN-5 architecture proposed by Choi et al. [6]. We adapt it to the shape of our inputs (96×512) and outputs (m tags). While the ground-truth annotations are likelihoods in [0, 1] m , we train the system with binarized outputs {0, 1} m . This system is referred to as AT Baseline in the rest of the paper. We give its architecture in Table 1, column 1 and provide the details (dropout, activations) in our code. We train it with a learning rate of 10 −4 .
Triplet loss system (TL):
Our objective here is to estimate directly an embedding such that applying the Euclidean distance between the embeddings of two tracks t1, t2 mimics S(t1, t2). The network we use to compute the embedding is similar to the AT Baseline one, but it differs in the output layer. The last fully-connected layer has now d units (the dimension of the embedding space) instead of the m units, and has linear activations instead of m sigmoid activations (see Table 1, column 2). After a short grid search in a pilot experiment, we set d to 128. The embeddings are L2-normalized to the unit sphere. The margin parameter α of the triplet loss is set to 0.5 and the learning rate to 10 −6 . Each mini-batch contains 42 triplets of patches. A given mini-batch represents one anchor track, one positive track and 42 negative tracks. Patches from these tracks are then randomly selected.
In the rest of the paper we denote this network as TL . We test it with the three sampling strategies presented in 3.2: Neighbors, Random uniform and Distance-based with Np=15 and Nn=250.
Triplet loss system with Auto-pooling (TL Autopool): In the AT Baseline and TL networks, the time dimension is progressively removed by a succession of max-pooling layers. We test here the use of the Auto-Pooling layer proposed by McFee et al. [8]. Two main adaptations were necessary to use Auto-pooling in our setup. First, the max-pooling sizes of the TL network need to be adapted to carry some temporal information until the last layer. Second, the number of filters needs to be divided by two due to GPU memory constraints. This network is refered to as TL Autopool in the rest of the paper (see Table 1, column 3). As for the TL network, we train it to output embeddings using the triplet loss. In the following, TL Autopool will only be tested with the Distance-based mining strategy.
Dataset
To test our proposal, we need a dataset for which tags have been annotated at the track level and a similarity metric has been designed. Such datasets exist in streaming services such as Pandora. In our case, we use an extract of N = 14, 246 tracks from the catalog of Creaminal, a music supervision company. This dataset is private and cannot be shared but the proposals made here are not specific to this dataset and can be applied to other ones.
Each track has a duration comprised between 45s and 5 minutes, and is sampled at 44100 Hz. The taxonomy used for this dataset is made of m = 488 tags, organized in 5 categories: Genre (e.g. Blues, Reggae, Electro-funk, Japanese Pop), Recording (e.g. Acoustic, Saturated, Guitar bass), Mood (e.g. Epic, Dancing, Nostalgic), Movement (e.g. Acceleration, Repetitive), and Lyrics (e.g. Death, Freedom, Nature). Each track was professionally annotated with a number of tags comprised between 5 and 35, the average number of tags per track being 16.8. The dataset features a majority of Pop tracks, along with Electro, Rock, Country and Movie soundtracks. The function S is also specific to this dataset and relies on a nonlinear combination of weighted tags. For our experiments, we split the dataset into a training, validation and test sets (60%, 20% and 20% respectively) 2 . The distribution of tags is approximately the same in training and test.
Evaluation metrics
To evaluate our systems, we used each track of the test set as a query and ask the systems to rank all the other tracks of the test set by decreasing similarity with the query.
Let R S k (t) = [r1(t), . . . , rr(k)] be the list R S (t), truncated at rank k. Without loss of generality, we consider here that the relevant tracks to recommend for a given test query t are the five first tracks of the ranking: R S 5 (t). Thus, for each target track t, we ask our system to retrieve the 5 relevant ground truth recommendations R S 5 (t) among its k estimated recommendation RŜ k (t). Here we set k to 20. We then use four of the evaluation metrics proposed by [1]: Since we consider only the top k, the reciprocal rank is set to 0 if the rank is higher than k; • Normalized Discounted Cumulative Gain (nDCG): this metric allows to have a relevance scale instead of binary relevance judgments (e.g., recommending r1(t) will produce a higher score than recommending r5(t) at the same rank in RŜ k (t)).
Results and discussion
In We first observe that the AT baseline is outperformed by all TL systems on all metrics. This shows that in our case, learning the ranking directly is more efficient than learning the tags and applying S to their estimates. For information, the AT Baseline system (which replicates [6] architecture) achieves a mean-over-tag AUC of 0.79 on its auto-tagging task.
We then see that among the various mining strategies of the TL systems, the Distance-based negative sampling performs best on all metrics. It should be noted that the Distance-based negative sampling was initially proposed by [28] which uses the learned embeddings to compute the distance online. In our case, the distance can be calculated offline since we use the ground truth S instead of the embeddings for the distance computation. The Neighbors and Random uniform sampling strategies have similar performances, but the first has a better recall while the latter has a better reciprocal rank.
The last row of Table 2 indicates the results of the TL Autopool system (with Distance-based mining). We observe a boost in performances due to the added flexibility of Auto-pooling. This system is able to retrieve one of the 5 relevant tracks with almost a probability of 1/5 (Recall@20 = 17.74). Note that in our case, neither the query nor the reference tracks have been seen during training. The first relevant track is on average at rank 5.6 (inverse of RR@20=24.68), among approximately 2,900 test tracks. This makes our system a promising approach to efficient music retrieval in large datasets. Additionally, an informal listening test reveals that some of the recommended tracks, although judged "irrelevant" by our evaluation system, actually share important characteristics with the target.
Reproducibility: Although we cannot distribute our private dataset and its oracle similarity function, to allow reproducibility of our work we provide the architecture and experimental code 3 .
CONCLUSION AND PERSPECTIVES
We propose here a method to learn a similarity ranking using a triplet loss network and a dataset of reference rankings. We show that using the triplet loss to learn the ranking gives better results than learning the tags used by the ground truth similarity function. This result is consistent across all our metrics. Finally, we show that Autopooling, proposed in the framework of audio event detection, also allows improvement in the case of music similarity.
Future works will focus on improving the training efficiency by mining on the fly useful triplets [20] [30] [31]. | 2020-04-16T09:12:28.173Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "bde85fd2655b237219c8d3aff53c4c5695b2ca52",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2005.12977",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "eac5d4f3ba189924a0f6d45a8986fca1ec13cc00",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
211071956 | pes2o/s2orc | v3-fos-license | Alcohol consumption in later life and reaching longevity: the Netherlands Cohort Study
Abstract Background whether light-to-moderate alcohol intake is related to reduced mortality remains a subject of intense research and controversy. There are very few studies available on alcohol and reaching longevity. Methods we investigated the relationship of alcohol drinking characteristics with the probability to reach 90 years of age. Analyses were conducted using data from the Netherlands Cohort Study. Participants born in 1916–1917 (n = 7,807) completed a questionnaire in 1986 (age 68–70 years) and were followed up for vital status until the age of 90 years (2006–07). Multivariable Cox regression analyses with fixed follow-up time were based on 5,479 participants with complete data to calculate risk ratios (RRs) of reaching longevity (age 90 years). Results we found statistically significant positive associations between baseline alcohol intake and the probability of reaching 90 years in both men and women. Overall, the highest probability of reaching 90 was found in those consuming 5– < 15 g/d alcohol, with RR = 1.36 (95% CI, 1.20–1.55) when compared with abstainers. The exposure-response relationship was significantly non-linear in women, but not in men. Wine intake was positively associated with longevity (notably in women), whereas liquor was positively associated with longevity in men and inversely in women. Binge drinking pointed towards an inverse relationship with longevity. Alcohol intake was associated with longevity in those without and with a history of selected diseases. Conclusions the highest probability of reaching 90 years was found for those drinking 5– < 15 g alcohol/day. Although not significant, the risk estimates also indicate to avoid binge drinking.
Introduction
Whether light-to-moderate alcohol intake is related to reduced mortality remains a subject of intense research and controversy, e.g. [1,2]. Whereas alcohol consumption has been studied frequently in relation to mortality (especially CVD), the findings were inconsistent. Many studies have reported J-shaped curves relating alcohol to mortality, suggesting the lowest risk for light-moderate drinkers [2][3][4][5], while others found non-significant associations or linear associations [1,6,7]. Many early cohort studies may have suffered from 'abstainer bias' where ex-drinkers are misclassified as abstainers and related inclusion of subjects with chronic diseases (sick quitters), and limited confounder adjustment [5,6,8]. A recent meta-analysis addressing these issues [6] found no protective effect of low-moderate drinking in the subset of studies that controlled for these biases, but this selection was criticized [9]. While mortality studies investigate risk factors for premature death (i.e. earlier than average), longevity studies investigate determinants of attaining exceptionally high ages (exceeding life expectancy). The relationship between alcohol and longevity has been investigated rarely, with survival cut-off ages of 85 [10,11] or younger [12] in early cohort studies, and 90 in recent studies [13,14]. Furthermore, most studies involved men only [10,11,13], did not exclude ex-drinkers and results were inconsistent.
We investigated the relationship between habitual alcohol intake in later life and the probability of reaching 90 years in men and women (because alcohol affects women differently from men [15]), within the Netherlands Cohort Study (NLCS). Given the controversies surrounding lightto-moderate alcohol intake and mortality, we concentrated on this category in dose-response modelling. We also aimed to investigate beverage types, stability of drinking over time and effect of excluding ex-drinkers, and binge drinking, because these factors were important in mortality studies.
Study design and population
For this study, data from the ongoing NLCS were used. The NLCS started in September 1986 as a large population-based prospective study, with detailed information on baseline alcohol use and many confounders available from men and women [16,17]. Eligible subjects were men and women living in 204 Dutch municipalities, aged 55-70 years at cohort baseline (1986). NLCS-participants born in 1916-1917 were selected to form the longevity cohort for the current analyses (i.e. aged 68-70 at baseline), because younger birth cohorts could not have reached age 90 at the end of follow-up [14,18]. Vital status follow-up consisted of record linkage to the Central Bureau for Genealogy and to municipal population registries from 1986 to 2007, yielding exact dates of death. Vital status follow-up of the longevity cohort until age 90 (2006-07) was 99.9% complete; seven participants were lost to follow-up due to migration. The resulting study population consisted of 3,646 men and 4,161 women (Appendix- Figure 1).
Exposure assessment
The 11-page baseline questionnaire measured dietary intake, detailed information on lifestyle factors and medical conditions [16]. Habitual consumption of food and (alcoholic) beverages during the year preceding baseline was assessed using a semi-quantitative food-frequency questionnaire (FFQ), which was validated against a 9-day diet record [19].
Consumption of alcoholic beverages was addressed by questions on beer, red wine, white wine, sherry and other fortified wines, liqueur types containing on average 16% ethanol, and (Dutch) gin, brandy and whiskey. Respondents who consumed alcoholic beverages less than once a month were considered non-users. Four items from the questionnaire (i.e. red wine, white wine, sherry and liqueur) were combined into one wine variable, since these items were substantially correlated [20]. Mean daily alcohol consumption was calculated using the Dutch food composition table [21]. The FFQ has been validated and tested for reproducibility [19,22]. For mean daily ethanol intake, Spearman correlation coefficients between the 9-day diet record and the questionnaire were 0.89 for all subjects and 0.85 for alcohol users [19]. The absolute amount of ethanol reported in the questionnaire by alcohol users was, on average, 86% of that reported in the 9-day diet record [19].
The baseline questionnaire also asked about the usual pattern of drinking alcoholic beverages (parties only/weekend and parties/throughout week). To measure binge drinking, subjects were asked how often they drank more than six alcoholic drinks per occasion during the half year preceding baseline. Finally, a question provided information on the subjects' drinking habits 5 years before baseline (Appendix Methods). Ex-drinkers were defined as participants who were not drinking alcohol at baseline, but who drank alcoholic beverages 5 years before baseline.
Statistical analyses
Subjects with missing data on alcohol and confounding variables were excluded. The associations of alcohol consumption, alcoholic beverages and drinking characteristics with the probability of reaching 90 years (longevity) were estimated in age(sex) and multivariable-adjusted analyses using Cox regression models with a fixed follow-up time [18,23], in categorical and continuous exposure analyses, correcting for potential confounders (related to longevity and alcohol (see footnotes in Tables)). Standard errors were calculated using the Huber-White sandwich estimator [24]. Ex-drinkers were excluded from the main analyses to avoid misclassification of ex-drinkers as abstainers. Beveragespecific analyses for beer, wine and liquor were additionally mutually adjusted to evaluate the association of each beverage with longevity independently of other alcoholic beverages. Analyses of the effect of pattern of drinking, and Alcohol consumption in later life and reaching longevity: the Netherlands Cohort Study binge drinking, were additionally adjusted for total intake of alcoholic beverages.
Tests for trends were assessed using Wald tests, by fitting median values of intake per intake category as continuous terms. Restricted cubic spline regression analyses using four knots (at the midpoints of the categories used in categorical analyses) and Wald test were performed to test for nonlinearity. We conducted sensitivity analyses, by restricting analyses to participants who reported to have had the same alcohol intake 5 years before baseline, including abstainers on both occasions (i.e. the stable subgroup). To evaluate potential residual confounding by other risk factors, and effect modification, analyses of alcohol and longevity were also conducted within strata of covariables. Interactions were tested using Wald tests and cross-product terms. Analyses were performed using Stata 14; presented P-values are twosided.
The NLCS study was approved by the Medisch-Ethische Toetsinngscommissie (METC), Maastricht University Medical Centre, Maastricht, the Netherlands.
Results
Amongst the 2,591 men, 433 (16.7%) survived until 90 years, and there were 994 survivors (34.4%) amongst the 2,888 women. In the total group, 40 men and 32 women were ex-drinkers. When excluding ex-drinkers, the proportion of alcohol abstainers was higher amongst non-survivors than survivors in both men (15.6% versus 10.6%) and women (37.4% and 30.1%). Amongst male alcohol consumers, mean intake (SD) was 16.5 (15.8) g/day in non-survivors and 15.9 (14.9) g/day in survivors. For women, these numbers were 8.0 (10.5) and 7.2 (9.0) g/day, respectively. Appendix Table 1 also shows these comparisons for beverage types (glasses/week), pattern of drinking, stable drinking and binge drinking. The proportion of binge drinkers was higher amongst non-survivors than survivors, and higher in men: 18.5% versus 14.2% in men, and 6.1% versus 4.0% in women, respectively. Alcohol consumption was positively associated with smoking, educational level and energy intake in both sexes, with physical activity in women, and with BMI and height in men (Appendix Table 2). There was no clear association with history of selected diseases. Ex-drinkers more often had a history of selected diseases than those in other drinking categories. Excluded subjects with missings had a lower likelihood of reaching 90, were less often smokers and less highly educated (Appendix Table 3).
Alcohol intake was positively associated with the probability of reaching 90 years in men and women in multivariable-adjusted analyses (Table 1). In analyses of men and women combined, those drinking 5-< 10 g alcohol/day had a RR of 1.41 (95%CI, 1.21-1.63) of reaching 90, compared to abstainers. This probability remained elevated at higher alcohol intake levels (P-trend = 0.014). Ex-drinkers had a decreased probability of reaching 90, when compared to abstainers. Ex-drinkers were excluded from subsequent analyses. When alcohol was analysed as continuous variable, the RR per increment of 10 g/d was 1.05 (95%CI 1.01-1.09). In analyses limited to the stable subgroup, similar associations were seen as in the overall group. There was no statistically significant interaction between men and women (P = 0.168). However, the estimated associations showed differences: whereas in men the probability of reaching 90 remained elevated at higher alcohol consumption levels (e.g. RR = 1.64 (1.15-2.34) for men drinking 30+ g/day compared to abstainers), this was not seen in women with RR = 0.99 (0.69-1.44). This difference in dose-response was also noticed in restricted cubic splines analyses, where a significantly non-linear relationship was observed in women (P for non-linearity = 0.004), but not in men ( Figure 1). We therefore continued with sex-specific analyses.
In beverage-specific analyses, we found no association with beer intake ( Table 2). Wine intake was associated with higher chances of reaching 90 amongst women, with RRs of 1.43 (95%CI 1.21-1.68) and 1.35 (1.14-1.59) for women drinking 3.5-< 7 and 7+ glasses/week, respectively, when compared to non-drinkers of wine (Ptrend <0.001, and P-trend = 0.049 amongst wine drinkers). For men, the weakly positive associations with wine were non-significant. Liquor intake was significantly positively associated with longevity amongst men in several drinking categories compared to non-drinkers of liquor, but the trend test and continuous analyses were not significant. In women, however, higher liquor intake was inversely associated with longevity (P-trend = 0.044, and P-trend = 0.018 amongst liquor drinkers).
There was no significant association with pattern of drinking (Appendix Table 4). Although binge drinkers seemed to have a lower probability of reaching 90 than non-binge drinkers, especially in women, the multivariable-adjusted associations were non-significant. This may be due to the small proportion of binge drinking women. When binge drinking was further categorized according to frequency, lower chances of longevity were found in more frequently binge drinking men, but the trend test was not significant.
In subgroup analyses of alcohol and longevity, categorical (or continuous) alcohol intake showed no significant interactions with smoking status, BMI, physical activity, level of education or history of diseases at baseline (Appendix Table 5). Significant associations between alcohol and probability of reaching 90 were seen in many subgroups, including never and current smokers, and those with or without a history of selected diseases. The highest RRs were generally observed in those drinking 5-< 15 g/day.
Discussion
In this large prospective study, we found statistically significant positive associations between alcohol intake and the probability of reaching 90 years in both men and women. Overall, the highest probability was found in those consuming 5-< 15 g/d alcohol, which corresponds to 0.5-1.5 glass of alcoholic beverage per day. The exposure-response relationship was significantly non-linear in women, but not in men. Whereas the probability of longevity was decreasing in women with alcohol intakes above 15 g/d, it remained elevated at higher alcohol consumption levels in men. In beverage-specific analyses, wine intake was positively associated with longevity (notably in women), whereas liquor was positively associated with longevity in men and inversely in women. Binge drinking was not significantly associated with longevity, but the risk estimates indicate to avoid binge drinking. In subgroup analyses, alcohol intake was associated with longevity in those with or without a history of selected diseases. Previous prospective studies on longevity from the US and France that reported on alcohol were rather limited (no alcohol focus) and found no significant associations using longevity cut-offs of 75 [12] and 90 years [13,25]. However, higher alcohol intakes were seen in survivors compared to non-survivors [25], and in subsequent analyses (85+ years) of the Framingham Heart Study [26]. The Physicians Health Study amongst US male physicians (survival cut-off 90) reported small and non-significantly increased chances of longevity for various drinking categories compared to rarely/never alcohol drinkers, with no dose-response relationship [13]. The association between alcohol drinking and longevity was studied twice in the Honolulu Heart Program (HHP) amongst Japanese-American men using 85 years as longevity cut-off [10,11]. Heavy alcohol intake, measured at baseline age 45-68 years, was significantly inversely related to longevity (OR = 0.63, for 3+ drinks/day versus drinking less) [10]. In the second analysis, moderate-heavy alcohol intake around 75 years was also significantly inversely related to longevity (OR = 0.66, for drinking >14.5 g/day versus less) [11]. The fact that the HHP study was conducted amongst men of Japanese ancestry may (partly) explain the more negative association of alcohol with longevity, and suggests a potential mechanism. It is known that East Asians are less efficient alcohol metabolizers due to a common lossof-function variant of the ALDH2-gene, which decreases breakdown of acetaldehyde, the first, toxic alcohol metabolite [27]. It could be that those who nevertheless drink experience a higher mortality risk.
Overall, the results of previous longevity studies seem quite limited. Our detailed analyses show significantly positive associations between alcohol and longevity in both men and women, which is in agreement with the PHS [13]. Overall in men and women combined in the NLCS, the highest probability of reaching 90 was found in those consuming 5-< 15 g/d alcohol, with a HR of 1.36 compared to abstainers. Women experience higher blood alcohol concentrations than men of similar weight due to lower total body water [15]. Thus, adverse effects of higher alcohol intakes may appear earlier in women. This might explain the non-linear exposure-response relationship in women and not in men. We also found that wine intake was positively associated with longevity, whereas liquor was positively associated with longevity in men, and inversely in women. Before speculating on reasons for these beverage differences, future longevity studies are needed to replicate these sexspecific findings, with those on pattern and binge drinking. In mortality studies, there was no clear indication for sex differences [2,5], and although beneficial associations with wine have been described for mortality, e.g. [2], this topic remains controversial.
As in observational studies on alcohol and mortality [1,2,8], studies on alcohol and longevity may be hampered by possible biases (selection and residual confounding biases). Here, selection bias can refer to abstainer bias (when the reference category of non-drinkers also includes sick quitters), the healthy drinker/survivor bias (when cohorts of older participants may be overrepresented by healthier drinkers who may have survived adverse effects of alcohol). Reverse causation may occur because health status may influence alcohol drinking [8], which could be addressed by restricting analyses to healthy people at baseline. Incomplete adjustment for confounding factors may lead to residual confounding. In our longevity analysis, we tried to address these possible biases by: (i) excluding ex-drinkers from the reference category; (ii) limiting analyses to stable drinkers and abstainers by taking alcohol consumption 5 years before baseline into account; (iii) restricting analyses to participants without prevalent diseases and (iv) adjusting for a large range of possible confounders with detailed information. These analysis strategies do not necessarily provide a full remedy against all possible biases [8], but these were the possibilities with the available data from our cohort. For example, we had no information on lifetime alcohol consumption or consumption on various ages during lifetime, so our analysis of past consumption was limited. After excluding exdrinkers from the reference category, the analyses in the stable subgroup were essentially similar to what was seen overall. We also found that alcohol intake was associated with longevity in the subgroup without a history of selected diseases. Still, other diseases might have affected alcohol use or longevity. Residual confounding by socioeconomic status is also possible, because we only controlled for educational level.
It should be noted that the percentages of never drinkers were relatively high in the NLCS: 15% in men and 35% in women, making this common behaviour a logical reference category. These percentages were substantially higher than in other cohorts, e.g. 8% in male and 16% in female PLCOparticipants [2], and 6% in male and 16% in female EPICparticipants [28]. Strengths of the NLCS are the prospective design and high completeness of follow-up, making information bias and selection bias due to differential followup unlikely. The validation study of the food frequency questionnaire has shown that it performs relatively well with respect to alcohol [19], but measurement error may still have attenuated associations. The lack of possibilities to update alcohol intake or other lifestyle data during follow-up may have resulted in some attenuated associations too. Our study was aimed at measuring alcohol intake at 68-70 years. Therefore, our study results are limited to alcohol drinking in later life; future longevity studies preferably include lifetime consumption. The alcohol measures in our study were not aimed to get an all-encompassing indication of risky drinking, like in the Alcohol Use Disorders Identification Test/AUDIT [29]. Our cut-off for binge drinking (>6 drinks per occasion) as used in the 1980s/1990s [29,30] is somewhat higher than current cut-offs [29]. Because we were interested in the association of late life drinking with longevity, our study likely examined a resilient population that survived already until 68 years despite possible earlier risky drinking.
While older people perceive themselves as controlled responsible drinkers, according to a recent thematic synthesis of qualitative studies, they consider alcohol use often as important part of social occasions, and report that alcohol helps creating feelings of relaxation [31]. A possible beneficial effect of light-moderate alcohol intake on longevity (with inverted J-shaped dose-response on longevity) may also be related to hormesis [32,33]. With higher consumption in older people, medication may be negatively affected by alcohol, and there is decreased physiological tolerance [34].
In conclusion, in this prospective study of men and women aged 68-70 years at baseline, we found the highest probability of reaching 90 years of age for those drinking 5-< 15 g alcohol/day. This does not necessarily mean that light-to-moderate drinking improves health. The estimated RR of 1.36 implies a modest absolute increase in this probability and should not be used as motivation to start drinking if one does not drink alcoholic beverages. Although no significant association was found, the risk estimates also indicate to avoid binge drinking.
Supplementary data: Supplementary data mentioned in the text are available to subscribers in Age and Ageing online.
Declaration of conflicts of interest:
None. | 2020-01-09T09:14:54.645Z | 2020-02-09T00:00:00.000 | {
"year": 2020,
"sha1": "f4b704c379759452b661be007c77ed0ea6d6ed23",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/ageing/afaa003",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "54a7d0ae6ca6c4223098af2863d8695b1b23be7b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
228958813 | pes2o/s2orc | v3-fos-license | Theatre as a Communicative Strategy for Teaching English as a Foreign Language to Primary Education Undergraduates: A Pedagogical Experience
As language teachers we are constantly trying to establish a suitable atmosphere in which communication skills can be fully developed through activities or exercises which involve a stimulus, as a language is learnt more easily when the elements used to teach are more appealing. Theatre can perform this role as it is an educational tool that has been explored for the instructions of languages since the emergence of the Communicative Approach. The purpose of this article is twofold: firstly, to examine the effectiveness of theatre in general with reference to the numerous arguments in favour of using this didactic tool in the classroom. Secondly, to draw attention to the benefits and challenges of using theatre to teach a foreign language at university level by presenting the performance of eight plays based on popular tales in the classroom through a series of practical activities using the task-based learning methodology.
Introduction
As language teachers we are always looking for new ways to improve our classes.We try to establish a suitable atmosphere in which communication skills are fully developed through activities or exercises which involve a stimulus, since a language is learnt more easily when the elements used to teach are more appealing.Theatre 1 can perform this role as it is an educational tool that has been explored for the instructions of languages since the emergence of the Communicative Approach2 .I believe, then, that its practise in the classroom could help students learn a foreign language as active learning and teaching procedures are combined.
The use of theatre in the classroom was introduced in schools to develop different capacities: (1) acting skills; (2) improvisation; (3) natural expressivity; (4) refinement of the language; or (5) confidence increase since "it is fun and entertaining and can therefore provide motivation to learn.It can provide varied opportunities for different uses of language and because it engages feelings it can provide a rich experience of language for the participants" (Fleming, 2006, p.1). Besides, theatre offers myriad possibilities for teaching and/or learning since it is a method that can be adapted in the classroom depending on students' needs.As such, its potential value ought not to be underestimated in the English classroom.In addition, whilst traditional methods are teacher-centred, theatre can be learner-centred as it requires active cooperation; hence, roles automatically change and pupils can develop their skills in a more autonomous and active manner, so that they can "explore and play with reality" (Motos, 2017, p.345): theatrical performances are emotional, vivid, and based on experience.Theatrical practice proposes a physical and, above all, psychological space, in which to inquire, explore and play with reality.Young people can re-evaluate and relive reality from their own personal attitudes and from that of the characters in the roles they play 3 (p.345).
Furthermore, the fact that language is applied in real situations makes the process meaningful as stated by Ausubel in 1963.This scholar carried out research into meaningful learning based on the premise that a student learns better the concepts that s/he perceives as useful or important for his or 1 Despite the fact that Fleming (2006) distinguishes between theatre and drama, the former referring to works performed on stage and the latter to improvised works, in this paper, I shall employ the term "theatre" to refer to both.her development, and stores these concepts in the permanent memory; if, on the contrary, s/he does not consider it relevant, the individual will store them in the short-term memory.Theatre can perform this role because the new information can be integrated into the old knowledge structure and students use the language in a real context.Nevertheless, regardless of its great potential, there are some pre-(conditions) to be taken into account before deciding to use theatre in the classroom.Theatre -as an instructional tool-implies a series of trials, as Fleming (2006, p. 3) highlights, not only for the student but also for the instructor who needs to be willing to accept that several difficulties may arise: (1) students' potential embarrassment when acting in front of a class; (2) problems of discipline due to excitement; and (3) fluency difficulties if the student has a low command of the foreign language, which can impede the communication of the play.It is crucial that teachers take these aspects into account before being immersed into this practice so as to be prepared to address the obstacles, but this should not discourage educators from using theatre to teach a foreign language4 .Theatre´s value resides in the fact that it permits the recreation of numerous situations for different language uses (e.g.learning a foreign language) (Fleming, 2006).Traditionally, most publications on theatre as a language teaching tool are addressed to primary or secondary education, and not many are focused on university level (Giebert, 2014, p.139).In this study, I shall attempt to focus on undergraduate students who are aiming to become EFL primary school teachers in Spain.Therefore, the purpose of this article is twofold: firstly, to examine the effectiveness of theatre in general with reference to the numerous arguments in favour of using this didactic tool in the classroom.Secondly, to draw attention to the benefits and challenges of using theatre as a means to teach a foreign language at university level by presenting the implementation of eight plays based on popular tales in the classroom through a series of practical activities using task-based learning to frame the activity.
A Brief Review on Theatre
Theatre has always been present in the classroom, as stated by Maley and Duff (2005) or Fleming (2006), but became more significant during the 1970s with the advent of the Communicative Paradigm.Furthermore, Maley and Duff (2005, p. 158-159) and (Fleming, 2006, p. 3) stated that theatrical practices had been always present in education in the form of varied activities, games or simulations.Bolton (1984) and Heathcote (1984) were the first scholars to highlight the importance of the presence of theatre in the first language classroom, and encouraged teachers to incorporate it in their lessons.Following Bolton's and Heathcote initiative, second and foreign language practitioners started to develop new strategies to integrate this potential tool into their foreign language (FL) classes as they viewed that theatre could effectively tackle the four basic skills: reading, speaking, writing and listening (Dodson, 2000, p. 139;Dundar, 2013Dundar, , p. 1425;;Aldavero, 2008, p. 43).As stated by Zyoud (2010, p. 1): [it] is a powerful language teaching tool that involves all of the students interactively all of the class period (…) Teaching English as a foreign language inevitably involves a balance between receptive and productive skills; here drama can effectively deal with this requirement (…).[it] also fosters and maintains students' motivation, by providing an atmosphere which is full of fun and entertainment.
In the case of Spain, the use of theatre in the classroom was included in the Spanish curriculum in 1970 with La Ley General de Educación (LGE) 14/1970, de 6 de Agosto [the General Education Law], but it was not until the end of the 1980s and during the whole 1990s when theatre became more widespread in educational practice.Nevertheless, besides having been part of the Spanish curriculum until the implementation of La Ley Orgánica (LOE) 2/2006, de 3 de mayo, de Educación [the Organic Law of Education], theatre had not been a very popular practice in the classroom in Spain probably due to the long tradition of textual hegemony privileging philology over performance (Wheeler, 2012, p. 54).
However, the many possibilities that theatre has for the language classroom should not be disregarded.As Horace observed: -Prodesse et delectare [to instruct and to delight]-as well as a privileged form of entertainment, theatre can also function in some, but by no means all cases, as a preferred pedagogical tool due to its literary value.I suggest that instructing and delighting be linked together so that the teacher can focus on both concepts in the classroom: on the one hand, theatre can provide academic benefits as well as keeping students entertained as participants can become motivated and excited about dressing-up, acting and performing in front of an audience.Thereby, both verbal and non-verbal signals can be explored jointly.Fleming (2006), on the other hand, discussed the importance of theatrical techniques and language teaching in the classroom to promote cultural awareness (p.5).Fleming (2006) also depictures the differences between teaching drama and theatre as established in the 1980s.The main dissimilarity will rely on the fact that drama, according to Fleming (2006), refers to shorter plays and requires more improvisation, and theatre is seen as a more literary product that requires more rehearsing in order to achieve a more refined product.
With this revision on theatre, a teacher can be informed on the practicalities and rejections of using it in the classroom.For instance, theatre could be seen as a static practice, and instructors should take it into account before planning their tasks.In addition, when used in the foreign language class, the narrative or script should not be too complicated since this is likely to distract from the primary purpose of the activity (p. 6).
Why theatre? Benefits and Challenges of using theatre in the classroom
It has been proved that the most efficient way to learn a language is to do so via a real context, with everyday situations in which the learner can feel identified in order to attain meaningful learning (Ausubel, 1963).Theatre could provide us with this scenario since it does not only imply performing a play but it engages students actively in the whole process by using the language in context which, as suggested by Savignon (1991, p. 270), will mean that their communicative competence will be enhanced.Furthermore, theatre "[…] presents student language teachers with a very different pedagogical model to that which is the norm within the Modern Language (ML) classroom" (Hulse & Owens, 2019, p. 18).Through the various series of tasks, students will be able to create a fictional world in which they could use the language in settings, something that will seldom occur in the habitual development of the class.Philips (2003, p. 6) also concurs that theatre is motivating and entertaining, and it could also be added that it opens the door to the imagination.Hence, there are many advantages we could think of as regards its use as a pedagogical tool to learn a foreign language.These include but are not limited to the following ideas outlined by Wessels (1987, p.10) andPhilips (2003, p. 7): the learner gains fluency; theatre develops improvisation; promotes motivation and encouragement; improves pronunciation and intonation; increases students' self-confidence and self-esteem; encourages social competence since it is a collaborative task; allows for the acquisition of new vocabulary; facilitates the acquisition of new grammatical structures; students explores different language registers; and also his/her inhibitions decrease (Wessels, 1987, p. 10;Philips, 2003, p. 7).To this, Lavery (2009, unpaginated) adds: theatre improves reading, writing, listening and speaking skills; inspires creativity; students experiment with body language; and learners help with the writing or rewriting of a play.
Broadly speaking, participants are learning because they are using the language in context through a series of communicative exercises that result in the enhancements of the abilities presented above.Wessels (1987, p. 53-54) also explains that theatre can help students learn "by making the learning of the new language an enjoyable experience; by setting realistic targets for the students to aim for; by the creative 'slowing down' of real experience; by linking the language-learning experience with the students' own experience of life" (p.53).She adds that a theatre play can prompt the necessity to learn the language because of the creation of situations whereby an immediate solution is required, and also by delegating more responsibility onto the learner instead of the teacher (p.54).The role of the teacher will be more as a facilitator or a supporter whilst the student will take charge of his/her learning process, acting as potential teachers themselves, which will give them the opportunity for independent thinking by expressing their own thoughts and by putting them into practice.
On the other hand, the use of theatre in the classroom has also been dismissed by some teachers who are reluctant to use this practice (Royka, 2002) because they believe: (1) there is not enough time in the curriculum to carry out this task; (2) there are limited resources; (3) they do not know how to focus the activity; (4) they feel they are being unprofessional and rather follow the textbook to teach; (5) it is ludic and, thus, cannot be considered instructive; and, finally, (6) they consider that the advantages are not sufficient to justify the risk.Nevertheless, these impediments often arise when working with adults since the classroom practices of primary school teachers ensure that they are generally more willing to explore through games, drama activities, and so on since children are known to discover new things through playing, and theatre will fit in their learning development unlike some adults.Fleming (2006, p. 3) suggested a series of different exercises which could take place in the class.
Among the different exercises that could be considered, the following stand out: (1) Warm-ups and games in which they can work in pairs or in small groups; (2) Improvised role-plays which could be seen as small performances in which students can get in pairs to act out extempore; (3) Less improvised situations in which they can write short dialogues; (4) Drama plays which can be short (it can last a few minutes) or long (it can take several months); and (5) Silent activities, which could help to calm those participants who are more anxious about these kind of activities (Gregersen, 2007, p. 62-63).
In sum, teachers of a foreign language had found the inclusion of theatre practices in the classroom a beneficial pedagogical tool that can provide students with dynamic situations in which they will be able to speak freely in a real context setting.As Colangelo and Ryan (2004, p. 375-376) describe, it is through this kind of setup when learners can "truly begin to explore sociopragmatic uses of verbal as well as nonverbal language".Based on the aforementioned arguments advanced by multiple scholars as regards the use of theatre in the classroom with young learners, I decided to implement it as a methodological tool to learn a FL at university level given what I perceive to be a gap in the existing scholarship: few studies have analysed the efficiency of this potential tool to learn a FL with adult learners (Chin Su, 2014;Giebert, 2014).Additionally, I shall use task-based learning to frame the activity.Thus, in the following sections, I shall describe the didactic experience of using theatre in the classroom with university students, who will represent eight short plays in front of their peers (see 3.2).
Task-Based Learning
Task-Bask Learning or TBL focuses on the development of communicative competence through a series of activities or tasks.It constitutes a useful methodology that can guide the teacher to design a lesson that can be easily followed by the students in the classroom.It involves consideration of three different phases proposed by various scholars such as Willis, 1996;Skehan, 1998;and Ellis, 2006.These authors propose having three phases in the activities: pre-task, during the task and post-task.
During the pre-task, students are asked to carry out activities previous to the start of the exercise itself as, for example "framing of the activity" (Ellis, 2006, p. 80) such as establishing the groups, choosing the drama play, altering the script if needed, and "planning time and doing a similar-task" (p.80).This usually involves a whole-class exercise and sets the context for the activity.For the second phase or during-task phase, students focus on the activity or main task itself.It is the longest of all phases and it can be considered the backbone of all three activities.During this phase, the teacher also acts as a guide or facilitator and provides the students with feedback.Learners, on the other hand, work together in order to construct the plays in an autonomous way.Finally, for the post-task phase, students are able to consolidate on the reviewed concepts and at the same time reflect on what they have learned through a series of different exercises proposed by the teacher, which also includes a review of possible errors committed by the learners during all three phases.Throughout this phase, students move from focusing on meaning to focusing on form, which involves reviewing some grammar points.
The determination of the methodology, thus, consists of "creat[ing] opportunities for language learning and skill-development through collaborative knowledge-building" (Ellis, 2006, p. 97).
Hence, by using TBL, a natural environment in which students can play an active role in the classroom and can carry out a series of meaningful tasks is intended since Task-based language pedagogy attracts students' attention; makes them work together as part of a team; puts emphasis on meaning and forces participants into producing extended utterances -both written and speaking-in authentic situations.In this way, thus, students learn by doing and the activity is student-centred instead of the traditional teacher-centred approach as previously stated.
Methodology
In order to carry out my project, the TBL was applied through the myriad activities students have to complete prior to the performance since in TBL the focus is on communicating meaning, which is a paramount component of this exercise.Therefore, as mentioned in the introduction, the activities students have to accomplish in order to complete these eight plays are in accordance with the Task-Based Learning (TBL) model.The performance of a theatre play can be a task based on students' specific needs (e.g. based on improvements on their pronunciation, grammar or lexicon), which can provide interaction in the foreign language, and can also establish connections with real situations as articulated by Nunan (2004).
The participants were a group of 51 students (50 Spanish and 1 German) in their third year who were studying to become primary school teachers specialising in English at the Faculty of Education (University of Valencia); the project took place during the academic year 2018-2019 and lasted three months.As suggested by Fleming (2006) students' degree of fluency was considered before undertaking a project of these characteristics.In our case, students had a good command of the language and their degree of fluency in English was appropriate to carry out the task5 .Hence, after reviewing the literature on the long tradition of theatre and examining the many possibilities that this could offer, I decided to include theatre as part of the syllabus for the subject "English Language I" where learners' level oscillated between B2 and B2+.Students were asked to carry out a project based on a theatre play -worth 25% of their final mark-which they will have to modify and deliver in front of the class.
To begin with, the list of B2 competences provided in the Common European Framework of Reference (CEFR) to base the project on was consulted, and the strategies the students should develop focusing on the value of the various uses of language: ludic and aesthetic (Council of Europe 2001, p. 55-56) were also specified: Ludic Aesthetic Social language games: • oral (story with mistakes; how, when, where, etc.); • written (consequences, hangman, etc.); • audio-visual (picture lotto, snap, etc.); • board and card games (Scrabble, Lexicon, Diplomacy, etc.); • charades, miming, etc.
• listening to, reading, writing and speaking imaginative texts (stories, rhymes, etc.) including audio-visual texts, cartoons, picture stories, etc.
• the production, reception and performance of literary texts, e.g.: reading and writing texts (short stories, novels, poetry, etc.) and performing and watching/listening to recitals, drama, opera, etc).
Selecting and Implementing a Theatre Play in the Classroom: A Pedagogical Experience
The main goal in creating this project was to encourage students to learn English through real-time use by creating a setting in which language situations that will not usually happen in the classroom could take place.In this manner, as stated before, language is applied in real situations and makes the process meaningful (Ausubel, 1963).Therefore, the learner will have to go through a number of phases before the final product was completed (the pre-task phase, the during-task phase, and the post-task phase following Ellis ' (2003, 2006) model).
Procedure
In order to get started, first of all, students were given simple instructions to gain a general overview of what was expected from them as shown below in the instructions: Read, rewrite, rehearse, setup, dress-up… perform!Get in groups of 5 to 10 participants and select a short play from the list provided by the teacher which you will learn to deliver in class; you are free to make any modifications to it.Alternatively, you can create your own original play.Remember that each individual has to interact in the performance, which should not exceed 20 minutes.
Thus, secondly, the teacher showed them a variety of relevant plays in English that thought could meet their expectations, and told them that the play had to be performed in front of an audience comprising all the members of the class.The teacher oriented them, but learners had the final say.
Secondly, they were told that the performance should not exceed 20 minutes and the number of participants had to range from 5 up to10 characters per play.Thirdly, students selected their script and informed the teacher about their choice.Students were encouraged to make modifications in the play chosen, or to create an original play themselves in which they could develop ideas of their own and, thus, improve their writing skills and enhance their critical thinking.They also had to carefully choose the music and the atrezzo for their chosen works, although they were told that the main focus had to be on the academic work.In addition, the story had to be suitable for performance in front of primary school students since one of the purposes is that they use the play as a didactic tool for their lessons when they had a class of their own.The table below condenses the guidelines: Step 1: Several websites and books with theatre plays were passed in the classroom.Students were asked whether they were familiar with the selected plays and whether they have read them.
Step 2: Students decided on the members of the group (5 minimum and 10 maximum) Step 3: Selection of the play/creation of an original piece that has to last 15-20 minutes.
Step 4: Selection of music and atrezzo.Next, I shall describe the different activities, based on TBL, students have to undertake in order to complete the tasks previous to the performance of the play: a) Pre-task Once ascertained that the working structure was clear, students started working on their project.
This first two sessions were carried out in the classroom under my supervision and guidance.During this pre-task sessions, students got into groups and selected the play they wanted to perform in class and altered it when necessary.Once the script was finished, they gave it to me to correct the possible mistakes, I returned it to them with feedback.Then, students practised the performance task together, and helped each other with it.They started to rehearse.Next, they chose the music, and the atrezzo.
In addition, two more hours during the term were dedicated to answering any questions in class relating to the play, but after those sessions finished, students had to carry on preparing their work outside the classroom.In this manner, the pre-task had hybrid sessions (in class and outside class), and we could continue with regular classes without leaving questions raised by the students related to the theatre play.Additionally, the teacher was always available to deal with matters relating to the theatrical task during her office hours or via e-mail.
b) The task phase First of all, before we started, and as mentioned in the introduction, I took the challenges mentioned by Fleming (2006, p.3) into account in order to be prepared to face such problems.Overall, students responded well and could overcome those challenges with the help of the teacher and their peers.
Students were required to perform in front of the class under time pressure since a limit of 15-20 minutes to carry out the performance was set.Learners were not permitted to keep text or notes during the performance.This increased the complexity of the task, but it also encouraged them to work harder.During this phase, students performed in front of the class while their peers and teacher observed them and made a few comments/observations about a series of aspects related to grammar, pronunciation, intonation, vocabulary, creativity, acting, and atrezzo by employing an assessment rubric (see Annex A).
c) The post-task phase This final phase consisted of giving feedback and asking the performers questions taking the aforementioned matters (e.g.grammar, pronunciation, etc.) into account.Students could ask questions to each member of the group regarding any particular topic that concerned them.Participants could also see their mistakes recorded.Permission to video-record the performances for didactic purposes was necessary since, in this manner, learners were able to observe themselves and learn from their errors.In addition, students were given a questionnaire previously elaborated by the teacher which contained open inquiries with reference to the whole experience of making a theatre project in which they could highlight positive and negative aspects of carrying out a play in class (see Annex B).
Performing phase
After three months of preparation and cooperative work, students delivered their scripts in front of their classmates.They were organized in eight groups which performed parodies and/or adaptations of plays based on popular tales: Hansel and Gretel; Cinderella; Little Red Riding Hood; The Princess and the Pea; Alice in Wonderland; Goldilocks and The Three Bears; Sleeping Beauty; and The Three Little Pigs respectively.Although some groups did alter the play substantially (e.g.The Princess and the Pea; Sleeping Beauty; Cinderella, and Little Red Riding Hood) none decided to create a script of their own; nevertheless, the modifications were significant and some plays could be considered as independent entities from the original script.Performances took place during four days and all learners attended their classmates' representations willing to help them by giving them feedback or supporting them in any technical or dress-up last-minute matters that may have spontaneously arisen.
Assessment of my Didactic Experience
The data collection tools employed to assess the didactic experience relied on the use of an assessment rubric (annex A), direct observation and open questions in the form of a questionnaire (annex B).
In order to assess students' work, I designed an analytic rubric (see Annex A), which contained eight components based on the criteria I wanted to base my assessment on (e.g.voice and pronunciation, grammar and vocabulary etc.).Moreover, under each criterion I introduced a scale to judge the quality (from 1-4).The designed rubric, thus, has provided my students with clear and directed feedback that will help them improve their learning at the same time that it will orient me to give them grades for the work they have done.
The next assessment instrument was direct observation.Informal assessment through direct observation was used by monitoring the students and by taking notes about their performances and progress in class.Watching the students in the classroom and making use of this observational study method, allowed me to collect evaluative information to add to the results of the rubric and award them a final grade.
Finally, a series of questions were elaborated in the form of a questionnaire (see Annex B) that was passed on to the students at the end of all phases.The questionnaire contained 6 open questions with reference to the whole experience of making a theatre project in which they could highlight positive and negative aspects of carrying out a play in class; questions were based on the degree of engagement and usefulness of the practice.Once students handed in their responses, each item was analysed individually; the results obtained, gave us enough information to link it with the aims established.
Regarding the results, after having done all the tasks and based on the information gather from the assessment rubric, I was able to determine the following outcomes: 1) As they were told that the duration of the performance should not exceed 20 minutes, students were able to condense their scripts in a coherent and concise manner, being able to retain the most relevant details and exclude the insignificant ones.
2) This resulted in well-structured and creative descriptions that held the audience's attention.
3) Students employed relevant and often new lexis throughout the representation, which implied that their vocabulary had expanded.
4) The use of different grammar constructions was also patent in the texts they delivered and the degree of fluency increased.They felt more confident at a speaking level as they normally are in regular lessons probably because they felt secure in the class context.
5) Students' feedback to other participants was meaningful and constructive and they all took notes of what their classmates were suggesting to use it as improvement measures.
6) Finally, as a result of the above-mentioned aspects, language learning motivation had increased probably, as stated by Giebert (2014, p. 143), due to "a more (physically) active learning including the learner's whole person, an experience of collaboration, a sense of achievement and taking joy in a creative approach".
Overall, students provided positive feedback about the way the project developed and how they benefited from it.As confirmed by Hulse and Owens (2019, p. 19), "learners respond very positively to opportunities to co-create the dramatic narratives that bring these worlds into being".
Hence, based on students' responses, the following can be stated: one student mentioned that the activity was helpful because she had learnt new lexis, had been able to interact with other peers, and had experimented new experiences too.She added that, additionally, it was also "funny": "You interact with more students and learn new vocabulary.You also have to do new things that make you go beyond yourself, so there are new experiences, and of course it is funny". 6Another student mentioned that it was "funny and exiting" and that she will put it into practice in her class with her future primary school's students.One participant mentioned the fact that it was the first time that she had studied a theatre play in class in English; however, she mentioned that she had done three more in the past probably in Spanish or Catalan, her mother tongues: "I had done three plays before, but none of them in English".The answer of one of the participants to the question: "Do you consider your confidence in speaking in class has grown after carrying out the performance with your classmates?" was also positive, although he claims not to have been specifically concerned with this matter, he added that it was positive: "I don't think that has make a very big change because I was already confident.But I think it is always good and makes a little difference".
Another learner mentioned the fact that they had to write or re-write the script of the play was useful to learn new words in context and added that they learnt a lot through the process: "I think it was useful because we had to write the script of the play so we had to look for the appropriated words for every moment and with that we learnt a lot".
All of them responded "yes" to the question of whether they would use this tool in class for the future: "I would do it because I think it is a funny activity and they can learn a lot from it".To which another participant added: "Yes, I will put it into practice".
Students also highlighted the challenges of working in groups, and how other peers had experienced problems due to the lack of commitment: "I enjoyed working with my group but I heard that other groups had problems because the members of the group were not involved in the same way and that is something that always happens".
When students where asked about their favourite part they gave different opinions: "I enjoyed everything but my favourite part was writing the play because writing jokes in paper and after check if makes people laugh or not is something that I find really thrilling".And another participant said: "The most enjoyable for me was decorating everything and dressing us up, and of course, the performance".
In general, the feedback received from the students, as shown above, indicated that participants not only enjoyed bringing out a play in English but they also emphasized: (1) the fact that they have learnt from each other; (2) that they have been aware of their linguistic progress while rehearsing for the play; and, as teachers-to-be, (3) they are now conscious of the many possibilities that this resource can offer for primary school students too.
Conclusions
The didactic proposal presented here consisted of creating a play within a task-based framework in which students read, (re)wrote, spoke, reasoned, interacted, and worked together with their peers in a dynamic group project with the support of a written script and the guidance of their teacher.This implied that students had to develop their abilities in a semi-independent7 way using theatre as a methodology.This meant that the process would be learner-centred, as established at the beginning.
Notwithstanding, in line with Fleming (2006, p. 6), it is important to highlight the fact that theatre has been tackled in this study as a complement to other approaches and not as a substitute of them.
Throughout this article, we have been able to ascertain that theatre could be a means to learn a foreign language since it can integrate both verbal (pronunciation, vocabulary, grammar, listening, fluency) on the one hand, and non-verbal aspects (cultural aspects, group work abilities and selfconfidence).It has also proven that theatre can provide motivation to learn, can enhance the teaching process and that it can strongly engage adult learners and reinforce their confidence, as stated by Fleming (2006, p.1), at the time it opens forms of communications with their classmates.Moreover, as stated by Wessels (1987) and Philips (2003), and as deduced from the questionnaires, the direct observation and from the rubric employed to assess learners, our students gained fluency; developed improvisation; improved their pronunciation and intonation; acquired new vocabulary and new grammatical structures; and their self-esteem increased at the time their inhibition decreased.
As a teacher of a foreign language, I keep seeking out new ways to teach English in the classroom in which constrains such as time and space are constantly present.I came across theatre after reading about the long tradition of theatre to teach both first and second languages in the United Kingdom and how this tool had become a resourceful strategy for British teachers, and, thus, decided to examine it more deeply and apply it in my classes in Spain; the results have shown that it was worth the effort.
In addition, this experience has encouraged me (and hopefully others) to consider exploring the efficacy of theatre as a didactic experience in the undergraduates' classroom to learn a foreign language in the future.Furthermore, after witnessing the benefits of using theatre in my classroom, I am able to state that the use of this tool to teach a FL should not be undervalued or displaced to an end-of-class activity but should rather be undertaken as a serious tactic that can provide us with a myriad of resources.On this grounds, I would like to pose the following question: Theatre in the University classroom?Why not… | 2020-11-05T09:08:36.211Z | 2020-10-31T00:00:00.000 | {
"year": 2020,
"sha1": "c6c162220f9215840a054779cd3f52ad8345a8db",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.37261/25_alea/5",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "05ebdf55c76d0416923535a505235632bc523edd",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Sociology"
]
} |
252289353 | pes2o/s2orc | v3-fos-license | -41-Research on Network Moral Construction Facing Epidemic Normalization
: The network brings various conveniences to people and has a great impact on people's moral cultivation. To promote the effectiveness of citizens' moral construction in the new era, this study is based on the characteristics of citizens' network moral construction. It points out the difficulties and challenges faced by the construction of network morality from the aspects of false information raging in cyberspace, lack of citizens' network moral integrity and lack of citizens' network moral will, and puts forward the path of improving college students' network moral construction from three angles: giving full play to the value of educators leading the network moral construction, promoting the network moral education construction and optimizing the network moral environment construction. It provides a reference for colleges and universities to guide college students to abide by the norms of network behavior, shape an exact network moral system and establish a civilized and harmonious network home.
Introduction
The emergence of the novel coronavirus pneumonia epidemic has made countries busy coping with it for a period of time and threatened national security to a certain extent. [1] With the spread of the epidemic, public opinion about the epidemic is constantly evolving. If the network public opinion, especially the negative network public opinion, is allowed to spread, it will lead to a serious social management crisis, and then the overall security of the country cannot be realized. [2] Morality is a kind of social consciousness, which is the sum of a special behavior norm to adjust the relationship between people and society, embodying the behavior norm and general requirement of a certain society or class. The so-called network morality refers to the sum of ethical norms and behavioral norms that people should follow in cyberspace. [3] Citizen's network morality is a new form of citizen's morality in new practice places such as the Internet and artificial intelligence. In essence, citizen's network morality is the inheritance and inheritance of citizen's morality in cyberspace, a new practice place. Moreover, it is improved and innovated based on the new problems of network technology itself and cyberspace.
Characteristics of Citizens' Network Moral Construction
The peripheral technologies based on network technology are constantly updated at present. Based on the update of network technology, the technology terminal software that affects public life has formed a new network ecological moral field for the public, which enables the construction of citizens' network morality more challenging. [4]
Personalized Citizen Subject
With the rapid development of mobile Internet technology, the ability of big data collection and learning supported by network technology is stronger. Therefore, the preferences of mobile network subjects are infinitely enlarged and satisfied. Compared with the traditional network citizen subject, the network citizen subject with big data as the map has more characteristics of individuation and uniqueness.
Complicated Moral Construction
Compared with the traditional Internet era, the iterative update speed of network technology in the new era is faster with shorter cycles. How to identify the behavior of the subject using AI technology from the behavior of the machine? Should the subjects involved in moral construction include robots with independent thinking fusion algorithms? How to regulate moral anomie? The above new characteristics have to be considered during the construction of network morality.
Dualing Citizenship
The rise of We Media has greatly promoted the influence of citizens on other citizens. Citizens are not only participants in moral construction but also passive subjects influenced by others. Every citizen is not only a creator of information but also a disseminator of information, a participant in the turnover of network resources, and a direct decision-maker of network resources from birth to death. The development of We Media technology enables this feature more obvious in the new era.
Decentralization of Public Life
Under the background of blockchain technology, citizens can edit the data of what they are engaged in and make "book-entry" data editing for other citizens and things around them. The participants, objects, and even public life itself of the whole public life are decentralized. Since the construction and maintenance of public life order, the communication behavior has been transformed into the data mode edited by the subject.
False Information in Cyberspace
Because of the virtual nature of the Internet and the diversity of the main body of network communication, people's words and deeds in cyberspace are often more unscrupulous than those in real life. Cyberspace still presents weak demonstration characteristics at present. The real-name system of the network has not been implemented. The main role of network behavior is virtual and diversified, so people will naturally weaken the moral constraints and legal norms that they fear in real life. Therefore, driven by interests, the network presents the side of false information flooding and rumors spreading all over the sky.
Lack of Citizens' Network Moral Integrity
Internet users are increasing year by year, who are more closely connected with the network with the change of information dissemination channels. Under the pressure of heavy life and work, a considerable number of citizens rely too much on the Internet and lack subjective initiative, resulting in many behaviors of different knowledge and practice and lack of integrity. For example, the Internet has brought great convenience to academic exchanges and research, but it is precisely because of this convenience that researchers copy academic achievements to cope with scientific research. Even though the Internet, essay writing services are purchased or sold. Academic misconduct not only damages the reputation of citizens and universities but also has a positive influence on the whole society.
Undetermined Citizen's Network Moral Will
A citizen's network moral will is a psychological process in which citizens consciously adjust network behavior to overcome all difficulties and achieve network moral goals in the network moral situation. It is the spiritual qualities such as confidence, determination, and perseverance that citizens show in the process of deciding moral behavior in the network social life. Some citizens show that they are weak in morality and can't firmly resist the interference of bad information and abide by the truth due to the protection of anonymity in the open Cyberspace.
Persistent Cyber Violence
"Individuals in a group will show obvious herd mentality, which Le Bon called 'psychological law of Published by Francis Academic Press, UK -43-group spiritual unity.' This tendency of spiritual unity has caused some important consequences such as dogmatism, paranoia, the feeling that many people are invincible, and the abandonment of a sense of responsibility." Whenever a hot event occurs, people are easily guided by words and the rendering of public emotions, and carry out irrational behaviors, followed by cyber violence.
Personal & Property of Citizens under Threat
Network fraud often appears and with the continuous progress of network technology and the continuous improvement of people's anti-fraud awareness, the fraudsters' means of deception are also varied and hard to prevent. More than 200,000 online fraud cases were cracked nationwide in 2019, involving online shopping, online pyramid schemes, illegal account registration, and other fields, which led to personal information leakage and threats to personal and property safety. After the incident, it was difficult to pursue money. The police faced a series of difficulties in obtaining evidence and arresting.
Over-commercialization & Entertainment of Online Content
Negative factors such as money worship, egoism, and bloody violence in cyberspace cannot be avoided at present. These negative factors blur the moral bottom line and greatly impact mainstream values. The serious tendency of commercialization and entertainment of network information hurts teenagers' mental health and has a great impact on their personality and positive values.
Path of College Students' Network Moral Construction
College students are the backbone of Internet users. They are not only the disseminators of network information but also the acquirers of network information. It is of great significance for college students' all-around development and cyberspace governance to strengthen the construction of college students' network morality and consciously abide by the moral requirements in network life and become the backbone of building clear cyberspace. The construction of college students' network morality mainly starts from the following four aspects.
Give full play to the value of educators leading the construction of network morality
To give full play to the value of educators leading the construction of network morality, first of all, high-quality network moral education teachers should be established. Teachers in normal colleges should increase the teaching content of network literacy in the teaching process. Moreover, schools should strive for relevant policy support and financial assistance to increase the internship opportunities and practical experience of teachers to enhance the professionalism of network moral education teachers. Colleges and universities should pay attention to improving the professionalism and level of network moral education teachers so that they will combine the new problems and new characteristics of the new era, innovate the existing theoretical achievements of network moral education, be good at connecting the actual development of network society with the growth and changes of students, and enable the network moral education work alive via new media and new technologies. Secondly, the selection and management mechanism of network moral education teachers should be improved. In the selection, the principles of fairness, justice, democracy, and the rule of law need to be adhered to, to comprehensively investigate teachers' literacy, focusing on the evaluation of network literacy and the construction of the entry threshold system of network moral level. In terms of management, we should set up a series of professional and technical positions for teachers/workers of network moral education to provide teachers with training and further education opportunities to improve their network literacy and education ability. Third, teachers of network moral education should strengthen self-improvement. Internet thinking means that teachers should not only stick to the teaching of the existing contents of the teaching materials but also closely combine the new hot spots, new changes and the thinking dynamics of college students to get out of the teaching materials and integrate them into the development of the Internet society.
Promote the Construction of Network Moral Education
Network moral education, as an important part of network education, plays a pertinent role in the improvement of college students' network virtue. [5] Lead the content of network moral education with socialist core values: First, the curriculum proportion of great ideals and beliefs needs to be increased. Moreover, the basic knowledge of network ethics and guidelines should be instilled and educated. It should not only involve the significant education of network morality but also increase the basic operational teaching education such as cognition and judgment of network morality phenomenon. Second, the way of network moral education needs to be innovated. Network moral education needs ideological and political courses as the main channel. In addition, we should combine the new characteristics and changes of the network society closely with the moral needs of college students. For example, in the ideological and political class, interacting with students through Internet platforms, mobile apps, etc. should be learned to carry out network moral value communication education. Schools should build a platform for students' online participation, encourage them to participate in online examinations, online debates, online simulated applications, and other activities, and update their network moral cognition in practice to achieve network moral growth.
Optimize the construction of network moral environment
Whether it is the confusion of college students' network moral cognition or the distrust of network communication, it reflects the confusion of network moral environment and the disorder of moral management. First, the mainstream media's control and guiding role in public opinion information need to be brought into full play. We Media's network platform needs to be used to cultivate the moral meaning of the network, such as pushing articles through the WeChat Official Account. Second, perfect the network ethics to play its guarantee role. Network ethics is the basic compliance of college students' survival and development in the network society. The perfection of the content of network ethics should not only aim at and adapt to the realistic characteristics and idealized characteristics of college students but also bridge an effective connection and transformation between realistic ethics and network ethics. Third, strengthen the construction of campus network culture to play its role in self-restraint. For example, campus radio should enrich its broadcast content, which should not only spread the latest national policies but also tell revolutionary heroic deeds. In addition, the push content of the WeChat Official Account should be enriched and innovated to meet the development needs of college students.
Conclusion
Morality is unique to the human being and gradually formed and developed in the interaction between subject and object. For college students, the perfection of network virtue needs not only their subjective efforts but also the joint efforts of various external conditions for improvement. How to enable the objective conditions and subjective aspects of college students to form the best growth synergy needs further research and discussion. | 2022-09-16T15:31:19.575Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "c878d9511ef8a24ba9728d1c33666ca7b5f812be",
"oa_license": null,
"oa_url": "https://francis-press.com/uploads/papers/L1NY2rTyMTwRSodNJ33XVxCqbYJc4155givZ6t85.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "bd60040b829775451024c8601f933b5d02ab9696",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
73660564 | pes2o/s2orc | v3-fos-license | Phytochemical and zootechnical studies of Physalis peruviana L . leaves exposured to streptozotocin-induced diabetic rats
1 Department of Pharmacotoxicology and Pharmacokinetics, Faculty of Medicine and Biomedical Sciences, University of Yaoundé I, P. O. Box 1634, Yaoundé Cameroon. 2 Department of Pharmacy, Faculty of Medicine and Pharmacy, Official University of Bukavu, P. O. Box 570 Bukavu, Democratic Republic of Congo. 3 Department of Pharmacognosy and Pharmaceutical chemistry, University of Bamenda, Cameroon. 4 Department of Medical Laboratory Sciences, University of Bamenda, Cameroon. 5 Department of Biochemistry, Faculty of Sciences; University of Yaounde I, P. O. Box 812 Yaounde, Cameroon. 6 Department of Pharmacology, School of Medicine and Health Sciences, University of Rwanda, Rwanda.
INTRODUCTION
Several pathophysiological processes are involved in the development of diabetes mellitus.These range from autoimmune destruction of the β-cells of the pancreas with a consequent insulin deficiency to abnormalities that result in resistance to insulin action (Armelle et al., 2008;Arika et al., 2016).
Diabetes mellitus is the most prevalent disease in the world, affecting 25% of the population and afflicts 150 million people and is predicted to rise to 300 million by 2025 (WHO, 2002;Babu, 2016).Conventional management of diabetes is expensive and therefore not affordable by many patients, especially in developing nations.More so, conventional drugs are not readily available and have been found to have side effects with long term use (Arika et al., 2015;Deeni and Sadiq, 2002).The distinctive traditional medical opinions and natural medicines have shown a bright future in the therapy of diabetes mellitus and its complications (Ekramul et al., 2002;Arika et al., 2016).
The World Health Organization (WHO, 2002) recommended the use of medicinal plants for the management of DM and further encouraged the expansion of the frontiers of scientific evaluation of hypoglycemic properties of diverse plant species (WHO, 2002;Kwete et al., 2002;Chikezie et al., 2015).
Plants have potential sources of hidden phytoconstituents which can be responsible for solving various potential health problems (Noumi and Yomi, 2001;Kwete et al., 2007).Medicinal plants have curative properties due to the presence of various complex chemical substances of different composition, which are found as secondary plant metabolites in one or more parts of plants (Li et al., 2007;Patil, 2016).They are also associated with reduced risks of cancer, cardiovascular disease, diabetes and lower mortality rates of several human diseases (Momeni et al., 2005;Ozkan et al, 2016).
In a survey conducted in the Eastern part of the Democratic Republic of the Congo (DRC), a number of traditional healers pointed out the use of P. peruviana L. leaves for this purpose (Kasali et al., 2013b).The objectif of this study was to analyze the phytochemical composition and evaluate zootechnical parameters of a hydroalcoholic extract and its fractions in diabetic rats.
Study sites
The present study was undertaken at the laboratory of Pharmacognosy, Faculty of Medicine and Pharmacy, Official University of Bukavu/Republic Democratic of Congo) and chemical study of medicinal plants, bacteria, fungi and endophytes was done at Faculty of Sciences, University of Yaounde 1/Cameroon; Phytochemical laboratory of Higher Teachers' Training College (Faculty of Sciences, University of Yaounde 1) and laboratory of Toxicological and Pharmacological Studies (Faculty of Medicine and Biomedical Sciences/ University of Yaounde 1).This study was conducted between September 2015 and April 2016.
Plant material
The leaves of P. peruviana L. (Solanaceae) were collected at Lwiro (Center for Research in Natural Sciences, Democratic Republic of Congo) situated at 50 km from Bukavu (South Kivu, Democratic Republic of Congo).They were identified a by Mr. Gentil IRAGI of Botany Department of this center and compared with voucher specimen No.2044.The leaves were air-dried and powdered for analysis.
Preparation of hydroalcoholic extract and its fractions 800 g of the powdered leaves of P. peruviana were macerated with 6 L of 70% EtOH (Jothi et al., 2015) for 48 h and the combined filtrate (using the Whatman filter paper No. 1) was evaporated under reduced pressure using a rotary evaporator.A dried extract with a yield of 28.95% was obtained.One part of the filtered hydroalcoholic extract was stored in a refrigerator at 4°C.Another part of this extract was soaked in hexane and decanted into a funnel.The hexane fraction was concentrated in a rotary evaporator (BÜCHI 461 water Bath).This operation was repeated several times until total exhaustion (the solution has become colorless).The same operations were carried out with ethyl acetate.The residue from this fraction was concentrated under reduced pressure using a rotary evaporator.The following yields were obtained: 3.19 and 25.06%, respectively, for hexane and ethyl acetate.
Animals
Healthy male albino Wistar rats (body weight 175 ± 10.6 g) aged 2-3 months, were used in the study.The rats were maintained under standard laboratory conditions at 27.75 ± 1°C, and normal photo period (12 h dark/12 h light) was used for the experiment.The rats were acclimatized to the laboratory conditions a week prior to the experiment.
The experimental protocol and the maintenance of the experimental animals was done in accordance with the regulations of the Organization for Economic Co-operation and Development (OECD) guide since in Cameroon, the ethics committee focuses only clinical studies.The animal experiment protocols were carried out in accordance with the guidelines of the ICH on preclinical pharmaceutical testing in mouse (OECD, 2001;Tsague et al., 2016).
Animal ethical regulatory consideration
Healthy male albino Wistar rats (body weight 150 to 250 g) were ethically required for use for the experiment according to *Corresponding author.E-mail: charlesfokunang@yahoo.co.uk.Tel: +237670902446 Author(s) agree that this article remain permanently open access under the terms of the Creative Commons Attribution License 4.0 International License the ICH guidelines.
The experimental protocol and the maintenance of the experimental animals was done in accordance with the regulations of the OEDD guide, the EU parliament directives on the protection of animals used for scientific purposes, since in Cameroon, the ethics committee focuses only on clinical studies.The animal experiment protocols was carried out in accordance with the guidelines of the ICH on preclinical pharmaceutical testing in mouse (OECD, 2001;Akbarzadeh et al., 2007).
Phytochemical screening
Qualitative phytochemical tests of R. heudelotti methanolic extract were carried out according to Odebiyi and Sofowora (1978) methods to identify some components such as alkaloids, saponins, tannins, flavonoids, polyphenols and anthraquinones.
Test for alkaloids: 0.5 g of the sample was stirred with 5 ml of 1% aqueous HCl on a steam bath and then filtered.1 ml of the filtrate was treated with a few drops of Mayer's reagent and a second 1 ml portion was treated similarly with Dragendroff reagent.Turbidity or precipitation with either of these reagents was taken as evidence for the presence of alkaloids in the extract.
Test for saponins:
The ability of saponins to produce frothing in aqueous solution and to haemolyse red blood cells was used for the screening test.0.5 g of plant extract was shaken with water in a test tube.Frothing which persisted on warming was taken as evidence for the presence of saponins.
Test for tannins: 0.5 g of dried extract was stirred with 5.0 ml of distilled water.This was filtered and ferric chloric reagent was added to the filtrate.A blue-black precipitate was taken as evidence for the presence of tannins.
Test for phenol and polyphenols: 0.5 g of plant extract was heated for 30 min in a water bath.3 ml of 5% FeCl2 was added to the mixture, then followed by the addition of 1 ml of 1.00% potassium ferrocyanide.The mixture was filtered and green (phenol) and blue (polyphenol) colours were observed.
Test for anthraquinones: 0.5 g of plant extract was shaken with 5 ml of benzene, filtered and 2 ml of 10% ammonia solution was added to the filtrate.The mixture was shaken and the presence of a pink or violet colour in the ammoniacal (lower) phase indicated the presence of free hydroxy anthraquinones.
Test for flavonoids: 0.5 g of plant extract was dissolved in 5 ml of NaOH at 1 N.The change of the yellow colour obtained after adding HCl 1 N indicated the presence of flavonoïds.
Experimental protocol
The rats were divided into six groups of five rats in each group.Group 1: Untreated rats (control), received vehicle alone (1% tween 20, 1 ml per orally); Group 2: Rats treated with 6.5 mg/kg of glibenclamide, positive control; Group 3: Rats treated with 100 mg/kg of hydroalcoholic extract of P. peruviana; Group 4: Rats treated with 100 mg/kg of the hexane fraction of P. peruviana; Group 5: Rats treated with 100 mg/kg of the ethyl acetate fraction of P. peruviana; Group 6: Rats treated with 100 mg/kg of the residue fraction of P. peruviana.
All rats were administered single dose of drug (orally) daily for 28 days.Daily administration was through a gastric gavage by inducing a gastric tube (Gatierrez et al., 2014).The day of administration of first dose was considered the zero day of treatment.
At the end of the experimental period, all animals were deprived of food overnight and then sacrificed by cervical decapitation after anesthetizing by either inhalation (Saini and Sharma, 2013).One touch electronic Glucometer (One Touch Ultra®) was used for glucose measurement.
Water consumption and food intake
The body weight of each rat was measured once each week and the total amount of food consumed was recorded 3 times per week (Gutierrez et al., 2005).
Body weight monitoring
Body weights of all animals in each group were monitored using a top loader weighing balance throughout the experimental period (Ofusori et al., 2012).
Statistical analysis
All results were expressed as mean ± standard error (SE) for each sample.Statistical analysis was performed using GraphPad Prism 5.02 statistical package (GraphPad Software, USA).The data were analyzed by one way analysis of variance (ANOVA) followed by Turkey's multiple comparison post test.Differences between groups were considered to be significant at P < 0.05.
Phytochemical analysis
The results in the Table 1 represent the phytochemical analysis of some fractions from the hydroalcoholic extract of P. peruviana leaves.According to these results, the phytochemical analysis showed the presence of tannins and saponins in all fractions except in the hexane fraction.However, the method used did not show resins and oxalates.In addition, the highest percentage of positive tests were obtained from the hydroalcoholic extract (37.5%) respectively, followed by fractions with ethyl acetate residue (25%) and ethyl acetate (25 %), and finally the hexane fraction (12.5%).
Water consumption and food intake
The water consumption showed a highly significant increase (*** = P ˃ 0.001) in all animals treated against healthy animals (control), however intake food variation showed a highly significant decrease (P ˃ 0.001) in all animals treated as compared to the healthy animals (Figure 1).
Body weight monitoring
There was a highly significant (p˂ 0.001) body weight changes in all treated rat groups as compared to the control the group (Figure 2).Between groups treated (with glibenclamide and plant extract/fractions), there was no significant difference (P<0.05).There was a significant difference (p˂ 0.05) between the control group and those treated with the aqueous alcoholic extract.
Assessment of weight variation of organs
From the study as shown in Table 2, there was no significant difference (p˂ 0.05) in the change of organ weight in all animals treated for heart, liver, brain, spleen and the two kidneys.However, a significant difference (p˂ 0.05) in weight variation of pancreas, liver, brain, lungs and testicles was noticed (Table 2).
DISCUSSION
There was an uneven distribution in this study of secondary metabolites in the hydroalcoholic extract and its fractions.Saponins and tannins tests were positive in the extract and in two of its fractions (ethyl acetate and its residue fractions), and having 25% of positive tests.Polyphenols compounds, flavonoids, anthocyanins, mucilage, cardiac glycosides and coumarin represented 9.4% in each category.This category is followed by grouping betalains (6.2%).Alkaloids represented 15.6% of positive tests for three different reagents used, which gave an average of 5.2% per reagent.Steroids and terpenes represented 3.1%.Quinones, resins and oxalates are not found (0%).Many preceding studies reported the presence of some secondary metabolites in the fruit or the leaves of P. peruviana.Some studies indicated that cardiac glycoside, alkaloid, saponins, tannins, steroids and terpenoids and flavonoids were present while anthraquinones were absent in the leaves (Moabe et al., 2013;Magambu et al., 2014).A phytochemical screening in regeneration plant, callus from seed, leaf and fruits from mother plants of P. peruviana, showed the present of alkaloids, glycosides, cardiac glycosides, saponins, phenol, sterol, tannins, flavonoids and diterpene (Lashin and Elhaw, 2016).A phytochemical investigation of the crude ethanolic extract of P. peruviana L. revealed the presence of phenols, flavonoids, phytosterols, glycosides, sterols, saponins, tannins and alkaloids (Ahmed, 2014).Previous phytochemical studies have isolated a number of compounds from P. peruviana, such as ticloidine, withanolides, phenolics and phytosterols (Gautam et al., 2015).P. peruviana contain the pseudo-steroids (physalines) and glycosides which show the anticancer activity.From the aerial parts of P. peruviana, various withanolide glycosides have been isolated.From the whole plant material, there is isolation of two withanolides (Sharma et al., 2015).Three new physalin steroids, physalinIII, physalin IV, 3-O-methylphysalin X, together with five known physalins were isolated from the 80% EtOH extract of calyces of P. alkekengi var.franchetii (Yu et al., 2013).
A number of phytochemical are known, some of which include: alkaloids, saponins, flavonoids, tannins, glycosides, anthraquinones, steroids and terpenoids.They do not only protect the plants but have enormous physiological activities in humans and animals.These include cancer prevention, antibacterial, antifungal, antioxidative, hormonal action, enzyme stimulation and many more.Phytochemicals are responsible for the medicinal activity of plants and they have protected human from various diseases (Savithramma et al., 2011).Many classes of plants secondary metabolites, such as alkaloids, terpenoids, polyphenols, flavonoids and many others show promising antidiabetic potentials.These natural constituents may act as a promising source of delivering oral hypoglycemic effect with minimal side effects (Singab et al., 2014).
According to the results, the administration of a single dose of Streptozotocin (50 mg/kg weight body) increased water consumption.Other studies have shown that diabetes mellitus is characterized by classical symptoms such as polyphagia, and polydipsia which are exhibited in HFD-STZ diabetic rats and this may be attributed to the impaired glucose homeostasis as a result of insulin inefficiency (Akbarzadeh et al., 2007).Water consumption was inversely related to food intake, this was an indication that the decrease in food intake in diabetic animals was linked to the significant amount of sugars in the blood that had an impact on the index of satiety.High levels of sugars were associated with decreased appetite and short-term food intake as has been reported also (Anderson and Woodend, 2003).Oral treatment with the fruit extract of P. peruviana to diabetic group of rats decreased food and fluid consumptions which could be due to improved glycemic status (Sathyadevi et al., 2014).
This study showed a significant decrease in final weight, weight gain at p< 0.001 and also food intake at p< 0.05 as compared to control group.Both Physalis powder and juice treated groups showed a significant decrease in final weight (p< 0.01 and 0.05, respectively), weight gain and FER at p< 0.05 as compared to the control group.Aqua Physalis extract and methanol Physalis extract treated groups showed non-significant difference in these parameters as compared to the control group.Physalis powder, juice, aqua Physalis and methanol extract treated groups showed a significant increase in final weight, weight gain, food intake as compared to reference group (Hafez et al., 2011).This study also showed differences in some changes in organ weight and body weight Diabetic condition has been known to be associated with weight loss as reported by Anderson and Woodend (2003).The weight loss recorded in untreated diabetic animals could be a symptom of ill health, which may have been caused by the release of free radicals (Abdelmoaty et al., 2010).
The streptozotocin-induced diabetes rats showed significant loss of body weight with respect to the extract treated and controlled groups.Kumar et al. (2011) reported that antidiabetic and antihyperlipidemic effects is best induced in rat models using streptozotocin-induced rat models for better comparison with test plants, and give a better result profile of the test battery.With respect to the reference group, the inability of the plant to improve the animals' weight, at the end of treatment was observed, although a stabilization of weight was recorded at the end of treatment.
Conclusion
The result of this present study showed that the hydroalcoholic extract of P. peruviana and its fractions contains many secondary metabolites which will be used against diabetes.It is interesting to isolate and characterize some compounds of this plant and to extend their antidiabetic potential investigations.
Figures 1 .
Figures 1. Water consumption and food Intake.
Figure 2 .
Figure 2. Body weight monitoring of test rats when compared with the control.
Table 2 .
Weight variation of organs.The values are expressed as mean ± SEM of the respective groups (n=5).The weight values of groups are compared with normal control animals, value *p< 0.05 and **p< 0.01.Exp.I: Group treated with 100 mg/kg of hydroalcoholic extract of the plant; Exp.II: Group treated with 100 mg/kg of the hexane fraction of plant; Exp.III: Group treated with 100 mg/kg of the ethyl acetate fraction of plant, Exp.IV: Group treated with 100 mg/kg of the residue fraction of the plant. | 2018-12-30T15:09:15.527Z | 2017-08-31T00:00:00.000 | {
"year": 2017,
"sha1": "d722e084024ce2ad287db3614247ecf2d40c4b1b",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/JPP/article-full-text-pdf/26682D065767.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d722e084024ce2ad287db3614247ecf2d40c4b1b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
226518105 | pes2o/s2orc | v3-fos-license | Continuous spectra of light charged particles from interaction of 30 MeV en- ergy protons with cooper
This paper presents the experimental double-differential and integral cross sections of reactions (p,xp), and (p,xα) on natCu nucleus. The experiment with the protons, accelerated to energy of 30.0 MeV was performed at the isochronous cyclotron of Institute of Nuclear Physics (Kazakhstan). We investigated the adequacy of the theoretical models in explaining the measured experimental data and contributions of direct, preequilibrium and compound processes in the formation of the cross sections were determined. We assert that the traditional frameworks are valid for the description of the experimental data.
Introduction
In the middle of the last century, the idea of creating a nuclear power system was put forward, implemented to date as the Accelerator Driven System (ADS), consisting of a proton accelerator (deuterons), a neutron-producing target and a subcritical reactor (blanket) with a thermal neutron flux [1,2]. In addition to the energy production, the system allows the transmutation of long-lived radioactive waste from the nuclear industry [3]. According to the physical scenario of the ADS operation, high-energy protons during the passage of the target assembly generate not only a neutron flux, but also a spectrum of more complex nuclides of hydrogen and helium that act as initiating reaction agents with the emission of secondary neutrons. The range of nucleon composition and excitation energies in the ADS system is much more complicated than in traditional reactors.
There is need in new experimental data on nuclear reactions with hydrogen and helium nuclides occurring in the spallation target, fuel assemblies, and structural materials [3,4]. Reviews on available experimental data of reactions with nucleons are presented in [5]. Several experiments on double-differential cross sections measurements have been performed at energy about 30 MeV [6][7][8][9][10][11]. More over at this energy many channels of reactions are open, and the total cross section of the reactions for nuclei of such mass region reaches its maximum [12].
For this reason, the double-differential cross sections of light particles (protons and α-particles) emitted from proton induced reactions on nat Cu at the incident energy of * e-mail: zholdybayev@inp.kz 30.0 MeV were measured. The copper was chosen as object of investigation as widely used constructional material in different nuclear plants [13]. The energy spectra of secondary particles protons and α were measured earlier on 63 Cu target only at proton energy of 14 MeV [14]. The experimental data were analysed in frame of spin-dependent statistical theory and good agreement for (p,xα) and poor one for (p,xp) were obtained.
Experiment
The experimental data were obtained on the proton beam of isochronous cyclotron U-150M at the Institute of Nuclear Physics of Kazakhstan. A system of multiprogramming analysis was adapted for the measurement of the inclusive spectra of protons and alpha-particles in the maximum energy range of the secondary particles. The measurements of cross sections of nuclear reaction products were carried out using a scattering chamber, equipped with a rotary spectrometer of charged particles, target systems, collimation system and the Faraday cylinder to measure the number of particles passing through the target. The measurements have been done within the angle range of 30-135 • for the inclusive reactions (p,xp) and within the angle range of 30-120 • for (p,xα).
The standard ∆E-E technique for registration and identification of products of nuclear reactions was used. The detector telescope had a silicon ∆E detector a thickness of 30 microns and silicon E detector with a thickness of 2000 microns for the reaction (p,xα). For the reaction (p,xp), the thickness of the silicon ∆E detector was 100 microns, and the thickness of the stop detector of total absorptioncrystal CsI(Tl) was 25 mm. The solid angles of the tele- scopes were made equal to 5.34x10 −5 and 4.62x10 −5 sr, respectively. The self-supporting foil of thickness of 3.5 mg/cm 2 with natural cooper was used as target. The thickness was determined by the energy loose of α-particles from radioactive sources while passing through the target.
The kinetic energy of the particle of corresponding channel number X was determined from the known state of the residual nucleus (target 12 C, CH 2 ) in order to calibrate the E-detector.
Systematic errors of measured cross sections were mainly due to the uncertainty in the thickness of the target determination (<5%), the current integrator calibration (1%) and the solid angle of the spectrometer (1.3%). The energy of the beam of accelerated particles was measured with an accuracy of 1.2%. The angle of the registration was recorded with an accuracy of 0.5 • . The total systematic error did not exceed 10%.
The statistical error, the value of which depended on the type and energy of the detected particles, was 1÷8% for protons and 1÷15% for α-particles.
Integrated over an angle cross sections of reactions nat Cu (p,xp) and (p,xα), determined from the doubledifferential cross sections and averaged over the energy range of 0.5 MeV are shown in Figures 1 and 2. Table 1 contains the numerical values of the experimental partial cross sections of these reactions.
Results
An analysis of the experimental results of reactions (p,xp) and (p,xα) on the nucleus nat Cu at E p = 30.0 MeV was done within the exciton model of decay of nuclei, which was a statistical approach describing the transition of the excited nucleus to equilibrium state. This model is widely used in the interpretation of many experimental results. One of the advantages of the model was that the kinetic equations basically, could describe the process of relaxation of the excited nuclear systems, from the simplest quasiparticle configurations to establishment of statistical equilibrium. It is essentially a statistical model, where the excited states of a compound system are characterized by the number of excited particles (above the Fermi level) and holes (below the Fermi level). The model describes the energy spectra both nucleons and complex particles in exit channel simultaneously.
In the two component exciton model, proton and neutron degrees of freedom are considered separately [15] and it is assumed that the nucleus was characterized by parameters p π , h π , p ν and h ν , where p and h denote particle and hole, and π and ν -proton and neutron degrees of freedom, respectively. They are related with the parameters of a single-component model in the following relations p=p π +p ν and h=h π +h ν . They can also be combined to obtain the total number of excitons n=p+h=p π +h π +p ν +h ν =n π +n ν .
It was assumed that the compound nucleus is formed with the particle-hole configuration that allows only for the incident nucleon as particle degrees of freedom and did not include hole degrees of freedom. Such configuration is denoted as (p π , h π , p ν , h ν )= (Z a ,0,N a ,0), where a refers to bombarding particle.
The difference between the number of particles and holes in the transition to the equilibrium state remained constant for the compound nucleus p π -h π =Z a , p ν -h ν =N a and p-h=A a , where A a was mass number of the incident particle. This condition was not always true, especially when approaching a state of equilibrium, but it was adequate for the pre-equilibrium calculations. The theoretical analysis of the experimental results was made within the code PRECO-2006 [16], which had been optimized for this case. We chose the (Z a ,0,N a ,0)=(1,0,0,0) particle-hole configuration as our starting point. The normalization factor was taken equal to 15 MeV. The optical potential parameters of Huizenga [17] have been used for αparticles, and those of Becchetti and Greenlees [18] for protons. The excitation energy of compound nucleus and binding energy of the protons in the primary and secondary emission were calculated. In addition to the calculations within exciton model, the calculations in the framework of other mechanisms of nuclear reactions: direct processes (nuclon transfer, nucleon knock out, inelastic scattering) and equilibrium emission with the formalism of the Weisskopf compound nucleus decay were carried out. the nucleus nat Cu are presented in Figure 1 and 2. It was found that the main contribution to the integral cross sections of reactions (p,xp) in the energy range from 10 MeV to bump corresponding to the elastic and inelastic scattering, provided by pre-equilibrium mechanism (line 2 in Figure 1). In the low-energy part of the spectrum, in addition to the pre-equilibrium, the contribution of the compound processes was significant (line 3 in Figure 1). The contribution of single-step direct mechanisms in the (p,xp) was negligible.
When considering the contribution of the mechanisms forming inclusive cross sections of reactions (p,xα), it was observed that the formation of high-energy α-particles was due to direct single-step processes (line 1 in Figure 2). The contribution of emissions from the equilibrium state increased with the decrease of α-particle energy, and it was determinant in the cross section formation of the lowenergy range. The contribution of the components of the pre-equilibrium was negligible.
In addition to the exiton model, calculation based on quantum-mechanical theory of the preequilibrium decay have been carried out in frame of EMPIRE-II code [19]. The analysis of the experimental cross sections of the nat Cu(p,xp) reaction is carried out within the Hauser-Feshbach theory by considering the multiparticle emission of both single-charged (protons, deuterons) and twocharged (α-particles) fragments. In this code, the contributions of statistical direct and compound processes are described by multistep direct [20] and multistep compound [21] models.
The results of the calculations are shown in Figure 3. These results indicate that the form of integral spectra of reaction (p,xp) for emission protons energy from 5 MeV up to the kinematical limit is determined by the multi-step direct processes. The contribution of the multi-step compound mechanism is negligible. The emission of protons from 2.5 up to 10 MeV are determined by he Hauser-Feshbach theory.
Conclusions
We have presented new experimental data at E p =30.0 MeV within the angle range of 30-135 • for the inclusive reactions (p,xp) and within the angle range of 30-120 • for (p,xα) on nucleus nat Cu, which has not been investigated in detail so far. We have shown the extension of the preequilibrium reactions to this energy region and have interpreted the results of the experiments. We have also discussed the adequacy of the theoretical models in explaining the measured experimental data. In our theoretical analysis, we determine the contributions of direct, preequilibrium and compound processes in the formation of measured cross sections. The obtained experimental results complete the base of nuclear reaction cross section data and can be used in the designing of safe and wasteless hybrid nuclear power plants.
The work was supported by TUBITAK under the project 118R029 and Ministry of Education and Science of the Republic of Kazakhstan, grant BR05236494. | 2020-10-28T18:43:19.240Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "791886aeb7737a437c0973b8cff3c63f85f9a942",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2020/15/epjconf_nd2019_01033.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "89c850b2ee3a0f501487028b78e6fb6088ca343b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
56390091 | pes2o/s2orc | v3-fos-license | Research in Depolarization and Extinction Coefficient of Particles in Tibetan Plateau by Lidar
Vertical profiles of the depolarization ratio and the extinction coefficient of atmospheric particles in Tibetan Plateau were measured with the OUC Water Vapor, Cloud and Aerosol Lidar during the 3 Tibetan Plateau Atmospheric Expedition Experiment Campaign in 2013 and 2014. The cloud types and phases, the spatial temporal distribution of the aerosols and the boundary layer height in the Tibetan Plateau were obtained using polarization lidar technique. In this paper, the depolarization ratio was validated with CALIOP polarization simultaneous data, and the extinction coefficient was retrieved by the Fernald method. The result implied that the atmosphere in the Tibetan Plateau was quite clean with low aerosol load and serious pollution. The ice-water mixed cumulus, water cumulus or stratus clouds in Litang and Nagqu were occurred and classified, respectively. The boundary layer height in Nagqu at average altitude over 4600 m was obtained at around 200 m-300 m, which was commonly lower than that in other observed sites.
INTRODUCTION
Atmospheric aerosol particles have a complicated influence on the earth climate by directly absorbing and scattering the atmospheric radiation and by indirectly serving as cloud condensation nuclei [1] .The Tibetan Plateau lies at a critical and sensitive area, which influences the atmosphere in East Asia area and even the whole northern hemisphere.So it is significant to study the impact of aerosol on atmospheric condition and composition in Tibetan Plateau.The polarization lidar technique [2] has been an excellent method of atmospheric probing, which plays an important role in the detection of spatial and temporal distribution of aerosol and cloud phase.In general, it bases on the particle polarization effect, that is, the polarization orientation of backscatter signal of the spherical particles is the same as the linearly polarized laser pulse while the polarization orientation of the non-spherical particle is changed and consists of perpendicular and parallel signals.
The ratio of these two signals corrected by the calibration factor is referred to as the particle linear depolarization ratio.Liu et al. [3] indicated the feasibility of using backscatter coefficients and backscatter color ratios to distinguish aerosols from clouds, which showed clouds have larger backscatter coefficient ( 0.01 ) and higher color ratio around 1 than mostly aerosol types except for optically thin cloud, and the cumulus or stratus clouds generally have higher backscatter coefficients than cirrus.Sassen et al. [4] stated that different cloud phases correspond to differentiable depolarization ratio range, which can be an essential basis to distinguish cloud phases.With the help of particle linear depolarization ratio and extinction coefficient (or backscatter coefficient), the cloud types and phases, the spatial temporal distribution of the aerosols in observed site can be determined, supplying useful and synergistic data to other kinds of aerosol lidars.
POLARIZATION CHANNEL SETUP DESCRIPTION
The OUC Water Vapor, Cloud and Aerosol Lidar (WVCAL) is multifunctional, and the detailed introduction can refer to a parallel paper in this conference.Fig. 1 presents the schematic diagram of the polarization channel setup.The laser transmitter is a Nd:YAG laser, which produces a laser pulse at 532nm with energy per pulse of 120mJ and a repetition rate of 30Hz.The beam divergence angle is 0.05 mrad.The telescope has an aperture of 308mm with field-ofview of 1.3mrad.After the backscatter signals are collected by telescope and split by spectrophotometer system, the 532nm signal is transmitted to the polarizing beam splitter (PBS) and is separated to P ∥ and P .
RETRIEVAL OF DEPOLARIZATION RATIO AND EXTINCTION COEFFICIENT
The linear volume depolarization ratio v is commonly defined as the ratio of the total perpendicular-polarized backscatter power ( P ) to total parallel-polarized backscatter power ( P ∥ ), measured with polarization lidar by [5] : According to the lidar equation ( 2): Where C is the system constant, is the backscatter coefficient, is the extinction coefficient, the subscripts a and m represent aerosol and molecular, respectively, and r is the range resolution.The // P and P can be described as ( 3) and (4), respectively: Where // C and C are system constants for parallel-polarized channel and perpendicularpolarized channel, respectively. // C and C represent the different receiver efficiency on the two channels including the PBS cross-talk effect, and the lidar system depolarization effects including non-linear polarized laser source and optical device depolarization effect.As a result, v cannot be used directly to classify the aerosol types and cloud phases.To retrieve the depolarization properties of particles, the particle linear depolarization ratio p was defined, and the relationship between p and v can be described as ( 5) : The calibration factors a and b , which depend on the system itself, were obtained by WVCAL and CALIPSO validation experiment. [6]sed on the signal of the polarization lidar at 532nm, the aerosol extinction coefficient can be calculated by the Fernald method [7] , and can be written as equation ( 6): was less than 5% [8] .
The extinction coefficients of the lidar were compared with the CALIOP polarization data.
One study case measured on September 30, 2013 was shown in Fig. 2. The black dashed line represents the lidar data and the other color lines represent CALIOP polarization data, which shows good consistency.
RESULTS
In this section, some cases including information about particle liner depolarization ratio, extinction coefficient and boundary layer height is discussed.In Fig. 3, the P at the height of 2-5 km above ground level (in red dashed box) was in the range of 0.1 to 0.35.Meanwhile, a was bigger than 5 km -1 .It was implied that clouds existed and can be classified as mixed phase cumulus clouds [9] .Furthermore, the P near the ground was less than 0.1 and even less than 0.05, and a was about 0.05km -1 .So it was confirmed that the aerosol loading near the ground in Litang was rare, and the aerosols were free of biomass burning aerosols and pure dust.mixed phase clouds dominated by water clouds, and the cloud type can be classified as cumulus or stratus clouds [10] .Except the clouds, the atmosphere above 1 km was very clean and the depolarization ratios P were close to zero.The air above 1 km was free or contains tiny of the non-spherical particles.Furthermore, the P near the ground was less than 0.1 and even less than 0.05 and a was about 0.5km -1 .So it was implied that the air conditions in Nagqu were quite clean and had no other types of aerosols and serious pollution.Fig. 5 showed the boundary layer height in Nagqu at that period was about 200m-300m, which was commonly lower than that in other observed sites.
CONCLUSIONS
The key conclusions of the study were listed below: The extinction coefficients of WVCAL were compared with CALIOP polarization data, which showed good consistency.In the process of clouds classification, a can generally distinguish aerosols from clouds, and further classify cloud types.
p can then judge the cloud phases.In the Tibetan plateau experien, the ice-water mixed cumulus clouds, water cumulus or stratus clouds in Litang and Nagqu were observed and classified.Most of p near ground in Tibetan Plateau were less than 0.1 and even 0.05, showing that the aerosol loading near the ground was rare and the aerosols were free of biomass burning aerosols and pure dust.Moreover, the atmosphere above 1 km was very clean and the p was close to zero.
The air above 1 km was free or contains tiny of the non-spherical particles.The boundary layer height in Tibetan Plateau was about 200m-300m, which was commonly lower than that in other observed sites.
Fig. 1 .
Fig.1.Schematic diagram of the polarization channel of WVCAL system at 532 nm.
a constant and was independent of the range.The reference height 0 z was chosen where the atmospheric layer was relative clean with rare aerosols.In this paper, the Minimum Value Method was used to find out 0 z , that is, 0
Fig. 3 .
Fig.3.The depolarization ratio and extinction coefficient measured on July 28, 2013 (In Litang).(a) Profile of particle depolarization ratio (black line) and the error bar (pink line).(b) Temporal development of particle depolarization ratio.(c) Profile of a (black dashed line) and m ( pink dashed line).(d) Observation of aerosol extinction coefficient a | 2018-12-15T19:40:11.669Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "a9ba9cec0c7a22a6199011881708f45f3333f4b2",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2016/14/epjconf_ilrc2016_23030.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a9ba9cec0c7a22a6199011881708f45f3333f4b2",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
216111126 | pes2o/s2orc | v3-fos-license | End-of-Life Care of Hospitalized Children with Advanced Heart Disease
Background Despite improvements in palliative care for critically ill children, the characteristics of end-of-life care for pediatric patients with advanced heart disease are not well-known. We investigated these characteristics among hospitalized children with advanced heart disease in a tertiary referral center in Korea. Methods We retrospectively reviewed the records of 136 patients with advanced heart disease who died in our pediatric department from January 2006 through December 2013. Results The median age of patients at death was 10.0 months (range 1 day–28.3 years). The median duration of the final hospitalization was 16.5 days (range 1–690 days). Most patients (94.1%) died in the intensive care unit and had received mechanical ventilation (89.7%) and inotropic agents (91.2%) within 24 hours of death. The parents of 74 patients (54.4%) had an end-of-life care discussion with their physician, and the length of stay of these patients in the intensive care unit and in hospital was longer. Of the 90 patients who had been hospitalized for 7 days or more, the parents of 54 patients (60%) had a documented end-of-life care discussion. The time interval from the end-of-life care discussion to death was 3 days or less for 25 patients. Conclusion Children dying of advanced heart disease receive intensive treatment at the end of life. Discussions regarding end-of-life issues are often postponed until immediately prior to death. A pediatric palliative care program must be implemented to improve the quality of death in pediatric patients with heart disease.
INTRODUCTION
Heart disease is one of the leading causes of death among children despite the fact that the mortality rate for pediatric patients with advanced heart disease has declined significantly owing to improvements in medical care and cardiac surgeries. [1][2][3] In addition, heart disease is a major cause of pediatric death due to complex chronic conditions, which account for a significant proportion of pediatric patients who die during hospitalization. 4,5 Children who died from advanced heart disease in hospitals usually received highly aggressive and technical treatment in intensive care units at their end-of-life. 6-8
Study population
We conducted a retrospective chart review of patients who died in the pediatrics department of our hospital between January 2006 and December 2013. Pediatric patients who were diagnosed with advanced heart disease were included in the study. Young adults between the ages of 18 and 35 years were also included to consider end-of-life care for young adults with complex congenital heart disease. To investigate the characteristics of patients with advanced heart disease with sufficient medical records and to minimize the confounding factors, patients who met the following criteria were excluded from the study: 1) patients who died outside the hospital or in the emergency department, 2) patients who died within 1 month after cardiac surgery, and 3) patients who had an extremely low birth weight, congenital diaphragmatic hernia, persistent pulmonary hypertension of the neonate, malignancy, or an immunodeficiency disorder.
Data collection
The following patient characteristics at the final hospitalization were recorded: age, sex, diagnosis of primary heart disease, comorbid genetic disease, length of intensive care unit treatment, and duration of hospitalization. Furthermore, we collected data on parent' education level, residence type, and national health insurance status to identify the socioeconomic status of each patient's family.
The mode of death was categorized as follows: 1) death after withdrawal or withholding of life-sustaining support, 2) death during cardiopulmonary resuscitation, and 3) brain death. The location at the time of death and the cause of death were also recorded. Furthermore, we investigated whether cardiopulmonary resuscitation had previously been performed during the final admission. The total length of the final cardiopulmonary resuscitation was analyzed in the group of patients who died during resuscitation.
We also reviewed the interventions performed within the 24 hours prior to death. These data included the presence and type of mechanical ventilation and the use of inhaled nitrogen oxide, dialysis, and parenteral nutrition support. The use of extracorporeal membrane oxygenation support, left ventricular assist device, pacemaker, implantable cardioverter-defibrillator, and cardiac resynchronization therapy device were also recorded. The presence of central and arterial lines and tracheostomy and gastrostomy tubes was evaluated, as well as the administration of inotropes, antibiotics, analgesics, sedatives, and neuromuscular blockers.
End-of-life care discussion between physician and patients/guardians were identified by searching for documented end-of-life discussions in medical records and the presence of written consent regarding a do-not-resuscitate order. End-of-life care discussions included explanations regarding life expectancy, withdrawal or withholding of certain life-sustaining support, and preferences regarding resuscitation or palliative care. Written consent regarding a do-not-resuscitate order that had been signed by the parents during the final cardiopulmonary resuscitation was not included. We analyzed the relationship between documented end-of-life care discussions and patients' characteristics and the socioeconomic status of their family. Finally, we determined the time interval between the end-of-life care discussion and death for the patients who had been hospitalized for 7 days or more and had documented end-of-life care discussions.
Statistical analysis
Descriptive data are presented as medians and ranges or means and standard deviations, whereas categorical variables are presented as numbers and percentages. The Mann-Whitney U test and Pearson's χ 2 test/Fisher's exact test were performed for continuous and categorical variables, respectively. A P value less than 0.05 was considered statistically significant. Data manipulation and statistical analyses were performed using SPSS 23.0 for Windows (IBM SPSS, Inc., Chicago, IL, USA) and Microsoft Office Excel 2013 (Microsoft Inc., Redmond, WA, USA).
Ethics statement
The present study protocol was approved by the Institutional Review Board of Seoul National University Hospital (approval No. 1510-050-710), and patient consent was waived because of the study's retrospective design.
Patient characteristics
Of the 652 patients who had died in the pediatric department from all causes of death during the study period, 136 patients with primary heart diseases were included in the current study. Their median age at death was 10.0 months (range, 1 day-28.3 years) ( Table 1). More than half of the patients (72, 52.9%) died within a year after their birth, and 24 patients of them died during the neonatal period. The median duration of the final hospitalization was 16.5 days (range, 1-690 days), and 90 patients (66.2%) were hospitalized for 7 days or more. Onequarter of the patients (33, 24.3%) were confirmed or suspected of having comorbid genetic disease. The majority of patients (130, 95.6%) received treatment in an intensive care unit at least once during the final hospitalization, and 74 patients (54.4%) received treatment in an intensive care unit for 7 days or more. Three-fourths of the patients (97, 71.3%) had congenital heart disease, and 36 patients of them had a single ventricle physiology. Most parents (118, 86.8%) had Korean national health insurance. One-third of the patients (46, 33.8%) lived in the capital where the hospital was situated.
Circumstances of in-hospital death of children with advanced heart disease
Seventy patients (51.5%) died following the withholding or withdrawal of life-sustaining treatment, whereas 66 patients (48.5%) died during cardiopulmonary resuscitation ( Table 2). (13.2) Data are presented as number (%) or median (range). ICU = intensive care unit. a Two heart transplantation, 3 primary pulmonary hypertension, 1 mycotic aneurysm, 1 infective endocarditis, and 1 cardiac tumor. Among the patients who died during cardiopulmonary resuscitation, the median duration of the cardiopulmonary resuscitation was 51.5 minutes (range, 4-257 minutes), and 25 of 66 patients received cardiopulmonary resuscitation for 1 hour or more. More than half of the patients (71, 52.2%) died from multi-organ failure. Most patients (128, 94.1%) died during care in the intensive care unit, and only 6 patients died in the general ward.
Interventions performed within 24 hours of death
Most patients (122, 89.7%) had received mechanical ventilation, and 14 patients (10.3%) had received high-frequency oscillatory ventilation within 24 hours of death ( Table 3). Extracorporeal membrane oxygenation was applied to 6 patients (4.4%). Of the 12 patients (8.8%) with cardiac devices, 10 patients had a pacemaker, 1 had an implantable cardioverter defibrillator, and 1 had a cardiac resynchronization therapy device. Of the 15 patients (11.0%) who had received renal replacement therapy for acute renal failure, 12 patients received continuous renal replacement therapy and 3 received peritoneal dialysis. Fifty-six patients (41.2%) had received parenteral nutrition support prior to death. Most patients (124, 91.2%) had required inotropic support. More than half of the patients (76, 55.9%) received sedatives, and one-third of the patients (42, 30.9%) received analgesics.
End-of-life care discussions for children with advanced heart disease
Seventy-four patients (54.4%) had documented end-of-life care discussions; of these, the discussions of 19 patients (19/74, 25.7%) occurred on the date of the patient's death. Seventy patients (70/74, 94.6%) died following the withholding or withdrawal of life-sustaining treatment and consent to a do-not-resuscitate order was written in 59 patients (59/74, 79.7%). The patient's age, sex, and comorbid genetic disease were irrelevant to the documented end-of-life care discussion ( Table 4). The patients who had documented end-of-life care discussions were hospitalized longer than those who did not (23 days [range, 1-366] vs. 12 days [range, ; P = 0.042). The former patients had also remained for longer in the intensive care unit (15.5 days [range, 1-300] vs. 6 days [range, ; P = 0.020). Parents' education level, residence type, and national health insurance status were not related 5/10 https://jkms.org https://doi.org/10.3346/jkms.2020.35.e107 Children with Heart Disease: End-of-Life Care to the discussion regarding patients' end-of-life care. All discussions regarding end-oflife care occurred between the physicians and patients' parents or guardians; thus, no patients participated in the discussion of their end-of-life care. Of the 90 patients who were hospitalized for 7 days or more, the parents or guardians of 54 patients (54/90, 60%) had end-of-life care discussions. The time interval from the end-of-life care discussion to death was 3 days or less for 25 patients (25/54, 46.3%) (Fig. 1)
DISCUSSION
This study explored the recent trends in the end-of-life care for pediatric patients with advanced heart disease in a tertiary center in Korea. First, most patients (128, 94.1%) received highly intensive treatment and died in the intensive care unit. Second, half of the patients (66, 48.5%) died following an unsuccessful cardiopulmonary resuscitation, and the duration of some cardiopulmonary resuscitations was prolonged. Third, discussions regarding endof-life care were often deferred until the day of the patient's death, despite the patient being hospitalized for 7 days or more. Finally, all discussions regarding end-of-life care occurred between the physicians and parents or guardians; the patients did not have the opportunity to confer with physicians about their end-of-life care and might not have been prepared for their death.
Most patients with heart disease were intubated, received highly advanced treatment, or died in the intensive care unit in our study. These findings are consistent with previous studies. 6-8 Some patients (25/66, 37.9%) died after prolonged cardiopulmonary resuscitation, despite the fact that survival rates are lower and neurological outcomes are poorer with longer cardiopulmonary resuscitation duration. 15 The frequencies of sedative drug and analgesic use at the end-of-life were lower than those reported in previous studies, including studies of general pediatric patients. This finding may reflect the invasiveness of the treatment for pediatric patients with heart disease in the current study. 6,16 There were 3 possible reasons for this reported high invasiveness of treatment. First, with the advancement of technology for surgery and medical treatment, pediatric cardiologists can use advanced medical equipment, such as extracorporeal membrane oxygenation and ventricular assist devices, to save the lives of pediatric patients. In some cases, cardiac transplantation is an option for pediatric patients with heart disease that is intractable to medical treatment or surgery. 17 Therefore, pediatric cardiologists are likely to be unfamiliar with the decision-making process of shifting from attempting to cure patients to performing palliative care. 18 Second, parents are more likely to prolong the lives of their children via invasive treatments, even when there is no chance that the child's life will be extended. They may believe that there are remaining treatment options or that their child will survive, or they may have a more positive view regarding the quality of life of their child despite physician opinions. 19 Third, parents might not have known that their child had little possibility of survival until death was near. 7 For more than half of the patients with documented discussions regarding end-of-life care, this discussion occurred immediately prior to death. This finding is consistent with that of a previous study. 7 Several studies have demonstrated that early integration of palliative care including end-of-life issues into the treatment plan is necessary to improve patients' quality of life and help patients prepare for their deaths. 20,21 However, there are several obstacles to early discussions regarding end-of-life care. First, some patients exhibit unpredictable courses and variable progression, which make it difficult for the physician to know when these patients will die and therefore, when the physician should start discussing the poor prognosis with the patient's family. 22 Second, a lack of in-depth conversation regarding the prognosis of heart disease can result in different understandings of the prognosis between patients or parents and physicians. Parents who care for pediatric patients with heart disease are likely to have more optimistic expectations regarding the prognosis of their children than physicians. 8, 18 Third, physicians may worry that some patients or parents misunderstand the concept of palliative and end-of-life care and believe palliative care is akin to abandonment of their children. 23 Finally, insufficient education regarding palliative care among pediatricians can make it difficult for them to understand when to initiate discussions regarding end-oflife care. 24 None of the patients in the present study had the chance to participate in a discussion regarding end-of-life care with their doctors, although some patients were adolescents and young adults. Most discussions regarding end-of-life care occurred between physicians and parents. Patients may have been too sick to participate in discussions regarding their endof-life care at this point. Physicians or parents may not have wanted to disclose the terminal illness to patients. In Asia, a child is considered a family member for whom the parents are responsible, and parents want to protect their child from highly negative information rather than considering the patient's autonomy. 25,26 However, adolescents and young adults can understand the concept of death, and some of these patients may be competent to make decisions regarding their lives. 27 More than half of adolescents with life threatening disease were reported being comfortable talking about their end-of-life issues. Indeed, not all decisions made by parents and physicians completely accord with the decisions of adolescent and young adult patients. 28 Adolescents and young adult patients should be given opportunities to participate in discussions of end-of-life care and express their wishes via early involvement of palliative care. To ensure optimal communication with adolescents and young adults regarding end-of-life care, physicians should take a gradual approach along with family support, considering the spiritual and cultural factors of the patient. 29 Educational programs, such as communication skills training, have been helpful for physicians to learn the skills required to deal with a challenging situation, including transitioning to palliative care and end of life. 30 Pediatric palliative care programs must be established to provide appropriate end-of-life care for pediatric patients with advanced heart disease. 9 Pediatric palliative care has evolved over the previous 2 decades such that more than half of the hospitals in the United States have pediatric palliative care programs; historically, pediatric patients with heart disease infrequently used palliative programs. 12,14 Pediatric palliative care teams can assist patients with complex congenital heart disease and their families by providing help with medical decision-making, advance care planning, and bereavement management. 31 A recent singlecenter study reported that pediatric palliative care teams are primarily involved in the goals of care, psychosocial support, symptom management, and advance care planning for patients with advanced heart disease and their families. 32 The present study had several limitations. First, as a retrospective study, only documented discussions regarding end-of-life care were collected; thus, discussions that had not been documented could not be identified. Second, this study included a small number of patients who died at a single tertiary referral hospital; thus, we could not identify patterns of end-oflife care among patients who died at home. Third, the subjective symptoms of the patients and the parents' preparedness for end-of-life were not explored; such factors could provide different perspectives of the death process in pediatric patients.
In conclusion, the present study is the first to explore the current status of end-of-life care pediatric patients with advanced heart disease outside of western countries. Our findings demonstrate that among most pediatric patients with advanced heart disease who died in our hospital, discussions with these patients regarding end-of-life care were postponed | 2020-03-05T10:11:08.362Z | 2020-03-04T00:00:00.000 | {
"year": 2020,
"sha1": "a077aa131d4a778624754d1fc3579b84dbc144b1",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3346/jkms.2020.35.e107",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a9fa35ac752934dfabb6062e83bc80c8e000a3ac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55819834 | pes2o/s2orc | v3-fos-license | Progress in Pathogen Detection by Whole-Genome Sequencing Progress in Pathogen Detection by Whole-Genome Sequencing
the proper antibiotics to be administered at the outset, reducing the chance of developing antibiotic resistance, an important consideration in modern antibiotic stewardship. Molecular methods have opened up new opportunities. They can detect pathogens that are not cultivatable and many can be performed quickly. Although many methods still focus on the analysis of single pathogens, encouraging progresses have been made in the development of techniques that can detect a large number of pathogens in a single assay.
the proper antibiotics to be administered at the outset, reducing the chance of developing antibiotic resistance, an important consideration in modern antibiotic stewardship.Molecular methods have opened up new opportunities.They can detect pathogens that are not cultivatable and many can be performed quickly.Although many methods still focus on the analysis of single pathogens, encouraging progresses have been made in the development of techniques that can detect a large number of pathogens in a single assay.
Microarrays provide one example.Arrays such as the Virochip 1 uses tens of thousands of short sequences of DNA to detect a large number of viruses.A drawback of this approach is that a microarray misses pathogens that it is not designed to look for.It also uses only part of the genome of a microbe for detection -using the wholegenome sequence of a microbe could improve sensitivity.To this end, whole-genome sequencing provides an attractive solution.With highthroughput next-generation sequencing technology, this technique can potentially detect many pathogens from a minimally processed metagenomic sample in a single assay.Although this approach has not yet been widely examined, encouraging results are emerging.In 2008, Nakamura and colleagues 2 demonstrated the feasibility of using this technique to detect Campylobacter in the stool sample of a patient.This bacterium was missed by several other traditional techniques that they used before.This approach did not assume what pathogens might be present in a sample.Instead, it unbiasedly searched a large database containing many pathogens for sequences that matched with the sample.
Later, Loman et al., 3 applied next-generation sequencing to analyze the stool samples of patients during the outbreak of Shigatoxigenic E. coli O104:H4 in Germany in 2011.With the benchtopsequencing platform Illumina MiSeq, they were able to obtain good coverage of the genome of this outbreak strain from the metagenomic stool samples.More recently, Chiu and coworkers have sped up the use of metagenomics to detect pathogens by developing the SURPI program. 4This program brings this approach closer to practical uses by significantly reducing the computational time required for diagnosis, which requires mapping many short DNA fragments from next-generation sequencing experiments to large databases of microbes.Chiu and co-workers have demonstrated the utility of this technique in identifying the causative agents of several illnesses, including neuroinvasive astrovirus infection, 5 Balamuthia mandrillaris encephalitis, 6 and neuroleptospirosis. 7e impact of this technique should increase further with the introduction of faster and portable sequencers such as the MinIon being developed by Oxford Nanopore TechnologiesTM.Greninger et al., 8 have already demonstrated that this platform can produce useful diagnostic results in even shorter time.Because this sequencer is small and portable that can be plugged into a laptop computer, it can potentially be used in remote areas without sending the data to remote high-performance computing facilities for processing if fast computer programs for diagnosis can be run directly on the laptop computer.Gontarz & Wong 9 made a step in this direction by using several strategies: I. Focus on checking microbes that are pathogenic such as those in the PATRIC database. 10.Develop compact genome representations that can fit into the smaller RAMs of portable computers.
III. Develop short-read aligners such as SRmapper 11 that requires less computer memory.
Thoughtful developments of compact genome representations have not only made it easier to check many pathogens in computers with smaller memories but can also improve the sensitivity of detection.Although it is easier to pick out a pathogen from sequencing experiments of metagenomic samples when its genome is covered to a large extent by sequencing reads, it is harder when the microbial load or the sequencing depth is low.Using marker genes, such as from MetaRef, 12 is not effective as these genes are not necessarily covered when only a small number of reads from a microbial genome is produced in a next-generation-sequencing experiment.Using the whole-genome sequence of a microbe improves the odds that reads originating from the microbe can be detected.However, some parts of the genomes are more useful for detection than the others.In particular, the parts that do not overlap with the genomes of other organisms known to be present in a specific type of samples-e.g., saliva -are most useful as the presence of a microbe can be deduced even when only a small number of reads are aligned to these regions.Using only the unique regions of the genomes of microbes could improve signal/ noise (S/N) ratio significantly.Consider an example when 50% of a reference genome is known to overlap with other known species, one may estimate the S/N ratio by 100%/50%=2.However, removing the overlapping regions gives a S/N ratio of 50%/ (A number close to zero), which could be several orders of magnitudes larger than two.
Editorial
Methods such as staining/microscopic examination and culturing have served us well for a long time and are still commonly used today for pathogen detection.However, these methods mostly focus on looking for one pathogen at a time and are not effective when a large number of potential pathogens need to be considered.They fall short when the proper diagnosis of a seriously ill patient needs to be achieved quickly to decide treatment options.They are also ineffective when it is necessary to identify the causative agent of a disease outbreak quickly to suggest strategies to control the outbreak.Identifying the right causative bacterial pathogen early also allows Gontarz and Wong 9 showed that the presence of Mycobacterium tuberculosis and bacteria in the Human Oral Microbiome Database from metagenomic samples could be deduced when only 0.2% of their unique genomes were covered.It will be interesting to see whether such a compact representation of genome can be developed for most pathogens in the PATRIC database.Although practical tools for medical diagnostics require stringent validation for approval by the US Food and Drug Administration, the outlook is bright for using whole-genome sequencing of metagenomic samples for this purpose. | 2019-03-31T13:45:49.695Z | 2016-02-01T00:00:00.000 | {
"year": 2016,
"sha1": "4001c7691cad96d79bdcfefc9b905b2d6a572c29",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/MOJPB/MOJPB-03-00077.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5b31fff042fb88e3ba83571be6a7eea037216846",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
210473015 | pes2o/s2orc | v3-fos-license | Spin splitting with persistent spin textures induced by the line defect in 1T-phase of monolayer transition metal dichalcogenides
The spin splitting driven by spin-orbit coupling in monolayer (ML) transition metal dichalcogenides (TMDCs) family has been widely studied only for the 1H-phase structure, while it is not profound for the 1T-phase structure due to the centrosymmetric of the crystal. Based on first-principles calculations, we show that significant spin splitting can be induced in the ML 1T-TMDCs by introducing the line defect. Taking the ML PtSe2 as a representative example, we considered the most stable form of the line defects, namely Se-vacancy line defect (Se-VLD). We find that large spin splitting is observed in the defect states of the Se-VLD, exhibiting a highly unidirectional spin configuration in the momentum space. This peculiar spin configuration may yield the so-called persistent spin textures (PST), a specific spin structure resulting in protection against spin-decoherence and supporting an extraordinarily long spin lifetime. Moreover, by using k.p perturbation theory supplemented with symmetry analysis, we clarified that the emerging of the spin splitting maintaining the PST in the defect states is originated from the inversion symmetry breaking together with one-dimensional nature of the Se-VLD engineered ML PtSe2. Our findings pave a possible way to induce the significant spin splitting in the ML 1T-TMDCs, which could be highly important for designing spintronic devices.
I. INTRODUCTION
Since the experimental isolation of graphene in 2004 1 , significant research efforts have been devoted to the investigation of two-dimensional (2D) materials with atomically-thin crystals 2 . Here, growing research attention has been focused on the monolayer (ML) transition metal dichalcogenides (TMDCs) family due to a high possibility to be used in future nanoelectronic devices 3 . Most of the ML TMDCs families have graphene-like hexagonal crystal structures where transition metal atoms (M ) are sandwiched between layers of chalcogen atoms (X) with M X 2 stoichiometry. However, due to the local coordination of the transition metal atoms, they admit two different stable forms in the ground state, namely a 1H-phase structure having trigonal prismatic symmetry, and a 1T-phase structure that consists of distorted octahedral symmetry 4 . The different coordination environments in the ML TMDCs lead to distinct crystal field splitting of the d-like bands. However, depending on the transition metal atom species, the ML TMDCs display metallic, semiconducting, or insulator behavior 5 . Therefore, various physical properties such as tunability of bandgap 6,7 , high carrier mobility 8,9 , and superior surface reactivity 10 are established, evidencing that the ML TMDCs is an ideal platform for next-generation technologies.
Of special interest is the promising application of the ML TMDCs for spintronics devices due to the strong spin-orbit coupling (SOC), which is particularly noticeable in the ML 1H-TMDCs such as ML (Mo/W)X 2 (X = S, Se) 6,11,12 . Here, the lack of the crystal inversion symmetry together with the strong SOC in the 5d orbitals of transition metal atoms leads to the large spin splitting in the electronic band structures. This effect is conspicuously apparent in the valence band maximum exhibiting the spin splittings ranging between 150 meV (ML MoS 2 ) up to 400 meV (WSe 2 ) 6,11,12 . Due to the well separated valleys at the K and K points in the hexagonal Brillouin zone, this splitting gives rise to the so-called spinvalley coupling 13 , which is responsible for the appearance of valley-contrasting effects such as spin Hall effect 14 , spin-dependent selection rule for optical transitions 15 , and magnetoelectric effect in the ML TMDCs 16 . Furthermore, an electrically controllable spin splitting and spin polarization in the ML 1H-TMDCs has been reported 17 , making them suitable for spin-field effect transistor.
Compared to the well-studied ML 1H-TMDCs, the effect of the SOC in the ML 1T-TMDCs is equally interesting. Especially the ML PtSe 2 has attracted much scientific at-tention since it has been successfully synthesized by a direct selenization at the Pt(111) substrate 8,18 . Moreover, this material has been predicted to exhibit the largest electron mobility among the widely studied ML TMDCs 8,9 . Recently, Yao et. al. 18 , by using spinand angle-resolved photoemission spectroscopy (spin-ARPES), reported spin-layer locking phenomena in the ML PtSe 2 ; that is, the spin-polarized states are degenerated in energy but spatially locked into two sublayers forming an inversion partner. Similar phenomena has also been theoretically predicted on other ML 1T-TMDCs such as ML (Zr/Hf)X 2 (X = S, Se) 19 . This phenomena, which is a manifestation of the global centrosymmetric of the crystal and the local dipole-induced Rashba SOC effect, may provide a disadvantage for spintronics applications. Since the ML 1T-TMDCs possesses superior transport properties due to the high electron mobility 8,9 , lifting the spin degeneracy in the ML 1T-TMDCs could be the important key for their realization in the spintronics devices. Therefore, finding a feasible method to induce the significant spin splitting in the ML 1T-TMDCs is highly desirable.
In this paper, by using density-functional theory (DFT) calculations, we show that the significant spin splitting can be induced in the ML 1T-TMDCs by introducing the line defect.
By using the ML PtSe 2 as a representative example, we investigate the most stable form of the line defects, namely Se-vacancy line defect (Se-VLD). We find that a sizable spin splitting is observed in the defect states of the Se-VLD, exhibiting a highly unidirectional spin configuration in the momentum space. This peculiar spin configuration gives rise to the so-called persistent spin textures (PST) 20,21 , a specific spin structure that protects the spin from decoherence and induces an extremely long spin lifetime 22,23 . Moreover, by using k · p perturbation theory supplemented with symmetry analysis, we clarified that the emerging of the spin splitting maintaining the PST in the defect states is originated from the inversion symmetry breaking and one-dimensional (1D) nature of the Se-VLD engineered ML PtSe 2 .
Finally, a possible application of the present system for spintronics will be discussed.
II. COMPUTATIONAL DETAILS
We performed first-principles electronic structure calculations based on the DFT within the generalized gradient approximation (GGA) 24 implemented in the OpenMX code 25 . Here, we adopted norm-conserving pseudopotentials 26 with an energy cutoff of 350 Ry for charge density. The wave functions are expanded by the linear combination of multiple pseu-doatomic orbitals (LCPAOs) generated using a confinement scheme 27,28 . The orbitals are specified by Pt7.0-s 2 p 2 d 2 and Se9.0-s 2 p 2 d 1 , which means that the cutoff radii are 7.0 and 9.0 Bohr for the Pt and Se atoms, respectively, in the confinement scheme 27,28 . For the Pt atom, two primitive orbitals expand the s, p, and d orbitals, while, for the Se atom, two primitive orbitals expand the s and p orbitals, and one primitive orbital expands d orbital.
The SOC was included in the DFT calculations by using j-dependent pseudopotentials 29 .
The spin textures in the momentum space were calculated using the spin density matrix of the spinor wave functions obtained from the DFT calculations as we applied recently on various 2D materials 30,31 .
To model the VLD in the ML 1T-TMDCs, we considered the ML PtSe 2 as a representative example. Here, we constructed a supercell of the pristine ML PtSe 2 from the minimum rectangular cell [ Fig. 1(a)], where the optimized lattice parameters are obtained from the primitive hexagonal cell. As a consequence, the folding cell from the hexagonal to rectangular cells in the FBZ is expected as shown in Fig. 1(b)-(c). Here, we used the axes system where the ML is chosen to sit on the x − y plane, where the x (y) axis is taken to be parallel to the zigzag (armchair) direction. We considered two different configurations of the VLD, namely In our DFT calculations, we used a periodic slab where a sufficiently large vacuum layer (20Å) is used to avoid interaction between adjacent layers. The 3 × 12 × 1 k-point mesh was used, and the geometries were fully relaxed until the force acting on each atom was less than 1 meV/Å. To confirm energetic stability of the VLD, we calculate vacancy formation energy (E f ) through the following relation 32 : In Eq. (1), E VLD is the total energy of the VLD, E Pristine is the total energy of the pristine system, n i is the number of atom being removed from the pristine system, and µ i is the chemical potential of the removed atoms corresponding to the chemical environment mirror symmetry M yz are indicated by the red point and lines.
surrounding the system. Here, µ i obtains the following requirements: Under Se-rich condition, µ Se is the energy of the Se atom in the bulk phase (hexagonal Se, µ Se = 1 3 E Se−hex ) which corresponds to the lower limit on Pt, µ P t = E P tSe 2 − 2E Se , where E P tSe 2 is the total energy of the ML PtSe 2 in the primitive unit cell. On the other hand, in the case of the Pt-rich condition, µ P t is associated with the energy of the Pt atom in the bulk phase (fcc Pt, µ P t = 1 4 E P t−f cc ) corresponding to the lower limit on Se, µ Se = 1 2 (E P tSe 2 −E P t ).
III. RESULT AND DISCUSSION
First, we briefly discuss the structural symmetry and electronic properties of the pristine Defective systems To assess the stability of the proposed VLDs, we calculate their formation energy E f . As shown in Table 1, we find that the calculated E f of the Se-VLD is much smaller than that of the Pt-VLD under the Se-rich and Pt-rich conditions, indicating that the Se-VLD is easily formed in the ML PtSe 2 . In contrast, the formation of the Pt-VLD is highly unfavorable due to the required energy. Since the Pt atom is covalently bonded to the six neighboring Se atoms, removing the Pt atom in the supercell is stabilized by increasing the E f . For a comparison, we also calculate E f of a Se-single vacancy defect (Se-SVD) and a Pt-single vacancy defect (Pt-SVD) by using the same supercell model. We find that the calculated E f of the Se-VLD is comparable to that of the Se-SVD, but is much smaller than that of the Pt-SVD [see Table 1], suggesting that the formation of the Se-VLD is energetically accessible. The found stability of the Se-VLD is consistent with the previous report that where k and σ are the electron's wavevector and spin vector, respectively, and α g k = det(g)gα k , where g is the element of the point group characterizing the small group wave vector G Q of the high symmetry point Q in the FBZ. In Eq. (4), we have implicitly assumed that all of the orbital characters at the Q point is invariant under the symmetry transfor-mation in the G Q . Therefore, the spin vector, σ, can be considered as a pseudovector. By transforming k and σ as polar and axial vectors, respectively, and sorting out the components of these vectors according to irreducible representation (IR) of G Q , we can decompose again their direct product into the IR. According to the Eq. (4), only the total symmetric IR from this decomposition contributes toĤ SOC . Therefore, by using the corresponding tables of the point group, one can easily construct the possible term ofĤ SOC . namely identity operation E : (x, y, z) → (x, y, z) and mirror symmetry operation M yz : (x, y, z) → (−x, y, z)). Two one-dimensional irreducible representations (IR) are shown. Table II], we short out the component of k and σ according to the IR as A : k y , k z , σ x and A": k x , σ y , σ z . Moreover, from the corresponding table of direct products [Table III], we obtain the third order terms of k as A : k 3 y , k 3 z , k 2 y k z , k 2 x k z , k 2 z k y and A": k 3 x , k 2 y k x , k y k x k z , k 2 z k x . However, due to the 1D nature of the defect, all the terms containing k x and k z should vanish. Therefore, according to the table of direct products, the combination of the first order of k as well as the third order of k 3 with σ that belongs to A IR are k y σ x and k 3 y σ x . This combinations can be generalized for the higher odd nth-order of k, where the only nonzero term is k n y σ x . By collecting all these terms, we obtainĤ SOC near the Γ point as where α n is the odd nth-order SOC parameter. of the wave vector, k y , and the spin vector, σ x . Therefore,Ĥ SOC preserves an ideal 1D SOC, as recently predicted on the 1D topological defect induced by screw dislocation in semiconductors 46 . Accordingly, the spin splitting is expected to occur only along the k y direction (Γ − Y line), which is in agreement with the band dispersion in Fig. 3 x-direction whatever the order of k y is. Therefore, the PST maintains even at the larger k y , giving advantages to the significantly higher degree of spin coherency than that of the 2D
RD-SOC.
For a quantitative analysis of the predicted PST in the Se-VLD engineered ML PtSe 2 , we calculate the SOC parameter α 1 associated with the linear term ofĤ SOC given in Eq. (5), and compare the result with a few selected PSH materials. By fitting the DFT bands of the defect states along the Γ − Y line, we find that the calculated α 1 for the DS-1 state is 1.14 eVÅ which is larger than that for the DS-2 state (0.20 eVÅ) and DS-3 state (0.28 eVÅ) [see Table IV]. Remarkably, the associated SOC parameters found in the defect states of the Se-VLD engineered ML PtSe 2 are sufficient to support the room temperature spintronics functionality.
The observed PST in our defective system may result in a spatially periodic mode of the spin polarization emerging in the crystal known as a persistent spin helix (PSH) 21 . The corresponding spin-wave mode is characterized by the wavelength of λ = (π 2 )/(m * Γ−Y α 1 ) 21 , where m * Γ−Y is the carrier effective mass along the Γ − Y direction. By fitting the band dispersion in the defect states along the Γ − Y line, we find that the calculated m * Γ−Y is -0.21m 0 for the DS-1 state, while it is found to be 0.25m 0 and 0.19m 0 for the DS-2 and DS-3 states, respectively, where m 0 is the free electron mass. The negative (positive) value of m * Γ−Y characterizes the effective mass of the hole (electron) carriers in the occupied (unoccupied) defect states. The resulting wavelength λ is 6.33 nm for the DS-1 state, which is one order smaller than that for the DS-2 (29.47 nm) and DS-3 (28.12 nm) states [see Table IV]. Specifically, the calculated λ for DS-1 state is comparable with that reported on the bulk BiInO 3 51 and ML group IV monochalcogenide 30,31 [see Table IV], rendering that the present system is promising for nanoscale spintronics devices.
Thus far, we have found that the PST is achieved in the defect states of the Se-VLD engineered ML PtSe 2 . In particular, the PST with the largest strength of the spin splitting (α 1 = 1.14 eVÅ) is observed in the DS-1 state, indicating that the PSH will be formed when the hole carriers are optically injected into the occupied defect state of the Se-VLD engineered ML PtSe 2 . Since the wavelength of the PSH in the DS-1 state (λ = 6.33 nm) is substantially small, it is possible to resolve the features down to the tens-nm scale with subns time resolution by using near-filled scanning Kerr microscopy 57 . In addition, due to the sizeable spin splitting in the DS-1 state, the two states with opposite spin orientation at k y and -k y are expected to induce large Berry curvature with opposite sign. By using polarized optical excitation technique, it is possible to create different hole population between these two states. Therefore, a charqe Hall current can be measured similar to the valley Hall effect recently discovered in TMDCs 58 . As such, our findings of the large spin splitting in the Se-VLD engineered ML PtSe 2 maintaining the PST is useful for spintronic applications. which is mainly derived from the strong hybridization between the in-plane p − d orbitals.
Importantly, we have observed a highly unidirectional spin configuration in the spin split defect states, giving rise to the so-called persistent spin textures (PST) 20,21 , which protects the spin from decoherence and induces an extraordinarily long spin lifetime. Moreover, by using k · p perturbation theory supplemented with symmetry analysis, we have demonstrated that the emerging of the spin splitting maintaing the PST in the defect states is subjected to the inversion symmetry breaking together with the 1D nature of the Se-VLD engineered ML PtSe 2 . Recently, the defective ML 1T-TMDCs has been extensively studied 33,34,37,59 . Our study clarifies that the line defect plays an important role in the spin-splitting properties of the ML 1T-TMDCs, which could be highly important for designing spintronic devices.
We emphasized here that our proposed approach for inducing the large spin splitting by using the line defects is not only limited on the ML PtSe 2 but also can be extendable to other ML 1T-TMDCs systems such as the ML PdX 2 (X= S, Se, Te) 59 , ML SnX 2 60 , ML ReX 2 61 , and ML (Zr/Hf)X 2 19 , where the structural and electronic structure properties are similar. Recently, manipulation of the electronic properties of these particular materials by introducing the defect has been reported 59 . Therefore, it is expected that our predictions will stimulate further theoretical and experimental efforts in the exploration of the spinsplitting properties of the ML TMDCs, broadening the range of the 2D materials for future spintronic applications. | 2020-01-15T02:01:00.117Z | 2020-01-14T00:00:00.000 | {
"year": 2020,
"sha1": "7bbe4773ec6f4c8b24b0239fa3a1033503a1ebb0",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2001.04613",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d7136b8cea234a43a3acc7b4abcae30ce8585b21",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
191151154 | pes2o/s2orc | v3-fos-license | Extraction of dyes contained in glow sticks using liquid CO2
ABSTRACT Separation of glow stick dyes adhered to a cotton swab using liquid CO2 provides an engaging demonstration of several chemical concepts including polarity, kinetics, chemiluminescence, and the importance of CO2 a green solvent. The simple protocol allows access to a broad range of students. Differential polarities of the glow stick dyes allow certain dyes to be preferentially dissolved in liquid CO2, leaving other dyes adhered onto cotton. Both TCPO (bis(2,4,6-trichlorophenyl)oxalate) and H2O2, which together provide the necessary reaction for chemiluminescence, are present in both the extracted liquid and cotton upon dissipation of CO2. Thus, it is possible to compare the emission spectrum of the extracted fluid to that of the original glow stick and the residue left on the cotton swab. Emission peaks resulting from the presence of polar dyes in the original glow stick and on the cotton are routinely observed to be missing in the extracts. GRAPHICAL ABSTRACT
Introduction
Both liquid CO 2 and supercritical CO 2 can be used as a substitute for volatile organic solvents in extraction and chromatographic methods (1)(2)(3)(4)(5)(6)(7). CO 2 can be obtained from and returned to the air in a cyclic fashion, it is neither toxic nor flammable, and its use as a solvent avoids the production of large quantities of waste solvent (7). In fact, in a survey of academic researchers, CO 2 was identified as the solvent mostly likely to reduce environmental damage (8). Thus, the use of CO 2 as a solvent in separation science can provide avenues to introduce students to the importance of green chemistry and sustainability. Indeed, several experiments that involve the use of liquid CO 2 during extractions and chromatographic separations have appeared in the chemical education literature. For example, the use of liquid CO 2 to extract various organic compounds from orange peels (1,2), fennel seeds (3), cloves (4), and essential oils (5) has been previously reported.
Chemiluminescence is another topic that has been extensively covered in the chemistry curriculum (9)(10)(11)(12)(13)(14). In fact, chemiluminescent reactions have even been referred to as the most "exocharmic" reactions known, based on the high interest they generate in students and other observers (9). Chemiluminescent reactions have been used to introduce students to topics as varied as fluorescence, kinetics, thermodynamics, chromatography, catalysis, organic synthesis, and principles of green chemistry (9)(10)(11)(12)(13)(14). Many of these experiments have focused on the chemistry of glow sticks, which provide very easily obtained sources of chemiluminescent reactions. The use of such simple materials in experiments is attractive for many chemical educators who work at institutions that do not have large budgets or access to expensive equipment. In a previous publication, it was demonstrated how materials contained in an activated glow stick could be separated using column chromatography (9). In some cases it was possible for students to observe distinct, actively glowing bands on the column as the separation took place. This experiment was light-heartedly termed "glowmatography" on account of its combination of glow sticks and chromatography.
In this letter, we build upon the body of previously reported work to describe how liquid CO 2 can be used to separate various dyes contained in glow sticks using materials as simple as dry ice, plastic centrifuge tubes (1), glow sticks, and cotton swabs. The development of laboratory exercises that use liquid CO 2 rather than supercritical CO 2 is important, because many chemical educators and their students do not have the resources or infrastructure necessary to carry out experiments using supercritical CO 2 . However, extractions involving liquid CO 2 are quite easy to perform (1)(2)(3)(4)(5). Thus, laboratory experiments that incorporate liquid CO 2 as a solvent allow a wider audience of students to learn about the potential benefits of CO 2 as a solvent. Indeed, the experiments reported in this letter introduce students to the same chemical principles taught in the original glowmatography experiment, but without the use of hexane solvent and silica gel stationary phase. In addition, over ten times less glow stick material is required when using this new extraction method. Thus, this new method described represents a substantial move toward a more environmentally friendly study of separation of dyes contained in glow sticks.
In the first step of the reaction mechanism, TCPO (11) reacts with hydrogen peroxide to form a reactive dioxetanedione intermediate, C 2 O 4 . This intermediate collides with a fluorescent dye molecule contained in the glow stick, and upon doing so promotes an electron in the dye to an excited state (dye*). As the electron relaxes back to the ground state, a photon of light is concomitantly emitted. Because the wavelength of the photon emitted depends upon the chemical structure of the dye, various colors of light emission are obtained by placing different dyes or mixtures of dyes within glow sticks (9). For example, various rhodamine dyes emit red or orange light, while bis(phenylethynyl)anthracene (BPEA) dyes emit yellow, green, or blue light. In addition to these differences in colors of light emitted, rhodamine dyes tend to be more polar than BPEA dyes (9). The experiments reported herein allow students to use liquid CO 2 , a non-polar solvent, to extract actively glowing non-polar dyes from an activated glow stick mixture. Direct visualization of the extraction process is possible because it is easy to recognize that the extracted material glows a different color than the original mixture.
Materials and methods
A glow stick (both Supreme Glow and Super Glow brands work, available at https://www.partycity.com/) is activated by breaking the inner glass ampoule. The glow stick is cut open using a PVC pipe cutter. A cotton swab with a plastic handle is cut in half, and the cotton end is dipped into the activated glow stick mixture. Excess liquid is dabbed from the cotton. The swab is placed cotton side up into a Corning 15 mL polypropylene centrifuge tube (Fisher Scientific, manufacturing number 430052). Dry ice is powdered with a hammer, and then added to the centrifuge tube until it is roughly ¾ full. The cap is sealed tight and the assembly is immediately placed into hot water (60-70°C) contained in a 355 mL plastic soda bottle with its top removed. As the dry ice sublimes, pressures sufficient to liquefy CO 2 build inside the centrifuge tube, melting usually occurs within 30 s. Even so, pressure does vent from the tube as evidenced from considerable hissing noise observed as CO 2 gas escapes. On occasion, the dry ice does not liquefy. If not, it is helpful to tighten the seal of the cap on the centrifuge tube. Non-polar dyes contained in the glow stick mixture are preferentially dissolved into the liquid CO 2 that rinses through the cotton, and these dyes collect in the bottom of the centrifuge tube (Figure 2). If available, an emission spectrometer is used to collect the emission spectra of the glowing extract and the glowing residue remaining on the cotton. In addition, the emission spectrum of a second cotton swab that has been dipped into the activated glow stick mixturebut that has not gone through the CO 2 extraction processis also recorded. All processes are carried out by students except the collection of emission spectra. Emission spectra were carried out in the dark using an Ocean Optics USB4000 spectrometer with an optical resolution of 1.0 nm FWHM; integration time was set at 5 s. Light emitted from extracts and material adhered to cotton swabs were collected via a fiber optic cable attached to the spectrometer. To collect spectra of material on cotton swabs, the end of the fiber optic was held ∼3 mm above the surface of the cotton swab. To collect spectra of residues in a centrifuge tube, the fiber optic was inserted into the centrifuge tube and held ∼3 mm above the surface of the extract (after removal of the cotton swab).
Results
Fluid from an activated, orange light stick is applied to a cotton swab and placed in a centrifuge tube as described in the Materials and Methods. This is an elegant manner of providing polar material that remains suspended above the bottom of the centrifuge tube. The stalk of the cotton swab is made of polypropylene, analysed by FTIR, and therefore neither of the dyes absorb onto the stalk, making analysis of both extract at the bottom of the centrifuge tube and the residue remaining on the cotton tip quite easy and convenient. The centrifuge tube is filled with powdered dry ice and sealed. When the liquid CO 2 forms, it drips through the cotton swab. The glowing stops due to the decrease in temperature, and the liquid CO 2 takes on a faint orange-yellow color from dyes that dissolve in the liquid CO 2 (Figure 2, left). Dyes extracted from the cotton collect on the bottom of the centrifuge tube as the CO 2 escapes. After all the CO 2 escapes, the cap is removed and the contents of the centrifuge tube are allowed to warm up. Upon reaching about room temperature, both the residue on the cotton and the extract begin to glow again (Figure 2, right). The light emitted from the cotton swab is a noticeably different color than the light emitted from the extract (Figure 2, right), due to differential solubility of dyes in the liquid CO 2 . While the emission spectrum of the glowing residue adhered to the cotton (Figure 3, thin line) is somewhat dimmer than fresh glowing glow stick mixture adsorbed onto a cotton swab (Figure 3, bold line), the spectra are otherwise quite similar. Both spectra show peaks at about 550, 620, and 675 nm. On the other hand, the peaks at 620 and 675 nm are clearly diminished in the emission spectrum of the extracted material on the bottom of the centrifuge tube ( Figure 3, dotted line). These results indicate that liquid CO 2 tends to extract yellow-green emitting dye (s), but not red emitting dye(s) from the cotton. Given the non-polar nature of liquid CO 2 and the polar nature of the cellulose in the cotton, it is likely that the redorange emitting dye(s) are more polar than the yellowgreen emitting dye(s). When the experiment is repeated using a pink light stick, the emission spectra of fresh glow stick mixture (Figure 4, bold line) and residue adhered to (Figure 4, dotted line). These results suggest that liquid CO 2 extracts a substantial amount of non-polar, blue emitting dye(s) from the pink glow stick fluid, leaving red emitting dye(s) adsorbed on the cotton. The spectra are consistent with the more pronounced blue color observed in the extract than on the residue adhered to the cotton ( Figure 5). This extraction method can be successfully applied using glow sticks of other colors, but orange and pink glow sticks tend to provide the greatest contrast in color between the extract and residue adhered to the cotton. Results from experiments using glow sticks of other colors, as well as a student laboratory sheet, can be found in the supporting information.
Discussion
Using liquid CO 2 to extract dyes from activated glow stick mixtures provides a simple, colorful, and motivating way to introduce students to the importance of sustainable solvents. The process is easy to carry out, allowing students at a wide variety of institutions to participate in these experiments. Nevertheless, the process allows for the discussion of several chemical topics including chemiluminescence, molecular polarity, solubility, kinetic mechanisms, emission spectroscopy, and intermolecular forces.
During the extraction process, it is interesting to note that the glow stick material adsorbed on the cotton loses its glow. This is not surprising, given that it is well known that the kinetics of the chemical reaction in glow sticks substantially decreases with temperature (14,15). However, it is unexpected that both the residue on the cotton swab and the extract begin glowing again after being warmed back to room temperature following the extraction process (Figure 2, right and Figure 5). In order for chemiluminescence to occur, TCPO, H 2 O 2 , and dye(s) must be present (Figure 1). Therefore, both the cotton residue and extract contain TCPO and H 2 O 2 . The presence of TCPO in the extract is likely due to the fact that both TCPO and liquid CO 2 are non-polar. The presence of H 2 O 2 on the cotton is probably due to hydrogen bonding and other dipole-dipole interactions between the peroxide and cotton cellulose. That some H 2 O 2 ends up in the extract implies that H 2 O 2 dissolves in liquid CO 2 . This may occur to some extent if O-H groups on the peroxide act as hydrogen bond donors, while the oxygen atoms on CO 2 act as hydrogen bond acceptors. Similarly, TCPO adhered to the residue could occur by O-H groups on cellulose and oxygen atoms on TCPO acting as hydrogen bond donors and acceptors, respectively. In any event, it is fortuitous that these compounds are present in both the cotton residue and extract, allowing both to recover easily visible chemiluminescence upon warming.
In conclusion, the experiments reported here provide an enlightening and colorful way for students to gain experience using CO 2 as an environmentally friendly solvent. Dyes in glow stick mixtures can be extracted somewhat selectively using simple and inexpensive materials. The protocol is straightforward enough that it can be carried out by non-science majors. On the other hand, interpreting the results of these experiments requires a knowledge of a rich assortment of physicochemical topics. As such, we have used this experiment in settings that range from outreach events to undergraduate research projects. | 2019-06-14T14:20:47.955Z | 2019-04-03T00:00:00.000 | {
"year": 2019,
"sha1": "e49037eec742012dc3277b14bb485a999bb56ea3",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/17518253.2019.1609594?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "1e3fb5c80b0854765e00c761ecc51fa87bab8490",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
118402580 | pes2o/s2orc | v3-fos-license | 50 M31 black hole candidates identified by Chandra and XMM-Newton
Over the last ~5 years we have identified ~35 black hole candidates (BHCs) in M31 from their X-ray spectra. Our BHCs exhibited 0.3--10 keV spectra consistent with the X-ray binary (XB) hard state, at luminosities that are above the upper limit for neutron star (NS) XBs. When our BHC spectra were modeled with a disk blackbody + blackbody model, for comparison with bright NS XBs, we found that the BHCs inhabited a different parameter space to the NS XBs. However, BH XBs may also exhibit a thermally dominated (TD) state that has never been seen in NS XBs; this TD state is most often observed in X-ray transients. We examined the ~50 X-ray transients in our Chandra survey of M31, and found 13 with spectra suitable for analysis. We also examined 2 BHCs outside the field of view of our survey, in the globular clusters B045 and B375. We have 42 strong BHCs, and 8 plausible BHCs that may benefit from further observation. Of our 15 BHCs in globular clusters, 12 differ from NS spectra by>5 sigma. Due to improvements in our analysis, we have upgraded 10 previously identified plausible BHCs to strong BHCs. The mean maximum duty cycle of the 33 X-ray transients within 6' of M31* was 0.13; we estimate that>40% of the XBs in this region contain BH accretors. Remarkably, we estimate that BHCs contribute>90% of those XBs>1 E+38 erg/s.
INTRODUCTION
In recent years, we have identified a number of black hole candidates (BHCs) in M31 from their Xray spectra from XMM-Newton or Chandra, using various techniques to exclude active galactic nuclei (AGN) that may be spectrally similar (Barnard et al. 2008; Barnard & Kolb 2009;Barnard et al. 2011aBarnard et al. ,b, 2012Barnard et al. , 2013bBarnard et al. , 2014a. We have recently discovered a method of quantifying the strength of our BHC identifications (Barnard et al. 2013b(Barnard et al. , 2014b) that involves using a double thermal emission model (disk blackbdody + blackbody) to compare our BHC spectra with the spectra of bright Galactic neutron star (NS) X-ray binaries (XBs).
The Galactic NS low mass X-ray binaries (LMXBs) were long thought to be separated into the highly luminous Z sources and the lower-luminosity atoll sources; the Z sources were further split into those that resembled Cygnus X-2, and those that resembled Scorpius X-1 (Hasinger & van der Klis 1989). Muno et al. (2002) showed that the two populations exhibited dramatically different variability, with Z source luminosities varying by a factor of a few while their spectra evolved over timescales of a few days, while atoll source luminosities varied by 1-2 orders of magnitude during spectral evolution over several months.
However, three recent Galactic X-ray transients have exhibited the full range of NS LMXB behavior, going from Cyg-like to Sco-like then atoll behavior as their luminosities decreased: XTE J1701−462 , IGR J17480-2446 (e.g Chakraborty et al. 2011), and MAXI J00556−332 (e.g. Sugizaki et al. 2013). Therefore, it is clear that NS LMXB behavior is governed by the accretion rate (which translates into luminosity). Lin et al. (2009) examined ∼900 RXTE observations of XTE J1701−462, carefully subdividing each observation so that they could study the spectral evolution in detail. They fitted each of the thousands of spectra with a double thermal model (disk blackbody + blackbody), that they developed when they examined the spectral evolution of two transient atoll LMXBs (Lin et al. 2007). They found that their double thermal model was successful, except for two types of spectra: hard state spectra, which are exhibited by NS and BH LMXBs at relatively low Eddington fractions and dominated by Comptonization (van der Klis 1994), and spectra from the Z source "horizontal branch" (Hasinger & van der Klis 1989), which required a Comptonized component in addition to the two thermal components. Lin et al. (2012) also obtained very similar results from their analysis of the the Sco-like Z source GX17+2. The work of Lin et al. (2007Lin et al. ( , 2009Lin et al. ( , 2012 covers the full range of NS LMXB behavior, which they modeled in a consistent way. Furthermore, Lin et al. (2010) successfully applied their model to Beppo-SAX and Suzaku spectra of the persistently bright NS XB 4U 1705−44; the energy ranges were 1-150 keV and 1.2-40 keV respectively, meaning that the usefulness of the double thermal model is not confined to the RXTE pass band.
The hard state is observed in BH and NS LMXBs (van der Klis 1994) only at luminosities 0.1 L Edd , where L Edd is the Eddington luminosity (Gladstone et al. 2007;Tang et al. 2011). We have identified 36 BHCs that exhibit apparent hard state spectra at 0.3-10 keV luminosities too high for NS LMXBs ( 3×10 37 erg s −1 , see Barnard et al. 2013bBarnard et al. , 2014a, and references within). When we plotted disk blackbody tem-perature vs. 2.0-10 keV disk blackbody luminosity for our BHCs, we found that none of our BHCs resided in the region occupied by NS LMXBs (gleaned from the analysis of thousands of spectra by Lin et al. 2007Lin et al. , 2009Lin et al. , 2012, although some were consistent within 3σ (Barnard et al. 2013b(Barnard et al. , 2014a. We classified those BHCs ≥3σ away from the NS LMXB region as strong BHCs, and those within 3σ as plausible BHCs; for some BHCs, the double thermal model was unconstrained, and we labeled these as plausible BHCs also. BH XBs also exhibit a thermally dominated state that is never seen in NS XBs (Done & Gierliński 2003). The thermally dominated state is most often observed in X-ray transients, where ∼1 keV disk blackbody emission contributes >75% of the 2-20 keV flux (Remillard & McClintock 2006). We have been monitoring the central region of M31 for the last 13 years, averaging ∼1 observation per month; we have a total of 175 Chandra observations including our monitoring survey, deeper observations of M31* and public data from other programs. We have identified ∼50 transient Xray sources in our Chandra observations (Barnard et al. 2014a).
In this work we examine those M31 transients with spectra consistent with the thermally dominated state, and compare double thermal fits to these spectra with the NS LMXBs in order to expand our BHC sample. We apply an improved BHC classification method to all of our BHCs, with the hope of upgrading some of the plausible BHCs to strong BHCs.
We also examine two BHCs in globular clusters (GCs) outside of the area monitored by our Chandra survey. The first is located in the GC B045 (also known as Bo 45), following the naming convention of the Revised Bologna Catalogue v. 3.4 (Galleti et al. 2004(Galleti et al. , 2006(Galleti et al. , 2007(Galleti et al. , 2009. Barnard et al. (2008) identified the X-ray source as a BHC because its hard spectrum and high variability indicated that it was in the canonical BH hard state, but the 0.3-10 keV luminosity exceeded the Eddington limit for a 1.4 M ⊙ NS. The second BHC resides in the GC B375 (Bo 375), and was identified as a BHC by Di Stefano et al. (2002). In Barnard et al. (2008) we mistakenly said that the spectrum (well described by a 0.90±0.10 keV blackbody and a power law with photon index 1.73±0.18) was typical of a bright NS XB; however, bright NS XBs fitted with such models yield considerably higher blackbody temperatures (e.g. ∼2 keV for Sco X-1, Barnard et al. 2003a).
OBSERVATIONS AND DATA ANALYSIS
An overview of our survey of 528 M31 X-ray sources in 174 Chandra observations is presented in Barnard et al. (2014a). We refer the reader to that paper for the details of the analysis. In this work, we concentrated on the 112 ACIS observations in our survey because there is no way to extract reliable spectra from the 62 HRC observations. For some BHCs, we also examined XMM-Newton observations following the procedures outlined in Barnard et al. (2013b). Chandra observations were analyzed with CIAO v4.5 while XMM-Newton observations were analyzed with SAS ver. 13.0; X-ray spectra were modeled with XSPEC v12.8.
The Chandra observations are susceptable to pile-up (2 or more photons arriving in the same detection cell within a particular exposure). Piled-up events can either result in a single photon with an energy equivalent to the sum of the energies of the two real events, or the event can be rejected because it doesn't look real (see e.g. Davis 2001). To estimate the degree of pile-up, we created a natively binned image of each X-ray source with no filtration, and found the highest number of counts in a 3×3 pixel area (the size of a Chandra ACIS detection cell); from this we obtained the number of counts per frame, n. The pileup fraction, f p , is then given by: f p ≃ n/2 − (1/12) * n 2 according to ACIS documentation.
For this study, we only considered transients with >200 counts in their X-ray spectra for at least one observation. We refer to these X-ray sources by the source number in our catalog (S1-S528, Barnard et al. 2014a). We have already highlighted BHCs that appear to exhibit hard state spectra at luminosities that are too high for NS LMXBs; our new sample exhibits spectra consistent with a disk blackbody with inner disk temperature kT DBB 1 keV.
We estimated the duty cycle of each transient in two ways. The first of these was to assess the percentage of observations where the target was detected at a >3σ Significance. Since the roll angle was unconstrained, each observation only contained a subset of all the X-ray sources; hence, we only considered observations where the transient could be observed when making this estimate of the duty cycle (DC1). The second duty cycle estimate was made by comparing the duration of the outburst with the total observing time. To do this we measured the time between the last observation before the outburst was detected at >3σ to the the first observation in which the transient detection goes below 3σ; this estimate of the duty cycle (DC2) is an upper limit.
For each object in our sample, we identify the observation that provides the highest quality BHC spectrum; this can be a Chandra ACIS observation (Ob-sID 303-14198) or a XMM-Newton observation (Ob-sID 0112570101-072960401); for XMM-Newton observations, we only analyzed the pn data. We fitted a double thermal model to the best spectrum for each object (WABS*(DISKBB+BB) in XSPEC); if the fit was unconstrained, then we classified the object as a plausible BHC unless the spectrum was too soft to be a NS LMXB (e.g. with no emission above 2 keV). We also fitted more traditional models to these spectra: absorbed power law, absorbed disk blackbody, and absorbed disk black body + power law to represent the hard, thermal and steep power law states respectively (Remillard & McClintock 2006).
We estimated the uncertainties in each fit by generating 1000 spectra from the best fit model using the XSPEC command fakeit; random variations were introduced to each simulated spectrum that were governed by the statistical properties of the original spectrum. We found the best fit for each simulated spectrum, and ranked the values of each parameter from lowest to highest; the 1σ uncertainties were obtained from the 16 th and 84 th percentiles.
For the best double thermal fit to each spectrum, we examined the temperature and 2-10 keV luminosity for each component. Each spectrum was assessed according to three criteria, following Barnard et al. (2013bBarnard et al. ( , 2014b. Table 1 List of properties for our Chandra BHCs. First we give identifications in previous papers and angular distance from M31*, followed by maximum and minimum 0.3-10 keV luminosity along with the best fit Γ for those observations ( a indicates mean Γ for all spectra of that source with >200 counts). We then give the number of outbursts for transients (P for persistent), then present our two estimates of the duty cycle. Finally we indicate variability with χ 2 /dof for constant luminosity. Numbers in parentheses indicate 1σ uncertainties on the last digit. Upper limits to luminosities are quoted at the 3σ level. The minimum disk blackbody temperature, kT DBB , for NS LMXBs depends on the luminosity: 1.0 keV / 1.2 keV for luminosities below / above 2×10 37 erg s −1 respectively; the minimum NS LMXB blackbody temperature, kT BB , is ∼1.5 keV; finally, the disk blackbody contribution to the 2-10 keV spectrum, f DBB , is >45% for NS LMXBs. For our BHC spectra, we expect kT DBB , kT BB and f DBB to be substantially lower than these minima; a cooler disk blackbody naturally leads to a smaller contribution to the 2-10 keV luminosity. For parameters that are below the NS minimum, we calculate the probability that the observed value is consistent with a NS LMXB: P DBB = erfc[(1.0−kT DBB )/σ/2 0.5 ] or erfc[(1.2−kT DBB )/σ/2 0.5 ] depending on L DBB (see above); The probability that the BHC is consistent with being a NS LMXB, P NS , is then P DBB × P BB × P f . If a parameter exceeds the NS LMXB threshold, then the probability of that parameter being consistent with a NS LMXB is 1. We assign a Rank to the BHC based on the probability of being consistent with a NS LMXB: Rank = −log(P NS ). A Rank >2.6 indicates >3σ deviation from a NS spectrum, while a Rank >6.2 indicates a >5σ deviation. This approach to identifying strong BHCs is an improvement upon the one used in Barnard et al. (2013b); therefore we applied this analysis to our BHCs previously identified by their hard state spectra. Table 2 For each BHC we give the observation number, degree of pile-up, then the difference in χ 2 between different emission models, i.e (H)ard, (T)hermal, and (S)teep power law: ∆1 = χ 2 T − χ 2 H and ∆2 = χ 2 H − χ 2 S ; U indicates that the steep power law model was unconfined. We then give the possible spectral states (probability >0.05), with boldface indicating the preferred state; "H (dp)" indicates a hard state where both components are detected; we note that the spectrum for S339 was piled up, and we quote the spectral results found by Nooraee et al. (2012 In addition to the 35 BHCs discussed in Barnard et al. (2013b), we obtained usable spectra from 13 X-ray transients from our 13 year Chandra monitoring campaign; these include 2 ultraluminous X-ray sources (ULXs) that exhibited 0.3-10 keV luminosities >2×10 39 erg s −1 (Kaur et al. 2012;Nooraee et al. 2012;Middleton et al. 2013;Barnard et al. 2013a). The other ∼40 transients in our Chandra survey had insufficient counts in their spectra. With the addition of XB045 and XB375, which lie outside the Chandra survey, our sample contains 50 BHCs. In Table 1 we present a summary of our Chandra results for 48 BHCs (XB045 and XB375 were not included in our Chandra survey); this table is described in the following three paragraphs.
Basic properties of the BHCs
For each source (named following Barnard et al. 2014a), we provide its identity in previous papers; BH1-BH35 are BHCs that were analyzed in Barnard et al. (2013a), while T1-T9 are transients discussed in Barnard et al. (2012); T13 was discovered later (Barnard et al. 2014b). U1 and U2 are ultraluminous transients 2013b), and B128 is a GC BHC identified in Barnard et al. (2014a). We also provide the angular distance from M31*.
We then present the maximum and minimum 0.3-10 keV luminosity observed in our Chandra observations of that source, along with the photon index from the best fit power law model; thermally dominated spectra are indicated by Γ 3. If a source produced less than 200 photons during either of these observations, then we assumed the mean Γ for all observations of that source with >200 counts; these values are indicate by a . If we were unable to fit a hard state spectra for a given source, we assume Γ = 1.7 for the minimum luminosity.
Finally we present the timing properties of each source. First we give the number of outbursts, if any; persistently bright X-ray sources are indicated with "P". If the number of outbursts is unclear, we simply say the source has many. Next are the two estimates of the duty cycle, DC1 and DC2; these are only provided for the transients. Finally we provide the χ 2 /dof from best fitting a constant intensity to the lightcurve (taken from Table 1 of Barnard et al. 2014a), to indicate the level of variability.
Fitting canonical BH models
We summarize our modeling of the BHC spectra with canonical BH models (hard state, thermally dominated state, steep power law state, Remillard & McClintock 2006) in Table 2. All three states consist of thermal and Comptonized emission components; however, the hard state is dominated by the Comptonized component and may be approximated by a power law for lower quality spectra, while the thermally dominated state may be approximated by a disk blackbody; for BHs in the steep power law states, the photon index of the Comptonized component is >2.4 (Remillard & McClintock 2006). For each source we first give the observation that best supports our case for a BH accretor. We then compare the χ 2 values for three spectral models (WABS*DISKBB; WABS*POWERLAW; WABS*[DISKBB + POWER-LAW]): ∆1 shows the difference in χ 2 between the disk black body model and the power law model while ∆2 shows the difference between the power law model and disk blackbody + power law model.Next we show the possible states, with our preferred model indicated in boldface. Finally we give the best fit parameters: absorption, temperature and luminosity of the disk blackbody component (if applicable), photon index and luminosity of the power law component (if applicable), and χ 2 /dof for our preferred model.
In most cases, the preferred model is the one with the lowest χ 2 /dof. However, if there is no significant difference between the fits, then we consider whether the BHC is persistent or transient: we favor a hard state for a persistent source, and a thermally dominated state for a transient. Furthermore, a hard state is preferred to a steep power law state if the disk temperature is higher and Γ is lower than expected for the steep power law state. S179, S300, S345, S415, and S487 have sufficiently good spectra to constrain the thermal components in the hard state spectra.
We see examples of BHCs consistent with all three canonical states. S151, S287, S386 and S396 are consistent with the steep power law state, but have Γ > 3, and are therefore particularly soft; also, Γ = 2.6±0.4 for B045, meaning that it could be very soft too. B375 appears to be rather hot and rather hard for the SPL, but are consistent within uncertainties; the simple power law model does not yield an acceptable fit. Table 3 summarizes our results. We first give the source number of each BHC in our survey paper (S1-S528, Barnard et al. 2014a). Then we present the temperature and 2-10 keV luminosity for the disk blackbody and blackbody components respectively, along with the fractional contribution of the disk blackbody to the 2-10 keV emission; luminosities are normalized to 10 37 erg s −1 and assume a distance of 780 kpc (Stanek & Garnavich 1998). These data are followed by the BHC Rank (i.e. −log(P NS )), and the class of the BHC: strong (S) or plausible (P). Finally we present any comments. Globular clusters are indicated with "G", and the GC name in parentheses, following the Revised Bologna Catalog v. 3.4 (Galleti et al. 2004(Galleti et al. , 2006(Galleti et al. , 2007(Galleti et al. , 2009); transients are indicated by "T", and include ULXs labeled "U"; "P>S" shows that the BHC was previously classified as a plausible BHC in Barnard et al. (2013b); "Soft" indicates a spectrum with little flux above ∼2 keV. Sources where the disk blackbody + blackbody model was unconstrained are indicated by dots.
Fitting double thermal models
We find that 42 BHCs exhibited a Rank >2.6, and differed from the NS LMXB spectra by >3σ; these are classed as strong BHCs, and include 10 systems that have been promoted from plausible BHC classification (Barnard et al. 2013b). Previously, we considered each criterion separately, but now we combine the probabilities for each criterion into one. Furthermore, 36 BHCs exhibited Rank >6.2, with a >5σ difference from NS LMXB spectra. Figure 1 shows kT DBB vs. L DBB for 46 BHCs; the double thermal model was unconstrained for 4 BHCs. Circles represent persistent X-ray sources, while triangles represent transients; red symbols indicate BHCs in globular clusters. None of our BHCs had best fits inside the NS LMXB region of kT DBB vs. L DBB parameter space, although some BHCs were consistent within 3σ. We see a natural systematic correlation between temperature and 2-10 keV luminosity: lower temperature emitters contribute less to the 2-10 keV flux. The transients tended towards lower temperatures than the persistent sources; this is consistent with the transients exhibiting thermally dominated states rather than hard states (Remillard & McClintock 2006). Of the 15 GC BHCs in our sample, 13 differ from NS LMXB spectra at confidence levels >3σ (2 transients, 11 persistent Xray sources); 9 differ from NS LMXBS spectra at >5σ confidence levels. These GC BHCs are particularly interesting because there are no confirmed GC BH XBs in our Galaxy, and there are no known persistent GC BHCs anywhere outside M31.
Comparison with a Bright NS XB in M31
RX J0042.6+4115 is usually the brightest X-ray source in M31 (0.3-10 keV luminosity ∼5-6×10 38 erg s −1 ), and was classified as a Z-source (NS LMXB) after exhibiting an apparently tri-modal color-intensity diagram (Barnard et al. 2003b). We decided to model the X-ray emission with double thermal models to see if it was consistent with the findings of Lin et al. (2007Lin et al. ( , 2009Lin et al. ( , 2010Lin et al. ( , 2012. The highest quality spectrum for RX J0042.6+4115 came from the 2002 January 6 XMM-Newton observa- Table 3 Best fit double thermal models to our BHCs, and comparisons with the Galactic NS XBs. We first provide the name of the BHC, following Barnard et al. (2013b); we additionally include B045 and B375, which are outside the region covered by our Chandra survey. We then give the observation with the best spectrum. Next we provide the temperatures and luminosities of the disk blackbody and blackbody components, along with the disk blackbody contribution to the total flux. Finally, we give the Rank (i.e. −log[P NS ]), class of BHC (strong or plausible), and comments. These comments indicate transients (T), ULXs (U), sources that have been promoted from plausible to strong BHCs (P>S), and BHCs residing in globular clusters (G); the name of the cluster is given in parentheses. "Soft" indicates a spectrum that has little flux above 2 keV. Numbers in parentheses indicate 1σ uncertainties in the last digit.
3.4. Estimating the BH population within 6 ′ of M31* In Barnard et al. (2014a), we found that the number of sources consistent with AGN in our Chandra survey of sources within 20 ′ of M31* was well below the number predicted from the 0.5-10 keV AGN flux distribution obtained by Georgakakis et al. (2008). However, when we restricted our sample to those within 6 ′ of M31* we saw a surplus. Hence, the observed deficit is dominated by instrumental effects. With this in mind, we decided to estimate the black hole contribution to the X-ray population within 6 ′ of M31*, from the duty cycles of transients within this region.
Our survey contains 216 X-ray sources within 6 ′ of M31*, of which 126 are probably XBs, 66 are consistent with AGN, and 22 are known stars, novae etc; the 0.3-10 keV detection limit is ∼ 10 35 erg s −1 , although it is not complete at this level. The 126 probable XBs include 33 X-ray transients. We found 34 of our BHCs in this region, 20 persistent, and 14 transient. To date, we have no information on the accretors in the other 19 transients; they could contain black holes or neutron stars.
We estimated the maximum duty cycle for the unclassified transients from the intervals when transient was not detected at the 3σ level (i.e. like DC2 for our transient BHCs in Table 1). The mean maximum duty cycle for transients >10 35 erg s −1 was 0.13; this would suggest a total of 254 transients within 6 ′ of M31*, 108 of which containing BHs. As a result, we expect >40% of XBs within 6 ′ of M31 to contain BHs. Assuming the median duty cycle for the transients within 6 ′ of M31* (0.07) yielded essentially the same results. The BH fraction would be higher if the actual duty cycle was smaller, or if some of the unclassified transients contain BHs.
The 2007 MW X-ray binary catalog (Liu, van Paradijs & van den Heuvel 2007) contains 103 X-ray transients, of which 83 are classified with NS or BHC accretors. BHCs represent ∼50% of the total MW transient population with known accretors, but ∼70% of transients with known distances and luminosities >10 37 erg s −1 . We found that 31 of the 33 transients within 6 ′ of M31 exceeded 10 37 erg s −1 at some point during our 13 year survey; if ∼70% of these transients contain BHCs, then >60% of the XBs within 6 ′ of M31* could contain BHCs.
DISCUSSION
In this work, we expand upon Barnard et al. (2013b) where we compared the spectra of 35 BHCs with the full range of neutron star spectra. Lin et al. (2009) have applied a double thermal emission model to a transient Z source that exhibited all types of NS LMXB behavior; this model gave good fits except for hard state spectra, and horizontal branch spectra (where a Comptonized component is requred, which dominates hard state spectra). Lin et al. (2007) first presented this model as a NS analog to the BH thermally dominated state; the main strength of the model when applied to the two original tranisents was that luminosity followed T 4 for both thermal components (Lin et al., 2007). However, the temporal and spectral evolution of high inclination LMXBs indicates a substantial extended comptonized component in LMXBs throughout the luminosity range (Church & Ba lucińska-Church 2004, and references within). Nevertheless, the work of Lin et al. (2007Lin et al. ( , 2009Lin et al. ( , 2012 has been extremely useful because it allows us examine the gamut of NS LMXB behavior in a single parameter space. BH LMXBs exhibit a thermally dominated state that has never been observed in NS LMXBs; this state is usually observed in X-ray transients (Remillard & McClintock 2006). We examined ∼50 X-ray transients identified in our Chandra survey (Barnard et al. 2014a), and found 13 suitable for spectral fitting. The remaining transients are possible BHCs, but may also contain NS accretors; further observations may clarify the identities of these systems. We used an improved method for comparing our BHC spectra with NS LMXB spectra for these 13 transients, our 35 original BHCs, and the GC BHCs B045 and B375 for a total of 50 BHCs. We found that 42 exhibited spectra that differed from the NS spectra at a >3σ level, and 36 at a >5σ level; these were all classed as strong BHCs, except for S330 which exhibited a luminosity consistent with a NS XB hard state. The spectrum of S117 was unable to constrain the double thermal model, but was too soft to be a NS XB, and S117 is also considered a strong BHC. The remaining sources were classed as plausible BHCs; 10 BHC that were previously identified as plausible in Barnard et al. (2013b) were promoted to strong BHCs.
We expected hard state and thermally dominated BHCs to be inconsistent with NS spectra. However, we also found some steep power law spectra that were inconsistent with NS spectra; these were particularly soft, with Γ 3. We certainly do not infer that all BH spectra should be separable from NS spectra.
Using this method, we may identify BHCs in many galaxies, including our own. The known Galactic BH LMXBs are all transient, or turned on recently (see Remillard & McClintock 2006, and references within). This is because they were identified with a method that requires observations of optical emission lines in the donor spectrum; however, the optical emission of persistently bright X-ray sources is dominated by reprocessed X-rays from the disk (van Paradijs & McClintock 1994). Our X-ray method has no such biases, and may reveal further BHCs in the known Galactic LMXB population.
We also examined a bright M31 XB (>10 38 erg s −1 ) that is expected to contain a NS accretor (S209 Barnard et al. 2014a). A double thermal model fit yielded parameters consistent with the NS systems studied by Lin et al. (2007Lin et al. ( , 2009Lin et al. ( , 2012. Hence, the observed differences between our BHCs and the Galactic NS XBs studied by Lin et al. (2007Lin et al. ( , 2009Lin et al. ( , 2012 is not due to differences between the RXTE, Chandra, and XMM-Newton observatories. We have identified 126 probable X-ray binaries within 6 ′ of M31* (Barnard et al. 2014a), of which 34 are BHCs; 33 of these systems are transient, including 14 BHCs. The mean maximum duty cycle of the transient systems was 0.13, suggesting that > 40% of XBs within 6 ′ of M31* contain BHs. However, our results suggest that BH XBs contribute 90% of XBs exceeding 10 38 erg s −1 in this region. This result provides further substantial difference in the evolution histories of M31 and the Milky Way, since the majority of MW X-ray sources exceeding 10 38 erg s −1 are NS XBs (Grimm, Gilfanov & Sunyaev 2002). We already know that M31 contains ∼30 times as many bright GC X-ray sources (>10 37 erg s −1 ) as the MW (Barnard et al. 2014a), and could contain ∼4-5 times as many XBs over all (Stiele et al. 2011;Barnard et al. 2014a). | 2014-06-23T20:58:49.000Z | 2014-06-23T00:00:00.000 | {
"year": 2014,
"sha1": "7cdee28cf94ce103c563021eee3e6532cc587b19",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1406.6091",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "7cdee28cf94ce103c563021eee3e6532cc587b19",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14383742 | pes2o/s2orc | v3-fos-license | 18FDG PET-CT imaging detects arterial inflammation and early atherosclerosis in HIV-infected adults with cardiovascular disease risk factors
Background Persistent vascular inflammation has been implicated as an important cause for a higher prevalence of cardiovascular disease (CVD) in HIV-infected adults. In several populations at high risk for CVD, vascular 18Fluorodeoxyglucose (18FDG) uptake quantified using 3D-positron emission-computed tomography (PET-CT) has been used as a molecular level biomarker for the presence of metabolically active proinflammatory macrophages in rupture-prone early atherosclerotic plaques. We hypothesized that 18FDG PET-CT imaging would detect arterial inflammation and early atherosclerosis in HIV-infected adults with modest CVD risk. Methods We studied 9 HIV-infected participants with fully suppressed HIV viremia on antiretroviral therapy (8 men, median age 52 yrs, median BMI 29 kg/m2, median CD4 count 655 cells/μL, 33% current smokers) and 5 HIV-negative participants (4 men, median age 44 yrs, median BMI 25 kg/m2, no current smokers). Mean Framingham Risk Scores were higher for HIV-infected persons (9% vs. 2%, p < 0.01). 18FDG (370 MBq) was administered intravenously. 3D-PET-CT images were obtained 3.5 hrs later. 18FDG uptake into both carotid arteries and the aorta was compared between the two groups. Results Right and left carotid 18FDG uptake was greater (P < 0.03) in the HIV group (1.77 ±0.26, 1.33 ±0.09 target to background ratio (TBR)) than the control group (1.05 ± 0.10, 1.03 ± 0.05 TBR). 18FDG uptake in the aorta was greater in HIV (1.50 ±0.16 TBR) vs control group (1.24 ± 0.05 TBR), but did not reach statistical significance (P = 0.18). Conclusions Carotid artery 18FDG PET-CT imaging detected differences in vascular inflammation and early atherosclerosis between HIV-infected adults with CVD risk factors and healthy HIV-seronegative controls. These findings confirm the utility of this molecular level imaging approach for detecting and quantifying glucose uptake into inflammatory macrophages present in metabolically active, rupture-prone atherosclerotic plaques in HIV infected adults; a population with increased CVD risk.
Background
Atherosclerosis is initiated by a series of proinflammatory events that occur in the arterial wall causing endothelial smooth muscle disruption, macrophage activation and infiltration, oxidized lipid accumulation, and plaque formation [1][2][3][4][5][6]. Several circulating and imaging biomarkers for these proatherogenic processes have been assessed clinically [7][8][9][10][11][12][13][14]. Unfortunately, few are specific for early molecular level events involved in atherogenesis. Thus, they are not predictive biomarkers of subclinical atherosclerosis that identify people at early risk for developing vascular plaques. Moreover, none of these biomarkers provides information about the risk for plaque rupture or thrombosis that result in infarct or stroke.
People living with human immunodeficiency virus infection (HIV) have a 2-fold greater risk for experiencing a stroke or myocardial infarction than the general population [15][16][17]. Evidence suggests that chronic low-grade inflammation associated with the host immune response to HIV infection and ongoing viral replication contributes to greater cardiovascular disease (CVD) risk and the higher incidence of CV events in HIV infected adults [18][19][20][21][22][23]. However, the evidence is based primarily on circulating biomarkers for inflammation (hsCRP, D-dimer, cytokines) which are neither sensitive, nor specific molecular-level predictive biomarkers for early proatherogenesis or vascular plaque in/stability [21,24].
Several groups have pioneered the use of 18 Fluorodeoxyglucose ( 18 FDG) uptake by proinflammatory macrophages present in the arterial wall as a noninvasive, sensitive, specific, and reproducible molecular level biomarker for early atheroma formation in metabolically active, rupture-prone atherosclerotic plaques . Proinflammatory macrophages utilize glucose at a high rate [30,40,49], and 3-dimensional positron emission-computed tomography imaging (PET-CT) detects 18 FDG uptake by macrophages in the vascular wall of animals and humans. 18 FDG PET-CT imaging has been used to detect and quantify vascular inflammation in early atherogenesis and in vulnerable plaques in human aorta and carotid arteries, but not in people living with HIV and asymptomatic CVD.
The purpose of this proof-of-concept pilot study was to determine if 18 FDG PET-CT imaging detects greater aortic and carotid inflammation and early atherosclerosis in HIV-infected adults with mild CVD risk and known carotid plaque than in healthy controls without significant CVD risk. If confirmed, 18 FDG PET-CT can be used to monitor atherogenesis, determine the independent contributions of HIV infection and anti-retroviral therapy to vascular inflammation, and evaluate the antiinflammatory, anti-atherogenic effectiveness of therapeutic interventions in people living with HIV.
Participants
Healthy men and women were recruited from Washington University Institute of Clinical and Translational Sciences Research Participant Registry. Inclusion criteria were: 35-60 yrs old, confirmed HIV seronegative status, fasting plasma glucose <100 mg/dL and <140 mg/dL two hours after ingesting a 75-gr glucose beverage, fasting serum triglycerides <150 mg/dL, HDL-cholesterol >40 mg/dL (men), >50 mg/dL(women), resting blood pressures <130/85 mmHg), carotid intima media thickness <0.8 mm and no evidence for carotid plaque, waist circumference (at the umbilicus) <102 cm (men, <88 cm (women). Exclusion criteria were: known cardiac or cerebrovascular disease, kidney or liver disease (active hepatitis B or C), certain medications (e.g., glucose-or lipid-lowering agents, anti-hypertensives, low dose aspirin, or other anti-inflammatory agents), illegal drug use (cocaine, methamphetamines, opiates detected on urine drug screen), pregnancy, cognitive impairment that limited their ability to provide voluntary informed consent, incarcerated or otherwise unable to provide informed consent. We excluded younger adults because the prevalence of atherosclerosis is rare in people <35 yrs old. Before screening, healthy controls were informed that participation required a test for HIVinfection and the implications of a positive HIV test.
HIV infected men and women were recruited from the Washington University AIDS Clinical Trials Unit and Infectious Diseases Clinics. Inclusion criteria were: 35-60 yrs old, documented HIV seropositive status, stable antiretroviral therapy for at least the past 4 months, CD4+ T-cell count >200 cells/μL, plasma HIV RNA <50 copies/mL, fasting plasma glucose 100-126 mg/dL, or 140-200 mg/dL two hours after ingesting 75-gr glucose beverage, or fasting triglycerides >150 mg/dL, HDL-cholesterol <40 mg/dL (men), <50 mg/dL (women), or resting blood pressure ≥130/85 mmHg, carotid intima media thickness >0.8 mm or evidence of carotid plaque, or waist circumference ≥102 cm(men), ≥88 cm(women), or BMI 25-35 kg/m 2 . Exclusion criteria were identical to those used for the healthy controls. By enrolling two groups with distinctly different cardiometabolic phenotypes, we optimized our chances of detecting a difference in arterial inflammation.
We enrolled 9 HIV infected adults with cardiovascular disease (CVD) risk factors and documented (ultrasound) carotid intima media thickening or non-obstructive plaque, and 5 HIV seronegative adults with no CVD risk factors (Table 1). Right and left carotid 18 FDG PET-CT studies were conducted on all 14 participants. Aorta 18 FDG PET-CT studies were added after the first five participants were enrolled, so aorta 18 FDG studies were conducted on 4 controls and 5 HIV participants. All participants provided verbal and written informed consent.
Carotid ultrasound imaging
Carotid artery intima-media thickness (CIMT) was measured by a single vascular sonographer using B-mode images of both carotid arteries expressed as the average thickness of the far walls of the right and left common carotid arteries; each site represents the average of 3 separate measurements [7]. The intra-class correlation coefficient for repeated measures of the CIMT is 0.91 at our laboratory [52]. The presence of carotid plaque was defined as described [7]; focal wall thickening that was ≥50% of the surrounding vessel wall or as a focal region with CIMT >1.5 mm that protruded into the lumen and was distinct from the adjacent boundary. Carotid wall thickening that did not meet this definition (<50% or <1.5 mm) was classified as a non-obstructive plaque.
FDG PET-CT imaging
After an overnight fast, 18 FDG (~370 MBq; 9.9 ± 0.5 mCi) was administered intravenously and 3.5 hr later, 3D-PET-CT images of the thoracic ascending, arch, and descending aorta and carotid arteries were obtained. Prior to 18 PET images were obtained at 2 bed positions (15 min per position) and both attenuation corrected and nonattenuation images were reconstructed in a 168 x 168 pixel matrix. Attenuation corrected PET images were reconstructed with ordered subset method of the expectation maximization (OSEM) using 4 iterations, 8 subsets and Gaussian filter 5 mm Full Width Half Maximum (FWHM). Only CT attenuated PET volumes were used for analysis. Contrast CT images were obtained immediately after the PET images were obtained. Contrast CT used 150 mAs (eff.), 120 kV, 0.5 sec rotation, 1.0 pitch, 28.8 mm collimation, 3 mm slice thickness, 500 mm transaxial FOV. Immediately prior to acquiring contrast CT images, 70 mL of Isovue-370 (Bracco Diagnostics, Princeton, NJ) were infused intravenously. The contrast agent clearly delineated the narrow carotid arteries and jugular veins, and optimized CT and PET image coregistration and analysis.
PET-CT image analysis
Image analysis was conducted using custom extensions of MATLAB (The Mathworks Inc., Natick, MA). All images were analyzed by the same analyst (blinded to group assignment) using a standardized workflow. In general, MATLAB functions were used to co-register PET, CT attenuation and contrast CT images/datasets, quantify maximum 18 FDG uptake (SUV) within the vessels from 1 cm above to 1 cm below the right and left carotid bifurcation, and through the ascending, arch, and descending aorta. Background 18 FDG uptake in corresponding regions of the jugular veins and superior vena cava were used to calculate the maximum target-to-background ratio (TBR max ) in both carotid arteries and the aorta.
Specifically, PET volumes were converted to body weight standardized uptake values (SUVbw), where, 1 mL pure water = 1gr of body weight. In both regions of interest (neck, upper chest), axial PET and CT volumes were cropped to focus and reduce the dataset sizes. Within each axial image, rigid anatomical landmarks were identified (neck = cervical vertebrae; upper chest = sternum) and used to align the contrast CT volume with corresponding PET and CT attenuation volumes. Affine volume transformation by normalized cross correlation that allowed for rotation and shift in all three planes was used to register the contrast CT scan with the CT attenuation scan using the nearest rigid (bone) landmark corresponding to the vessel of interest. The contrast CT and CT attenuation co-registration matrix obtained was then used to align the corresponding PET volume. This provided optimal PET volume co-registration with contrast CT-enhanced vascular anatomy. MATLAB functions then quantified 18 FDG SUV metrics (mean, median, standard deviation, min, max) along the arterial and venous regions, and these were used to derive carotid and aorta TBR max [28].
Serum lipids and lipoproteins
Blood was collected from an antecubital vein after a 10-12 hr overnight fast. As described [53], serum triglycerides, total-and HDL-cholesterol were measured and LDL-cholesterol estimated in the Core Laboratory for Clinical Studies at Washington University Medical Center. The accuracy of these methods is verified and standardized by participation in the Centers for Disease Control (CDC) Lipid Standardization Program, the CDC Cholesterol Reference Method Laboratory Network, and the College of American Pathologists external proficiency program [54]. Framingham 10-yr coronary heart disease (CHD) risk was determined using an American Heart Association calculator (http://hp2010.nhlbihin. net/atpiii/calculator.asp?usertype=prof ).
Blood hormones and metabolites
As described [55], plasma glucose concentration was quantified using an automated YSI glucose analyzer (Yellow Springs Instruments, Yellow Springs, OH), and plasma insulin and C-peptide concentrations were determined using chemiluminescent immunometric methods (Immulite; Siemens, Los Angeles, CA). The homeostasis model assessment of insulin resistance (HOMA) was calculated as described [56].
Systemic inflammatory biomarkers
D-dimer and highly sensitive C-Reactive Protein (hsCRP) concentrations were quantified using particle enhanced immuno-turbidimetric assay kits according to manufacturer's instructions (Roche Diagnostics, Indianapolis, IN). Human plasma CRP and D-dimer complex with latex particles coated with a monoclonal antibody directed against CRP or D-dimer epitopes, and the precipitate was assayed for turbidity on a Roche/Hitachi Cobas c system. Adequate amounts of archived plasma from 4 healthy controls and 5 HIV participants were available for these assays.
Data analysis
Mean ± standard error (SE), median, minimum and maximum values are reported for the participant characteristics. Physical and demographic characteristics were compared using a non-parametric Fishers' exact test or Kruskal-Wallis one-way analysis of variance by ranks test. Carotid and aorta maximum TBR max were compared between groups using an unpaired t-test. P < 0.05 was accepted as significant.
Results
The two groups had similar demographic characteristics (age, sex, ethnicity; Table 1). Mean and median age for HIV + participants (Mean ± SE; 52 ± 3 yrs, median = 52 yrs) were numerically, but not statistically higher (p = 0.07) than controls (44 ± 3 yrs, median =46 yrs). Immune (CD4+ T-cell count = 771 ± 132 cells/μL) and virologic (all <50 copies HIV RNA/mL) status were stable and controlled in the HIV + participants. By design, cardiovascular disease risk profiles were worse among HIV + participants than controls ( Table 1). The Framingham 10-yr coronary heart disease (CHD) risk score was greater (9 ± 2%) in HIV + participants than in controls (2 ± 1%; p < 0.01), but by definition these represent low (<10%) risk for CHD events (MI, stroke). Four of the 9 HIV participants had Framingham 10-yr CHD risk scores between 10-20%. The higher CHD risk score among HIV + participants was attributed to current tobacco use and higher systolic blood pressure (history of hypertension).
The mean intima media thickness of the common carotid arteries was greater (p < 0.01) in HIV + participants (0.78 ± 0.02 mm) than controls (0.54 ± 0.03 mm), and eight of nine HIV + participants had ultrasound detectable non-obstructive plaques in at least one carotid artery, while no plaques were detected in controls. Glycemic control parameters (fasting glucose, insulin, Cpeptide, HOMA) were not different between control and HIV + participants. Fasting triglycerides were greater (p = 0.04) in HIV + participants (149 ±35 mg/dL) than controls (63 ± 6 mg/dL), but total-, HDL-, and calculated LDL-cholesterol levels were not different between groups. D-dimer and hsCRP levels were numerically, but not statistically (P > 0. 16), higher in HIV + participants than controls.
Discussion
Carotid artery 18 FDG PET-CT imaging detected differences in vascular inflammation and early atherosclerosis between HIV-infected adults with CVD risk factors and healthy HIV-seronegative controls. These findings confirm the utility of this molecular level imaging approach for detecting and quantifying glucose uptake into inflammatory macrophages present in metabolically active, rupture-prone atherosclerotic lesions or early nonobstructive plaques in HIV infected adults; a population with increased CVD risk.
Vascular inflammation has been implicated in the underlying pathophysiology for the higher incidence of myocardial infarction and stroke in HIV infected adults taking anti-retroviral medications. However, this suggestion is based on indirect, non-specific measures of circulating pro-inflammatory or pro-oxidant stress biomarkers, or static vascular imaging methods (carotid intima media thickness, coronary calcium deposition). We provide the first direct, molecular-level evidence for the presence of metabolically active, inflammatory vascular lesions/plaques in people living with HIV infection. It is important to note that the HIV participants studied had modest clinical evidence of CVD risk; characterized by carotid intima media thickening or small non-obstructive plaques and low Framingham 10-yr CHD risk profiles (9 ± 2% risk). However, carotid 18 FDG PET-CT imaging and greater TBR max provided clearer evidence of atherosclerotic vascular disease in HIV + participants.
We specifically selected participants with ultrasounddetected carotid intima media thickening and nonobstructive plaques in order to assess the ability of vascular 18 FDG PET-CT imaging to detect an expected difference in vascular inflammation between healthy controls and HIV-infected men and women with CHD risk. In agreement with others, we found that not all calcified lesions are metabolically active, inflammatory, rupture-prone plaques [25,26]. The current study was underpowered to definitively assess the relationship between 18 FDG uptake and carotid thickness. The relationship between plaque morphology and 18 FDG uptake should be further investigated in future studies.
This imaging method may be useful for addressing several critically important clinical questions in HIV infected people, e.g., Is there vascular inflammation in untreated HIV-infection? Does highly active antiretroviral therapy reduce or worsen vascular inflammation? Is vascular inflammation worse in older HIV-infected adults than in age-and CVD risk factor-matched HIV-seronegative adults? Do effective treatments for insulin resistance, dyslipidemia, or antiinflammatory agents reduce vascular inflammation in HIV? In the long-term, does a reduction in vascular inflammation translate to fewer clinical events (stroke, MI) in HIV? This non-invasive method is ideal for examining interactions among vascular inflammation and CVD risk factors (insulin resistance, central obesity, dyslipidemia) on early atherosclerotic progression in HIV and other autoimmune disorders where inflammatory stimuli are implicated (e.g., systemic lupus erythematosus, rheumatoid arthritis, Crohn's disease).
Circulating inflammatory biomarker levels (hsCRP, Ddimer) were variable, but on average, were 3-4 times higher in HIV than healthy controls. This supports the generalization that even well-controlled HIV (using contemporary anti-viral agents) is associated with a chronic, low-grade, pro-inflammatory state, but the stimulus, source, location and severity of the inflammation cannot be discerned from these plasma biomarkers. HIV related inflammation can be caused by multiple factors, including chronic replicating virus, anti-HIV medications, gut microbial translocation, obesity, diabetes, tobacco/alcohol/illegal drug abuse, hepatitis co-infection, or other co-morbidities. 18 FDG PET-CT specifically revealed vascular inflammation in the carotid arteries as a quantifiable source for molecular-level, pro-inflammatory events that are biochemically related to early atherogenesis, and if left unrestrained, can precipitate a CV event. This study has limitations. We had a small sample size and we may have been underpowered to detect certain between group differences. But, these are expensive imaging studies, and therefore we intended these data to show proof-of-principle for larger clinical studies. On average, the HIV infected adults tended to be older than the healthy controls. Advanced age is associated with more vascular inflammation and 18 FDG uptake. But, the focus of this study was not on "what causes vascular inflammation in HIV?" Instead, the focus was on a dichotomous outcome; i.e., can we detect vascular inflammation using 18 FDG uptake in HIV with mild CVD risk? The intent was not to address the question "is vascular inflammation worse in age-matched HIV vs healthy controls?" This is an excellent follow-up study, now that we have developed the technique and we understand the usefulness of 18 FDG PET-CT imaging for detecting early, low-level vascular inflammation in people living with HIV. We did not attempt to quantify 18 FDG uptake in the coronary vessels; these are very narrow, in motion, and surrounded by the glucoseconsuming heart muscle. Attempts to image inflammation in the coronary vessels using 18 FDG have been made [57]. We cannot determine the specific risk factor that caused greater 18 FDG uptake in HIV participants (e. g., tobacco use, hypertension, glycemic control, higher triglycerides) because the two groups were selected based on their distinctly different cardiometabolic phenotypes. Likewise, the study was not designed to determine whether HIV-infection per se, or which specific anti-HIV medication caused greater 18 FDG uptake in the HIV + participants. However, these pathogenesis questions can be addressed given the proof-of-principle findings reported here. Indeed, recent studies have begun to investigate these questions using 18 FDG PET-CT [58].
Conclusion
Carotid 18 FDG PET-CT imaging detected significant vascular inflammation in HIV-infected men and women with low Framingham CHD risk scores, suggesting that this molecular imaging method is sensitive to early proatherosclerotic processes in a clinical population suspected of having chronic, low-grade inflammationinduced cardiovascular disease.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions KEY conceived, designed, and coordinated the study, acquired and interpreted data, performed the statistical analyses, and drafted the manuscript. EL participated in the design and coordination of the study, and acquired data. ETO and DNR participated in the design and coordination of the study, acquired and interpreted data, monitored participants, and helped draft the manuscript. MH and SB participated in the design and coordination of the study, acquired PET-CT images, assisted with PET-CT image analysis, and helped draft the manuscript. VGD-R participated in the design and conduct of the study, acquired, analyzed, interpreted carotid ultrasound images, and helped draft the manuscript. All authors read and approved the final manuscript. 18 FDG uptake (Mean ± SE) was greater (P < 0.03) in the HIV group (n = 9; 1.77 ± 0.26, 1.33 ± 0.09 target to background ratio-max (TBR max )) than in the control group (n = 5; 1.05 ± 0.10, 1.03 ±0.05 TBR max ). Aorta 18 FDG uptake tended (P = 0.18) to be greater in HIV (n = 5; 1.50 ±0.16 TBR max ) vs control group (n = 4; 1.24 ±0.05 TBR max ). | 2014-10-01T00:00:00.000Z | 2012-06-22T00:00:00.000 | {
"year": 2012,
"sha1": "6a972002408efbf8b92e8f2b070d98deb755a8c5",
"oa_license": "CCBY",
"oa_url": "https://journal-inflammation.biomedcentral.com/track/pdf/10.1186/1476-9255-9-26",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ac61f19f99840cad96301f4e085a53618c535a7d",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
227066505 | pes2o/s2orc | v3-fos-license | Applying a Model of Teamwork Processes to Emergency Medical Services
Introduction Effective teamwork has been shown to optimize patient safety. However, research centered on the critical inputs, processes, and outcomes of team effectiveness in emergency medical services (EMS) has only recently begun to emerge. We conducted a theory-driven qualitative study of teamwork processes—the interdependent actions that convert inputs to outputs—by frontline EMS personnel in order to provide a model for use in EMS education and research. Methods We purposively sampled participants from an EMS agency in Houston, TX. Full-time employees with a valid emergency medical technician license were eligible. Using semi-structured format, we queried respondents on task/team functions and enablers/obstacles of teamwork in EMS. Phone interviews were recorded and transcribed. Using a thematic analytic approach, we combined codes into candidate themes through an iterative process. Analytic memos during coding and analysis identified potential themes, which were reviewed/refined and then compared against a model of teamwork processes in emergency medicine. Results We reached saturation once 32 respondents completed interviews. Among participants, 30 (94%) were male; the median experience was 15 years. The data demonstrated general support for the framework. Teamwork processes were clustered into four domains: planning; action; reflection; and interpersonal processes. Additionally, we identified six emergent concepts during open coding: leadership; crew familiarity; team cohesion; interpersonal trust; shared mental models; and procedural knowledge. Conclusion In this thematic analysis, we outlined a new framework of EMS teamwork processes to describe the procedures that EMS operators employ to convert individual inputs into team performance outputs. The revised framework may be useful in both EMS education and research to empirically evaluate the key planning, action, reflection, and interpersonal processes that are critical to teamwork effectiveness in EMS.
INTRODUCTION
Despite improvements in quality and effectiveness in emergency medical services (EMS), 1-2 improving patient safety remains an important, ongoing concern. 3 As an integral component of the healthcare system, significant work has been done in EMS to improve patient safety by adopting evidence-
Fernandez et al.
Applying a Model of Teamwork Processes to EM Services
Population Health Research Capsule
What do we already know about this issue? Teamwork processes, critical to organizational success, may be grouped into performance episodes: planning, action, reflection, and interpersonal processes.
What was the research question?
Can the model of teamwork processes in emergency care be extended to the EMS context?
What was the major finding of the study?
This study provides early empirical support to applying a model of teamwork processes in emergency care to EMS.
How does this improve population health?
The revised model may be useful to guide future "deliberate practice" training or focused evaluation of key teamwork processes to improve teamwork performance in EMS.
Conceptual Framework
We define teamwork as the interaction of two or more individuals to perform a given task. 19 Teamwork is the interrelated set of team member's thoughts, beliefs, and feelings needed for the team to function as a unit. 12 Team members see themselves-and are seen by others-as belonging to a specific social entity within an organization. 20 Teamwork processes are the cognitive, verbal, and behavioral activities directed toward organizing tasks (inputs) to achieve collective goals (outputs), and form the basis for team competencies (eg, knowledge, skills, and attitudes) that are crucial for effective healthcare team performance. [18][19] One of the foundational models of teamwork is the input-process-output (IPO) model. [18][19]21 In this model, inputs are the individual characteristics of employees, the available organizational resources, and the demands of the task to be done. Processes are the interdependent actions and behaviors that convert inputs to outputs. Outputs include objective outcomes such as overall team performance and mission completion, as well as less tangible outcomes such as patient and employee satisfaction. [18][19][20][21][22] Building on the IPO model, Marks, Mathieu, and Zaccaro proposed a temporally based model of teamwork processes. [18][19]23 In this framework, teamwork processes are thought to occur in interacting performance episodes: transition processes; action processes; and interpersonal processes. Further refinements to the model were proposed by Fernandez et al, 18 who separated transition processes into planning processes (eg, setting goals and prioritizing tasks to be completed) and reflection processes (eg, feedback on areas of improvement), as these domains were thought to occur in distinct episodes of time ( Figure 1). [18][19]23 In the revised model, planning, action, and reflection processes inform one another over time, while interpersonal processes contemporaneously affect the success of the other processes. 19,23 A list of teamwork processes and their definitions appear in Table 1.
METHODOLOGY Study Design
This was a qualitative study of EMS personnel (ie, key informants) regarding teamwork in EMS. We approached individual EMS providers for enrollment via purposive sampling of personnel to complete a semi-structured, audiotape-recorded phone interview.
Study Population
The study population was a convenience sample of fire department-based EMS agency in Houston, TX, which responds to over 225,000 911 calls annually. All firefighters in the agency have been certified at the emergency medical technician (EMT) level of training, while approximately 10% are paramedic-certified. The enrollment criteria were as follows: 1. A valid state EMT license, and 2. Full-time employment in the agency.
Study Procedures
We conducted confidential, one-on-one telephone interviews among participants to identify barriers and enablers of effective teamwork in their organization. Interviews were scheduled in advance and were conducted by calling into a conference call service (FreeConferenceCall.com, Long Beach, CA) that allowed for interviews to be recorded on a secured, password-protected site. Prior to commencing the study, we piloted interview questions with members of a separate, hospital-based EMS agency.
Recruitment of Study Participants
Study participants were recruited through the following means: 1) recruitment email from the agency's medical director; 2) visits to fire stations to promote the study; and 3) announcing the study at a training conference. We explained the purpose of the study, as well as identified the enrollment criteria. Those interested were contacted to set up a phone interview. We recruited participants until we achieved the point of theoretical saturation. "Theoretical saturation" occurs when additional data collection does not produce additional knowledge or understanding with respect to the study questions. [24][25][26] In other words, this is the point at which an interviewer is able to predict the answers that participants would provide given a certain question (ie, when no new perspectives on a topic are gained).
To estimate the sample size necessary for saturation, we anticipated a baseline of 15-20 interviews. [24][25] Given the degree of segmentation within the organization by professional certification (ie, paramedic vs EMT) as well as by rank (officers vs firefighters), we anticipated that we would need to sample approximately 30-40 key informants to reach theoretical saturation. Also, due to the time lag between participant enrollment and completion of phone interviews, we estimated a 50% dropout rate among enrollees. To account for this, we planned to recruit between 60-80 EMS personnel to satisfy our ultimate participation goal of 30-40 participants who would complete the telephone interview.
Phone Interviews
Phone interviews followed a semi-structured format. Key informants were asked "grand tour" questions, that is, broad open-ended queries about the general characteristics of a given setting or role, regarding typical EMS runs during a typical shift (eg, "Can you walk me through a typical ambulance run during a typical shift?"). These "ice-breaker" questions are thought to encourage participants to feel more comfortable sharing during the interview. [26][27] These were followed up with questions about specific teamwork processes (ie, planning processes -"What are you thinking/saying to your partner on the way to the scene?"; action processes -"During a typical 911 call, how are tasks divided up between partners?"; "When you're on the way to the hospital with a patient, what sort of things are you thinking/doing?"; "Can you describe a typical interaction between the EMS crew and the hospital staff?"); reflection processes -"What sort of things happen after you've handed off care at the hospital and you're on your way back to the station?"; and interpersonal processes -(eg, "How often are there disagreements about what should be done?"), routine task activities (eg, "What sort of tasks are typically required during a typical call?"), as well as task activities that required teamwork (eg, "What tasks are better done by groups of two or more, rather than by just one person?").
Additionally, officers in the fire department were asked about supervisory/coordination activities (eg, "What makes your job managing a critical event such as a multi-casualty incident go more smoothly?"), or the role of senior leadership/ management in promoting teamwork (eg, "What can senior leadership/management do to promote teamwork?"; and "How does scheduling crews for 24 hours at a time affect teamwork?"). Finally, participants were asked about enablers and barriers to teamwork in their typical work day. The complete interview protocol is available in Appendix A. The
Fernandez et al.
Applying a Model of Teamwork Processes to EM Services lead author conducted all interviews. No personal identifiers were included during the interviews. All interviews were audio-recorded, transcribed verbatim, and reviewed for accuracy. The institutional review board approved this study.
Coding
We used a commercially available software program designed for qualitative data management to code data for later analysis (NVivo 11 Student Version; QSR International, Victoria, Australia). We created a codebook where the transcribed data were systematically sorted into separate, individual "chunks" of data, or codes. [26][27] In this initial round of coding, the first author categorized coherent thoughts identified within the textual data using deductive, "theorybased" codes. A key part of this process was the use of "memoing" in which observations were made during the data analysis, including annotation of interesting, unique, and recurrent patterns in the text, and preliminary coding decisions were recorded. Additionally, the lead author identified inductive codes by reviewing data that was not captured within the theory-based coding; this resulted in six emergent concepts.
Data Analysis
We used a thematic analytic approach [27][28] to identify themes within the coded data. The first author conducted all data analyses by reviewing transcripts 27 in an iterative process to engage closely with the data. Two authors combined codes into candidate themes that depicted the data accurately. Unlike codes, themes consist of ideas and descriptions that identify what the data is about and/or what it actually means. 27 In other words, themes are distinct units of meaning that are observed in the textual data. Several candidate themes emerged from this process. Finally, all authors reviewed the candidate themes to determine how they supported the data, and how they aligned with the Marks teamwork-processes framework, as modified by Fernandez et al. [18][19]29 All authors iteratively selected themes that were most relevant and made the most meaningful contribution to understanding what was going on within the data. The result of this deliberative process was the revised model of teamwork processes applied to EMS.
RESULTS
We reached a point of saturation once 32 respondents completed phone interviews. Participants were selected from across the organization, from firefighter-EMTs with one year of experience in EMS to senior fire captains with 40 years of experience; the median work experience was 15 years. The sample consisted of substantially more males than females (30 vs 2), which is consistent with the percentage in the organization as a whole. The sample consisted of substantially more paramedic-certified firefighters (28 vs 4) than those certified as EMT. The data provided general support to the
Concept Definition
Planning processes
Mission analysis
Interpretation and evaluation of the crew's overall mission, including the key tasks to be performed, the operating environment that will be encountered, as well as the human and material resources necessary to accomplish the pending mission Goal specification Identification and prioritization of goals that are aligned with, and necessary to accomplish, the overall mission
Reflection processes
Debriefing A critical evaluation of the events that transpired during the team's performance
Conflict management
Processes that assist with interpersonal disagreements among team members
Motivation and confidence building Processes that increase confidence and motivation among team members
Affect management Regulating team members' emotions to accomplish team goals Table 2. (See supplementary content online.) The revised model illustrating the relationships between the emergent concepts and teamwork processes are illustrated in Figure 3.
DISCUSSION
In this theory-driven study, we sought to apply a model of teamwork processes 18 to EMS. Our analysis provided support to distinct teamwork processes, which were grouped into four domains: planning; action; reflection; and interpersonal processes. 18 The data also uncovered several emergent concepts that respondents felt were central to effective teamwork in EMS: leadership 30-31 ; crew familiarity 32 ; team cohesion [32][33] ; interpersonal trust 23,30-31 ; shared mental models ; and procedural knowledge [36][37] .
Leadership was revealed as influencing both action and interpersonal processes. [30][31] In other words, effective leadership is critical to ensuring that "things get done" 38,39 and to creating conditions that facilitate team effectiveness. 40 These behaviors can be broadly separated into task-focused and person-focused behaviors. 41 Task-focused behaviors are activities that foster understanding of task requirements and the procedures for task completion. 21,39,41 Person-focused behaviors are those that facilitate behavioral interactions, cognitive structures, and attitudes so that members can work effectively as a team. 21,40,41 In a recent meta-analysis, both task-focused (understanding/accomplishing tasks) and person-focused behaviors (promoting norms) were important correlates of team performance. 41 The current study shows how leadership affects EMS teamwork processes.
Additionally, shared mental models were linked to coordinated action. 34 A study of primary care teams revealed a similar relationship, which was helpful for managing unexpected situations. 23 Alonzo and Dunleavy 30 showed that teammates with a shared understanding of collective tasks to be done are more likely to interpret situational cues similarly, improving coordination. 42 Procedural knowledge, the tacit information gained from hands-on task-specific training (ie, "know-how"), was important to team monitoring and backup. [36][37] Marks et al found a similar association between procedural knowledge and the development of backup behaviors through crosstraining, which may improve team effectiveness. 42 Crew familiarity was found to influence the teamwork process of affect management in our study. 32 Crew familiarity is an aspect of team design (ie, the work schedule) that results in cohorts of individuals maintaining a stable work group over an extended period of time.
Patterson et al showed that crew familiarity can influence both interpersonal and action processes. 32 Patterson reports that EMTs work with their most frequent partner only 35% of the time. 32 Unfamiliar EMS teams might be "unclear about their partner's expectations and may be hesitant to speak up when necessary." 32 Further, unfamiliar teams are more likely to experience disruptions in team cohesion, delays in critical actions, and may threaten occupational safety among EMS crews. 31 Additionally, Gersick noted that such unfamiliar teammates may feel "anxiety, confusion, or apprehension" as a result ofsuch lack of professional familiarity with one another. 43,44 Furthermore, others noted that EMS teams with limited prior exposure to one another are more likely to experience lower quality performance. [45][46][47] We found that team cohesion was positively related to motivation and confidence building. As noted above, the shared self-efficacy that members had when working with "my crew" gave EMS personnel a sense of collective confidence in their team's ability to accomplish challenging tasks. Similarly, a meta-analysis showed that interpersonal attraction among teammates was associated with an increased motivation for teammates to perform well on tasks. 48 Additionally, we found that interpersonal trust influenced conflict management. A similar relationship was observed by Benzer et al, who found that psychological safety influences the interpersonal process of conflict management. 23 They noted that "psychological safety promotes effective interpersonal processes by strengthening a collective sense of trust," which is closely related to the concept of trust that emerged from our interviews. 23 Participants shared that they often compartmentalize their emotions rather than addressing them as part of
Fernandez et al.
Applying a Model of Teamwork Processes to EM Services open interpersonal processes. Although many EMS and fire service organizations employ psychologists, conduct occupational stress training, and sponsor in-house peer support groups, the culture within many agencies is one of "do not admit to needing help." 48 Similar barriers are seen in the military setting. 49 It is presumed that the negative stereotypes reduce service members' motivation to seek help. 50 As in the military, normalizing the culture on seeking mental health services is necessary. 51 This framework may be useful for EMS leaders (eg, medical directors, department chiefs, training officers) as well as researchers to identify the strengths and weaknesses in their organization's teamwork performance during team training and evaluation. An EMS agency could then use the results of training evaluations as feedback to modify or emphasize training on weaker teamwork processes, and conversely, allocate resources away from those processes that were judged the strongest.
LIMITATIONS
Our study had some limitations. First, we enrolled individuals at a single agency, which may limit the generalizability of our findings to other agencies. However, the respondents in this study were drawn from a range of ranks (ie, officers and firefighters) and experience levels. Second, the choice of a fire-based EMS agency may limit the generalizability of our findings to agencies whose emergency care services are not organized within a fire department structure. However, the majority of EMS agencies in the United States are fire department based. 52 Third, we enrolled more paramedics than EMTs. However, our aim was to sample a range of EMS providers, including those in senior leadership positions. This likely led to further oversampling of paramedic-certified personnel.
CONCLUSION
In this thematic analysis, we have outlined a model of EMS teamwork processes that describe the procedures that EMS operators employ to convert individual skills, knowledge and resources (ie, inputs) into collective team performance (ie, outputs). Although there are notable exceptions cited in this paper, the science of teamwork research in EMS is still relatively new and developing. Our findings extend prior teamwork research to the EMS context, and form the basis | 2020-11-20T14:07:43.471Z | 2020-10-19T00:00:00.000 | {
"year": 2020,
"sha1": "06870c33c1ea8df469c4f4dafdda4231c3de10ce",
"oa_license": "CCBY",
"oa_url": "https://escholarship.org/content/qt1jq76006/qt1jq76006.pdf?t=qmijnf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3dea889dcad5aa5db4b02d6e3569b12384f912c4",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209370556 | pes2o/s2orc | v3-fos-license | Electronic structure and superconductivity of the non-centrosymmetric Sn$_4As$_3$
In a superconductor that lacks inversion symmetry, the spatial part of the Cooper pair wave function has a reduced symmetry, allowing for the mixing of spin-singlet and spin-triplet Cooper pairing channels and thus providing a pathway to a non-trivial superconducting state. Materials with a non-centrosymmetric crystal structure and with strong spin-orbit coupling are a platform to realize these possibilities. Here, we report the synthesis and characterisation of high quality crystals of Sn$_4$As$_3$, with non-centrosymmetric unit cell ($R3m$). We have characterised the normal and superconducting state using a range of methods. Angle-resolved photoemission spectroscopy shows a multiband Fermi surface and the presence of two surface states, confirmed by Density-functional theory calculations. Specific heat measurements reveal a superconducting critical temperature of $T_c\sim 1.14$ K and an upper critical magnetic field of $H_c\gtrsim 7$ mT, which are both confirmed by ultra-low temperature scanning tunneling microscopy and spectroscopy. Scanning tunneling spectroscopy shows a fully formed superconducting gap, consistent with conventional $s$-wave superconductivity.
INTRODUCTION
Identification of a spin-triplet superconductor would provide us with a potential solid state platform for topological quantum computations, a variant that is particularly robust against decoherence -one of the main impediments to realization of larger scale quantum calculations. There are different routes to realizing spin-triplet superconductivity (SC): either through engineered heterostructures, or in materials where triplet pairing is allowed or even promoted. Here, we focus on the latter path. One class of materials where a triplet component becomes allowed are non-centrosymmetric SC, where mixing of spin-singlet and spin-triplet Cooper pairing channels is possible and Rashba spin-orbit coupling (SOC) can lead to a lifting of Kramers degeneracy for electronic states in the bulk of the material[1].
A possible triplet component is expected to manifest in a number of observables: the upper critical magnetic field will be much higher than for singlet SC and the SC gap will exhibit a more complex structure than the hard gap predicted by the Bardeen-Cooper-Schrieffer (BCS) theory for a singlet SC. One would also expect topologically protected bound states near defects and boundaries that could be detected in local measurements. Experimentally, evidence for this mixing has been found in the non-centrosymmetric heavy fermion CePt 3 Si, where the strong SOC gives rise to a SC gap with line nodes [2]. However, the degree of mixing is not only determined by the strength of the SOC but also by the dominant pairing interaction [3]. If spin-singlet pairing interactions are dominant, the SC in the noncentrosymmetric material will follow the predictions by the BCS theory, as has been found in the case of BiPd [4], and it is independent of the SOC strength. Nevertheless, in BiPd the breaking of inversion symmetry together with strong SOC leads to Dirac-cone surface states [5] with an intricate spin texture [6].
In Sn-based compounds, the development of unconventional SC has been suggested, particularly in the case of the topological crystalline insulator SnTe with Indium doping, where strong SOC has been shown to play an important role [7,8]. A non-centrosymmetric crystal structure in Sn-based materials could then, in principle, favour the appearance of a triplet component. Here we focus on the non-centrosymmetric material Sn 4 As 3 . Transport measurements reveal a metallic nature and SC critical temperature, T c , in the range 1. 16-1.19 K [9], although other measurements showed a partly compensated semimetal [10]. It was only recently that the crystal structure of Sn 4 As 3 was identified as belonging to the non-centrosymmetric space group R3m (no. 160), with a hexagonal unit cell [11]. Additionally, band structure calculations confirmed that it is metallic, although a depletion of the density of states (DOS) is found at ∼ 0.4 eV above the Fermi energy. However, a direct measure of its electronic structure has been missing. Moreover, there have been no recent reports on its SC, apart from other SnAs-based superconductors such as SnAs [12,13] and NaSn 2 As 2 [14][15][16] which exhibit centrosymmetric crystal structures, revealing SC consistent with spin-singlet pairing.
In this work, we report a detailed study of the properties of single crystal samples of Sn 4 As 3 , through thermodynamic and spectroscopic characterization of both normal and SC states using angle-resolved photoemission spectroscopy (ARPES) and ultra-low temperature scanning tunneling microscopy and spectroscopy (STM/STS). The experimental results from ARPES and STM are directly compared with bulk and slab Density Functional Theory (DFT) calculations.
Sample
Growth. Sn 4 As 3 single crystals were synthesized from high purity elemental Sn (99.99%) and As (99.9999%), weighted in stoichiometric molar ratio (4:3). Synthesis was performed inside a quartz ampoule with Ar atmosphere at a residual pressure of 0.2 bar. The ampoule was put into a furnace and heated up to 600 • C for 24 hours. The temperature was then increased to 650 • C, followed by slow cooling down to room temperature at a rate of 2 • C/h. The samples were characterized by energy-dispersive x-ray spectrometry (EDS) and x-ray diffraction (XRD), revealing a chemical composition of Sn 3.8 As 3 and lattice constants of a = 4.0891Å and c = 36.0524Å, in agreement with Refs. [10,11].
Scanning tunneling microscopy and spectroscopy. STM measurements were performed with a home-built ultra-low temperature STM, mounted in a dilution refrigerator [17] with a base temperature of 10 mK and in a superconducting magnet with maximum field of 14 T.
The Pt-Ir tip was cut from a wire and conditioned by field-emission on a Au single crystal prior to measuring. The Sn 4 As 3 sample was cleaved in-situ at low temperatures (T ≈ 20 K).
The bias voltage was applied to the sample. Differential conductance spectra were recorded using a lock-in amplifier (f = 437 Hz), with amplitudes of modulations set at 15 mV and 25 µV for measurements at 11 K and 50-900 mK, respectively.
Angle-resolved photoemission spectroscopy. ARPES measurements were performed at the I05 beamline of Diamond Light Source, UK [18]. Single-crystal samples were cleaved in-situ in a vacuum better than 2·10 −10 mbar and measured at temperatures of 20 K. Measurements were performed using linear horizontal (LH) and linear vertical (LV) polarized synchrotron light with variable photon energy, using a Scienta R4000 hemispherical electron energy analyzer with an angular resolution of 0.2 • and an energy resolution of 20 meV.
Density functional theory calculations. Bulk electronic band structure calculations were performed for the experimental crystal structure of Sn 4 As 3 from Kovnir et al. [11] in the generalized gradient approximation (GGA) using WIEN2k [19], taking into account SOC.
These were used to produce a three dimensional (3D) Fermi surface, as well as 2D cuts at different k z planes. Additionally, bulk and slab calculations were carried out with the Quantum Espresso package [20] using GGA within the framework of Perdew-Burke-Ernzerhof [21], employing optimized norm-conserving Vanderbilt pseudopotentials [22,23], for the same crystal structure as before. We chose a plane wave (PW) cutoff of 80 Ry, Gaussian smearings of 0.02 Ry, and a 24 × 24 × 6 Monkhorst-Pack k-grid to sample the Brillouin zone (BZ). We checked that we reproduce the results for the bulk bands calculated with WIEN2k.
To simulate STM images of the pristine surface and of the surface with a Sn vacancy, we considered a 3 × 3 × 1 supercell, while to simulate STM images with Sn or As vacancies at different subsurface layers, we also considered larger 4 × 4 × 1 supercells. The BZs were sampled using Monkhorst-Pack k-grids (5 × 5 × 1 for the 3 × 3 × 1 supercell, 4 × 4 × 1 for the 4 × 4 × 1 supercell). We chose a vacuum region of 10Å in the case of the slab calculations, SOC was neglected for all STM simulations and all DOS calculations have been performed with a denser 36 × 36 × 8 (36 × 36 × 1 for the slab) BZ grid and a Gaussian smearing of 0.01
Ry.
Specific heat measurements. Specific heat of Sn 4 As 3 crystals was measured by thermal relaxation technique, using a PPMS-9 (Quantum Design) with a 3 He calorimeter. The mass of the sample was m = 11 mg. Measurements were performed at temperatures 0.37-2 K and magnetic fields of 0-20 mT.
Crystal structure and Surface topography
The crystal structure of Sn 4 As 3 belongs to the non-centrosymmetric group R3m, whose hexagonal unit cell is shown in figure 1 (a). The unit cell is composed of three seven-layer blocks of alternating Sn and As layers stacked along the c-axis. Inside each block, pairs of atoms that would otherwise be symmetrically equivalent (Sn1 and Sn2; Sn3 and Sn4; As1 and As3) are inequivalent: the Sn atoms in different layers form distorted octahedra with the surrounding atoms, which are responsible for the lack of inversion centre [11]. The weakest bond in the unit cell occurs between Sn3-Sn4 atoms from two different blocks (indicated by the dashed line in figure 1(a)), with the larger bond distance of 3.24Å, comparable to that observed in other layered SnAs-based materials [14]. Thus, the crystal is expected to cleave well in the (0001) plane, between the Sn3-Sn4 layers. The exposed surface can be either Sn4 or Sn3, which are structurally identical. A Sn4-terminated surface is illustrated in figure 1 figure 4(a). The measurements are consistent with Sn 4 As 3 being metallic, with several bands crossing the Fermi level, in agreement with Kovnir et al. [11]. DFT calculations for the bulk band structure neglecting SOC are shown in figure 4(b). There is good agreement between experiment and calculations over an energy range of several electronvolts, indicating a weakly correlated nature of the Sn 4 As 3 electronic structure. Small discrepancies between experimental data and calculations arise from the strong 3D character of the electronic dispersion and finite k z -averaging in the photoemission experiment. In addition to the bulk bands predicted by the DFT calculations, the measurements show additional bands at energies close to −1 eV (indicated by a white arrow), which are split by ∼ 100 meV. These split bands appear clearly in the slab DFT calculations (red lines in figure 4(c)), similar to those for SnAs [13]. The consistency with ARPES data confirms that they are surface states (SS), which have a splitting of ∼ 108 meV. Including SOC effects in the calculations did not improve the agreement with the experimental data. The DOS of the slab DFT calculation shows two additional peaks, mainly due to contributions from the SS. The STM differential conductance (dI /dV ) spectrum can be taken as proportional to the local DOS. In the dI /dV measurements ( figure 4(d)), two peaks can be identified in the energy range corresponding to the SS, with a splitting of 110 meV (indicated by black arrows), consistent with the splitting observed in ARPES and obtained from the slab calculations. This can be directly compared to the local DOS calculations, which shows the increase in intensity at the SS energies. The STM spectrum shows a depletion of the LDOS around the Fermi level, evidenced by a low differential conductance, which is also consistent with the calculations of the local DOS.
Thermodynamic measurements
Specific heat measurements of Sn 4 As 3 are shown in figure 5. In zero applied magnetic field ( figure 5(a)), a clear jump in C/T is observed at temperatures close to 1.1 K, typical of a superconducting transition. From the local entropy conservation, the critical temperature was found to be T c = 1.14 ± 0.01K, close to the reported values of 1.16 − 1.19 K [9]. The small width of the superconducting transition is indicative of the high quality of the sample.
At low temperatures (well below the Debye temperature) the specific heat of a metal can be written as C/T = γ n + βT 2 , where γ n and β are the electronic and the phonon contributions, respectively. The C/T curve for the normal state at 20 mT field (which fully suppresses the superconducting transition) is shown in figure 5(a). It has a parabolic shape, consistent with a metallic behaviour. A second order polynomial fit (red solid line) yields γ n = 6.66 ± 0.20 mJ mol −1 K −2 and β = 0.933 ± 0.143 mJ mol −1 K −4 5 .
The electronic specific heat, C el /T , can be obtained by subtracting the phonon contribution. Figure 5(b) shows C el /γ n T as a function of T , at zero magnetic field. The jump in specific heat at this temperature is ∆C el = 9.74 mJ mol −1 K −2 . The relative magnitude of the jump, ∆C el /γ n T c = 1.30, is close to the BCS prediction of ∆C el /γ n T c = 1.43. The red line in figure 5(b) shows the fit of the electronic specific heat in the superconducting state derived from the BCS theory: with the temperature dependence of the gap described by where ∆ 0 = rk B T c is the superconducting gap at T = 0 K. f (ε) is the Fermi function and a = 1.138 obtained from fitting the BCS mean field behaviour [4] with equation 2.
Fitting r = ∆ 0 /k B T c yields r = 1.76 ± 0.02 6 in excellent agreement with BCS theory.
Using the measured T c = 1.14 K and the BCS approximation, the superconducting gap is 5 Errors from 95% confidence bounds of the parabolic fit. 6 Error from 95% confidence bounds from the fit.
8 ∆ 0 = 1.76k B T c = 0.177 ± 0.003 meV. Use of more complex models (introducing anisotropy or using two gaps) does not give significant improvement of the fits. The magnetic field dependence of C/T is shown in figure 5(c). SC is found to be completely suppressed already in magnetic fields of H c ∼ 7 mT.
Superconducting gap
In order to obtain further evidence of the superconducting gap structure, we have performed STM/STS measurements at temperatures below 1 K. A well resolved superconducting gap is observed in high resolution tunneling spectra, dI /dV, taken in an energy range of ±1 mV at 50 mK, shown in figure 6(a). The coherence peaks can be clearly identified, while the DOS is completely suppressed around the Fermi energy, as expected from BCS theory with s-wave symmetry. A Dynes equation [24] for a single isotropic gap was fitted to the data, taking into account both thermal and lock-in broadening [17]. The fitting parameters were the superconducting gap ∆ and the electronic temperature, T elec . Here, the additional broadening in the Dynes equation, Γ, was fixed to be very small (Γ ∼ 10 −4 meV). The fit yielded ∆ = 0.182 ± 0.018 meV 7 and T elec ≈ 223 mK. The electronic temperature is dominated by the lock-in modulation (V L =25 µV RMS) used in the experiment, whose contribution to broadening is larger than the thermal broadening. Using the BCS relation ∆/k B T c = 1.76, the gap size yields a critical temperature of T c = 1.20 ± 0.14 K, which is consistent with the reported values [9] and in excellent agreement with the specific heat measurements. broadening. It can be seen that the SC is suppressed already at a temperature of 900 mK, lower than the expected temperature from the thermodynamic measurements. The apparent lower critical temperature can be due to both thermal and lock-in modulation broadening.
The magnetic field dependence of the superconducting gap can be seen in figure 6 (c).
The measurements show suppression of SC above magnetic fields of 8 mT, again consistent with the specific heat measurements. 7 Error from 95% confidence bounds from the fit.
STM topographies show a surface consistent with a cleave between adjacent Sn layers,
where the bond between atoms is expected to be weakest. The surface shows atomic defects that we identify as Sn vacancies in the topmost surface layer from comparison with DFT slab calculations and identification of the defect site. The occurrence of these defects is consistent with the chemical composition of Sn 3.8 As 3 determined from post-growth compositional analysis by EDS.
In addition to these defects in the top surface layer, we find a random distribution of bright triangular patterns. The lack of periodicity indicates that they are not due to the presence of a charge density wave. Additionally, bias dependent imaging and dI /dV spectroscopy maps (not shown) reveal that these are static in energy, suggesting that they are not generated from electron scattering off defects. Simulated STM images from slab DFT calculations show that missing Sn and As atoms from deeper layers produce bright triangular shapes at the topmost layer, which resemble the observed patterns. Thus, we attribute the origin of these patterns as coming from randomly distributed defects throughout different layers of the sample. The less abundant 'triforce' defect does not seem to be captured by these calculations, and is possibly due to an adatom at the surface.
The ARPES measurements confirm the metallic nature of the material with several bands crossing the Fermi level, consistent with tunneling spectra, specific heat and DFT calcula- tions. The Fermi surface shows significant dispersion along all directions, including z direction, which is evidence of a 3D character of the electronic structure despite the layered crystal structure. The ARPES measurements do reveal a pair of surface states that look at first like Rashba-spin split states, but are fully captured in calculations without SOC.
Comparison with bulk and slab DFT calculations reveal that this splitting is due only to the symmetry breaking at the surface.
Despite the non-centrosymmetric crystal structure of the material, the SC properties are found to be fully consistent, within the experimental errors, with what would be expected from BCS theory. The STM tunneling spectra shows a fully formed gap, which is spatially uniform and has the shape characteristic of an isotropic s-wave SC gap. These results follow the trend of other SnAs-based superconductors, where conventional BCS SC with s-wave symmetry has been found [12,13,15,16].
The low upper critical field on its own already provides strong indication that any triplet component of the order parameter is negligible in this system. Taken together with the observations of rather conventional SC in other non-centrosymmetric materials [4,25], this does confirm that to observe a sizeable triplet component of the superconducting order parameter requires a material system where pairing is mediated by a mechanism other than electron-phonon coupling [26,27].
CONCLUSIONS
We successfully synthesized Sn 4 As 3 in the non-centrosymmetric crystal structure (R3m).
Comprehensive characterisation of the normal state electronic structure shows metallic be- For better comparison, bulk and slab DFT DOS and ARPES data were normalised by the intensity at -3.9 eV and at -1.8 V for the STM spectrum. | 2019-12-13T17:40:26.000Z | 2019-12-13T00:00:00.000 | {
"year": 2020,
"sha1": "34f8ccee8b80855a3188dba7e4dcd7536e4b88ba",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/1367-2630/ab890a",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "34f8ccee8b80855a3188dba7e4dcd7536e4b88ba",
"s2fieldsofstudy": [
"Physics",
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
216637831 | pes2o/s2orc | v3-fos-license | A review of the bioelectronic implications of stimulation of the peripheral nervous system for chronic pain conditions.
Background Peripheral Nerve Stimulation has been used to treat human disease including pain for several decades. Innovation has made it a more viable option for treatment of common chronic pain processes, and interest in the therapy is increasing. Main body While clinical data is forthcoming, understanding factors that influence successful outcomes in the use of PNS still needs to be delineated. This article reviews the evolution and bioelectronic principles of peripheral nerve stimulation including patient selection, nerve targets, techniques and guidance of target delivery. We collate the current evidence for outcomes and provide recommendations for salient topics in PNS. Conclusion Peripheral nerve stimulation has evolved from a surgically invasive procedure to a minimally invasive technique that can be used early in the treatment of peripheral nerve pain. This review identifies and addresses many of the variables which influence the success of PNS in the clinical setting.
Introduction
The peripheral nervous system is an integral part of the body's communication with the environment (Kiernan et al., 2013). Touch, proprioception, temperature, and nociception influence our perception of the world (Benarroch et al., 2018). In most situations after mechanical or metabolic trauma, transduction and nociception can be beneficial, informing an organism to retreat or protect. However, persistence in nociception or peripheral nerve dysfunction can lead to the development of chronic pain which can have profound consequences to that organism and its social structure (Campbell, 2008;Costigan et al., 2009). Peripheral changes in chemical mediators can lead to pathological nerve firing, triggering changes in the cell bodies of somatosensory neurons located in the peripheral ganglia (dorsal root or trigeminal ganglia). These cell bodies serve as first pass junctions for the transmitted signal, and may lead to a hyperexcitable state with resultant changes in the area of the peripheral nerve, the peripheral ganglia, the spinal cord, and at the level of the anterior cingulate cortex (Tajerian et al., 2013). Common examples of chronic pain caused by peripheral nerve injury include ilioinguinal and/or iliohypogastric nerve pain after inguinal herniorrhaphy, sural nerve injury after podiatric surgery leading to foot pain, intercostal nerve pain after thoracotomy, and facial pain after ophthalmic infection of herpes zoster (Alfieri et al., 2006;Gerner, 2008;Primadi et al., 2016;Opstelten & Zaal, 2005).
Once chronic pain has developed, the patient may experience allodynia, hyperalgesia, and loss of movement or function (Bennett & Xie, 1988). These problems can lead to suffering and disability, which is a major economic burden to both the patient and the healthcare community (Breivik et al., 2006;Duenas et al., 2016). It is estimated that up to 10 % of those impacted by chronic pain may have an origin related to peripheral nerve pathology (Breivik et al., 2006). The diagnosis is made by pain in the distribution of the peripheral nerve, based on both history and examination. Imaging, such as MRI or ultrasound along with diagnostic tests, such as electromyelogram or nerve conduction studies are often used to aid in diagnosis (Rangavajla et al., 2014;Aminoff, 2004). The interventional pain physician may also do diagnostic peripheral nerve blocks using local anesthetic(s) to aid in diagnosis (Bates et al., 2019).
Treatment options for peripheral nerve injury include: physical therapy, oral medications, ablative therapies (e.g. thermal or chemical) and neurostimulation (Bates et al., 2019). If patients do not significantly improve with conservative therapies, PNS is a viable nerve treatment option to treat pain secondary to peripheral nerve injury (Van Buyten et al., 2015;Nagel et al., 2014;Pereira & Aziz, 2014). However, for PNS to be successful, understanding of outcomes is not sufficient. Thus, in this review we discuss the principles that govern successful outcomes for PNS. These principles include understanding the usefulness of a trial prior to an implant, optimal stimulation parameters, appropriate image guidance for optimal lead placement, and specific neural targets which have evidence for success. PNS is considered a low risk procedure, but also has a lower threshold for efficacy, based on limited high evidence studies (Deer et al., 2016).
Historical context
PNS was first used in Philadelphia, Pennsylvania 60 years ago to treat pain in the head and neck (Shelden, 1966). The original approach described by Wall and Sweet was an open approach where the surgeon dissected the tissue to visualize the nerve and apply the lead (Long, 1977). This approach was modified and involved placing a transposed fascial graft between the nerve and lead, though fell out of favor due to the complexity of the surgery and adverse effects (Law et al., 1980). This procedure fundamentally changed in 1999, when Weiner described using percutaneous leads originally designed for the spinal cord to treat occipital nerve-induced headaches (Weiner & Reed, 1999). Eventually, a randomized multicenter prospective study was performed to access efficacy and safety of PNS for the migraine indication (Saper et al., 2011). Unfortunately, this potential landmark study failed to meet the primary end point and brought attention to major adverse events such as lead erosion and lead migration (Saper et al., 2011).
The first clinical study to discuss a device strictly designed to stimulate a peripheral nerve was published by Deer and colleagues as an Investigational Device Exemption (IDE) study on peripheral nerve pain (Deer et al., 2016). This device, the Stimrouter (Bioness, Valencia, California), obtained Food and Drug Administration (FDA) approval for PNS in the trunk and limbs. The differentiating feature was a small implantable tined lead, with a pickup contact that would be used with an external peripheral nerve generator (Pereira & Aziz, 2014). In the past decade, additional innovation has led to miniaturization of technology, education, improvement in placement based on imaging, and additional studies on efficacy and safety (Mansfield & Desai, 2020;Tubbs et al., 2015;Shaw et al., 2016).
Potential mechanisms of peripheral nerve stimulation
PNS is the application of an electric field to a nerve or group of nerves including and/or distal to the dorsal root or trigeminal ganglion (Abejon & Perez-Cajaraville, 2011). Known as the gate control theory, peripheral nerve fibers, including A-alpha, A-beta, A-gamma, Adelta, various B and C fibers, are modulated when an electric field is applied (Melzack & Wall, 1965). The theory suggests that the paresthesia is elicited by an activation of the A-beta fibers which in turn activate the inhibitory interneurons and inhibit the C fibers from carrying afferent nociceptive input. In 1984, Chung et al., provided basic science research using a primate model substantiating this theory (Chung et al., 1984). This afferent inhibition has been the theoretical foundation in peripheral nerve stimulation.
In 2004, additional work by Chae et al. demonstrated that positive efferent stimulation could have an impact on pain perception in human subjects, improving pain and function (Chae et al., 2005;Qiu et al., 2019). This work also supports that PNS may affect the local concentration of biological mediators in the peripheral nervous system target. Chronic pain arising from the peripheral nerve increases the local concentration of mediators such as endorphins and prostaglandins, which leads to increases in blood flow. PNS may have a direct effect on reducing this increased concentration of bioinflammatory mediators, blood flow and pain transmission (Papuc & Rejdak, 2013).
In addition to the peripheral mechanisms discussed here, there are additional theories suggesting there may be spinal or central neural pathways that lead to changes in the pain pathways (Vartiainen et al., 2009;Flor, 2002). Further human studies are needed to authenticate the potential mechanisms. fMRI research in animal models has revealed that the prefrontal cortex and limbic system change in their metabolism and scale after stimulation of peripheral nerves (Long, 1977;Weiner & Reed, 1999;Saper et al., 2011).
Electrical programming variables, such as choice in frequency, pulse width and intensity, have evolved over time. In general, low frequency tonic stimulation to activate either motor or sensory fibers, has been the standard for PNS waveform selection. Recently, higher frequencies (1 to 10 KHz) and novel waveforms, burst therapy, have influenced the spinal cord in unique ways (Amirdelfan et al., 2018;North et al., 2016;Manning et al., 2019). Novel high frequencies have been applied to PNS resulting in a blockade of nerve conduction using 10Khz frequency (Gilmore et al., 2019b).
While significant animal and computer modelling work has been done in the field of PNS, the bioelectronic principles to govern PNS waveform choices clinically are in its infancy. Once implanted in a patient, PNS systems are typically used to stimulate sensory nerve fibers, such as the tibial nerve. Interestingly, the use of PNS on the tibial nerve may result in treatments for sensory pain relief of the pelvis and foot, motor dysfunction, and in some settings for bladder dysfunction (van Balken et al., 2003;de Wall & Heesakkers, 2017). These results are not fully understood. Furthermore, optimal parameters to maximize the potential of PNS is still being defined in the clinical setting. Stimulation effects on a nerve are impacted by distance from the stimulator to the nerve, the impedance of the tissue, and the quality and consistency of the current delivery. Lead shape, consistency of the signal, and electrical field produced by the PNS system may result in unique and different effect on nerves, even if stimulation parameters are similar (Frahm et al., 2016;Morch et al., 2014). In the end, these bioelectronic effects on nerves are only beginning to be elucidated.
Neural targets for headaches and facial pain
The greater and lesser occipital nerves, which are branches of the C2 and C3 nerve roots, have long been a target for treatment of certain headache disorders, chronic post-traumatic pain and occipital neuralgia (Slavin et al., 2019). The occipital nerves innervate the skin overlying the occiput, and have their axons originating in the trigeminocervical complex. Stimulation of these nerves may result in treatment of headache disorders (Bartsch & Goadsby, 2003). Stimulation of the greater, the lesser, and the third occipital nerve with lead placement above and/or below the nuchal line may potentially stimulate one or all three of the occipital nerves (Hayek et al., 2009).
Similarly, supraorbital and infraorbital nerves may be targeted with stimulation techniques (Antony et al., 2019). The supraorbital nerve has a lateral branch, which provides sensory innervation to the skin of the forehead, and a medial branch, which innervates the nose, medial part of the upper eyelid, and medial forehead (Deer et al., 2016;Chung et al., 1984;Chae et al., 2005;Qiu et al., 2019;Papuc & Rejdak, 2013). The infraorbital nerve is a terminal branch of the maxillary nerve and has a sizable area of innervation. This includes sensation to the lower eyelid, the lateral nose, and the upper lip. Current literature supporting the use PNS for these nerves is limited to case series and low level evidence (Antony et al., 2019).
Targets for neuropathies of the upper extremities
Mono-neuropathic pain syndromes of the upper extremity are potential targets for PNS. Common causes of nerve pathology include trauma and post-surgical nerve compression. The ulnar, median, and radial nerve can all be targets for PNS, but it may be technically difficult to place PNS leads due to proximity to the elbow and wrist (Arias-Buria et al., 2019). Ultrasound guidance is particularly helpful in mapping out lead placement and power source placement for the distal upper extremity (Huntoon et al., 2008).
The presence of shoulder pain is common in clinical practice and occurs from a variety of pathologies including osteoarthritis, trauma, post-surgical pain, and poststroke pain. The axillary nerve may be stimulated to improve lateral shoulder pain and motor functional abnormalities, while the suprascapular nerve may be stimulated to improve glenohumeral joint pain and motor pathology of the supra and infraspinatus muscles. The suprascapular nerve can be modulated in the suprascapular notch above the scapular spine to target pain arising from the shoulder joint or from below the notch coming in from a lateral to medial approach (Gofeld & Agur, 2018). Ultrasound guidance has typically been used to identify the suprascapular notch and visualize, and thereby stimulate, the suprascapular nerve and artery, with evidence of efficacy the post-surgical setting. Fluoroscopic guidance may assist in optimal stimulator placement (Kurt et al., 2016). Similarly, the axillary nerve can be modulated by placing a PNS electrode near the nerve either at the quadrangular space or at the surgical neck of the where the nerve and circumflex humeral artery are visualized (Wilson et al., 2018). Research in shoulder pain is a topic of critical interest, and data is improving for both post stroke and degenerative shoulder disease (Mansfield & Desai, 2020).
Neural targets for lower extremity neuralgias
When targeted either individually or in conjunction, neuromodulation of the femoral and sciatic nerves has resulted in significant improvement in patients suffering from phantom and residual limb pain after amputation (Gilmore et al., 2019a). PNS of the femoral nerve has resulted in treatment of post-operative pain relief after knee surgery (Ilfeld et al., 2019). Targeting the sciatic nerve independently for phantom limb pain has been an established therapy (Rauck et al., 2014). An interesting finding from Rauck et al. was determining the optimal distance the stimulating device is from the sciatic nerve. Optimal stimulation may occur 1 to 1.5 cm away from the sciatic nerve, which contradicts earlier techniques which place the stimulating leads next to a nerve.
Some interesting targets have emerged from work in this field. Finch et al. describe various lower extremity targets, including the sacral S1 nerve root using a novel 10 KHz waveform for pain relief (Finch et al., 2019). The tibial nerve has become a unique target because it is part of the craniosacral autonomic parasympathetic nervous system, and has both motor and sensory function. Lead placement is typically parallel to the nerve with the pulse generator or battery source on the calf. Current evidence may support PNS of the tibial nerve as a treatment option of overactive bladder (de Wall & Heesakkers, 2017;Staskin et al., 2012). A similar case may be made for stimulation of the saphenous nerve. Initial reports for the saphenous nerve target for overactive bladder may make PNS a useful modality in the future (MacDiarmid et al., 2018).
Novel neural targets targeting the abdominopelvic wall
Common peripheral nerve targets for chronic inguinal or groin pain, the ilio-inguinal, genito-femoral and iliohypogastric nerves, are easily localized with ultrasound guidance and may be potential targets for peripheral nerve stimulation (Tubbs et al., 2015). Post-operative pain is the most common cause of inguinal, iliohypogastric and genitofemoral nerve pain, usually resulting from herniorrhaphy, hysterectomy, or iliac bone graft (Bouche, 2013). Surgical implantation of PNS systems has resulted in improvement for patients suffering from these ailments (Shaw et al., 2016). Whereas surgery may be a possible technique to identify these set of nerves, ultrasound guided procedures may be preferable to both physicians and patients. Elahi et al. describe a technique placing 2 eight contact electrodes in eight patients resulting in significant pain relief in a small cohort of patients (Elahi et al., 2015).
Guidance methods for lead placement
Guidance methods for lead placement have changed with the evolution of PNS over time. As mentioned previously, direct cutdown to the nerve was used in the 1960s to facilitate placement of a paddle or percutaneous lead in close proximity to the target nerve under direct visualization (Costigan et al., 2009). Open lead implantation had several disadvantages including significant postoperative pain and scar tissue buildup (Costigan et al., 2009).
Guidance for peripheral nerve blockade has evolved providing several modalities to facilitate minimally invasive percutaneous approaches to lead placement for PNS. The PNS lead, consisting of a single or multiple electrodes, is implanted in close proximity to a target peripheral nerve in the perifascial plane without the need for a large surgical incision (Costigan et al., 2009;Saper et al., 2011). The battery source used to stimulate the lead is either implanted or remains external (Costigan et al., 2009;Tajerian et al., 2013). Confirmation of optimal peripheral nerve localization should occur through nerve stimulation with a goal of having the lead be parallel to the nerve (Breivik et al., 2006).
Traditional landmark-based techniques fail to account for anatomic variation which frequently occurs in the course of peripheral nerves. This is of particular concern in individuals with peripheral nerve injury resulting from an operation. As a result, normal anatomic landmarks may no longer indicate the location of peripheral nerves. Nonetheless, anatomic landmarks can facilitate peripheral nerve localization by providing a starting point for further identification. In the case of chronic neuropathic post-amputation pain, the femoral nerve is typically identified 1 to 2 cm distal to the inguinal crease and the sciatic nerve is typically identified by palpating the greater trochanter and ischial tuberosity as landmarks (Breivik et al., 2006). When lead placement requires superficial placement, as in the case of craniofacial stimulation, the use of external landmarks facilitates lead placement.
The use of fluoroscopic guidance for PNS lead placement has been described for a number of targets (Long, 1977;Deer et al., 2014). Fluoroscopic guidance allows for visualization of bony landmarks in the vicinity of the target nerve. The PNS lead can be advanced under intermittent visualization and adjusted as needed with clear documentation of final placement. Compared to ultrasound guidance, fluoroscopy does not allow for visualization of vascular structures or the nerve target itself. Further research is needed to determine the comparative effectiveness and safety of the various imageguided techniques.
Imaging modalities are often combined during PNS lead placement (Weiner & Reed, 1999). For instance, fluoroscopic guidance can be used to initially identify a target lumbar level for stimulation of a lumbar medial branch, with subsequent use of ultrasound guidance for PNS lead placement. Then, fluoroscopy can be used for confirmation of final lead placement at the target lumbar level. A similar sequence of imaging modalities can also be used to identify a target intercostal space for stimulation of a given intercostal nerve via fluoroscopy, with PNS lead placement via ultrasound guidance, and confirmation of final lead placement at the target intercostal space via fluoroscopy (Weiner & Reed, 1999).
An example of using only fluoroscopy for PNS lead placement is seen in the head and neck patient population. When placing a lead for trigeminal branch stimulation of the supraorbital or infraorbital nerves, a Touhy needle can be advanced subcutaneously under intermittent fluoroscopic guidance to the supra-or infra-orbital regions either 1 cm above or below the orbital rim until the distal aspect of the lead reaches the medial border of the orbit (Costigan et al., 2009). Fluoroscopic guidance has been similarly described for placement of occipital leads (Weiner & Reed, 1999). The approach is described as a horizontal placement across the greater, lesser, and the third occipital nerves with tunneling towards the parieto-occipital region (Chung et al., 1984). Compared to fluoroscopy alone, the combined use of fluoroscopy and ultrasound guidance for occipital nerve stimulator implant has not been associated with increased lead survival (Deer et al., 2014).
Nerve stimulation remains a cornerstone for lead placement throughout the implantation process alone or in combination with other guidance methods (Breivik et al., 2006;Saper et al., 2011). Most commonly, ultrasound or fluoroscopic guidance is used for initial target localization, with use of nerve stimulation for confirmation. Thus, nerve stimulation assures identification of the correct peripheral nerve target prior to final implantation. Throughout lead placement, the amplitude needed to produce a paresthesia in the sensory distribution of the target nerve signals proximity to the nerve target. Thus, a higher mA output needed to produce a paresthesia indicates the stimulation probe or PNS lead is further from the intended target (Deer et al., 2016). Specifically, PNS should be targeted to the area where nerve connections are still intact, typically corresponding to the hyperalgesia in the region around the area of allodynia (Costigan et al., 2009;Long, 1977). Targeting paresthesia coverage over the area of allodynia may lead to failure of PNS due to targeting of damaged nerve tissue (Long, 1977).
Nerve stimulation may serve an important role to mimic the outcome of PNS lead placement during the implantation process. Based on the response to nerve stimulation, lead placement can be adjusted incrementally to optimize lead placement. During lead placement for post-amputation pain, the desired response to nerve stimulation is the production of comfortable paresthesia in the amputated foot or leg with minimal subcutaneous sensations proximal to the lead in the skin over the upper thigh or buttock (Deer et al., 2016). This corresponds to PNS of the nerve rather than the activation of subcutaneous afferents superficial to the electrode. By minimizing incisions around the proximal receiver end during percutaneous PNS lead implantation coupled with an external pulse transmitter (Chung et al., 1984), it is now possible to provide PNS in the recovery room immediately after implantation.
Aside from the use of nerve stimulation to produce a sensory paresthesia, PNS lead placement can also be confirmed with corresponding motor stimulation. For example, to target the medial branches of the dorsal rami nerves under ultrasound guidance, nerve stimulation with selective activation of the lumbar multifidus muscles with corresponding comfortable contractions overlapping the painful region have been used to guide PNS lead implantation (Gilmore et al., 2019b). Nerve stimulation has been described as an important technique for lead placement to treat hemiplegic shoulder pain. Monopolar needle electrodes are inserted perpendicular to the skin to localize the axillary nerve along the middle and posterior deltoids. A point between these locations is identified with adjustment of needle position and depth until both heads of the deltoids contract with full reduction of subluxation. The PNS lead is then inserted to the location indicated by the monopolar needle electrode (Chae et al., 2005).
One of the greatest advancements in the development of peripheral nerve stimulation over the past two decades has been the improvement in quality and cost reductions in ultrasound-imaging technology. This technology, carried over from the concepts of regional anesthesia, has provided the ability for a clinician to visualize a nerve and the adjacent structures to avoid, all in real time (Marhofer & Chan, 2007). Furthermore, ultrasound machines are increasingly portable allowing providers to carry them to any location. The use and education of ultrasound has increased significantly in anesthesiology, particularly regional anesthesia, and neuromodulation (Melnyk et al., 2018). In a more widespread fashion, medical education continues to emphasize the use of ultrasound in cadaveric teaching and the instrument is becoming as accepted to the modern-day physician as the stethoscope (Hoppmann et al., 2011).
Understanding of visual and tactile anatomy is still extremely important in ultrasound-guidance. It provides a place to initiate imaging. Sonographic anatomy is dependent on understanding relationships among anatomic structures. These relationships can help "triangulate" certain structures that may appear different in varying individuals (Ihnatsenka & Boezaart, 2010). Peripheral nerves often travel adjacent to arterial structures. Therefore, the use of doppler or color modes can help identify vascular structures that queue in on a nerve target. Also, these vascular structures should be avoided so that proper electric field can provide benefit, and so there is no increased bleeding from the procedure itself (Chan et al., 2010).
During needle entry, visualization can be optimized with echogenic needles and or with beam-steering to enhance the visualization of the needle (Hebard & Hocking, 2011;Prabhakar et al., 2018). The needle can be positioned near the nerve in an orthogonal trajectory or in parallel. Although distance is determined by which system the operator chooses, it is important to avoid mechanically damaging the nerve, which can be visualized with ultrasound imaging when in-plane. Depending on the peripheral nerve stimulator system, it is generally desired to place a lead parallel along the nerve so that if migration does occur, there is less risk of losing therapeutic benefit (Frahm et al., 2016). Another advantage of ultrasound guidance is real-time visualization during hydro dissection to avoid damage to neural and vascular structures in close proximity to the PNS target. This technique has been described to facilitate percutaneous implantation of PNS leads close to the suprascapular nerve or the cervical nerve roots within the brachial plexus to treat chronic neuropathic pain of the upper extremity (Tajerian et al., 2013).
To trial or not to trial
Unlike spinal cord stimulation, a consistent technique for trialing PNS has not been developed. Part of this is dependent on current technology constraints, costs, and partly due to our limited understanding of the long-term mechanisms of action which may determine efficacy of nerve stimulation . This section will review current recommendations and thoughts on potential benefits of trialing techniques.
One common strategy is to employ a local anesthetic block of a peripheral nerve which may blunt the transmission of the pain signal (Chung & Spangehl, 2018). The rationale in this theory is that a conduction block will replicate PNS, however this may be flawed since current cathode stimulating devices either activate motor or sensory fibers as opposed to block conduction (Pereira & Aziz, 2014). Thus, while a test block may alleviate pain, electrical stimulation may or may not result in the same response or vice versa (Sweet, 1976). It is important to think of the local anesthetic block as part of the diagnostic workup, but not a prognostic factor to whether neurostimulation will be successful.
Despite the lack of correlating evidence between the block and PNS some benefits of the local anesthetic block exist. First, the block may determine appropriate targets to apply electrical activity (Manning et al., 2019). This may include an understanding of anatomy surrounding the nerve, especially if ultrasound is used. Second, newer technologies which may utilize anodal blockade of neural current may replicate a local anesthetic blockade (Gilmore et al., 2019b). Finally, if a local anesthetic block did not alleviate pain, perhaps another target may be considered, with a caveat that electrical stimulation may still be effective regardless of results of the local anesthetic block. Local anesthetic block should be considered for patient confidence in an appropriate target, not necessarily for efficacy of PNS on the same neural target.
A simple strategy for trialing may be the application of TENS units near the area of perceived pain. Similar waveforms, motor or sensory frequency and pulse width, may be considered to replicate the waveforms of the PNS system. Unfortunately, direct stimulation of a peripheral nerve may distribute the signal in a different dermatomal pattern that is achievable by TENS. TENS trials have no definitive relationship correlation with success in spinal cord or peripheral nerve stimulation (Kirsch et al., 1975;Picaza et al., 1977;Schwarm et al., 2019). Thus, TENS trial is not routinely recommended for consideration of success of PNS systems.
Similarly, little evidence exists for the use of percutaneous electrical nerve stimulation (PENS) as a surrogate trial for PNS. While direct stimulation of a nerve may be possible, replicating the same waveform pattern is technically difficult. Furthermore, a short PENS treatment may not elicit the long term changes a PNS system may induce, leading to possible false negatives. While not described in the literature, the authors of this section have had negative experiences anecdotally with this trialing method, thus leading to PENS not being actively recommended as a trial for PNS.
Currently, spinal cord or dorsal column stimulators initially have a trial period where a patient gets to experience neurostimulation without a surgical cutdown. One advantage of this technique is one will be able to mimic the proprietary waveform for each technology in that individual's daily life (Amirdelfan et al., 2018;North et al., 2016). Peripheral nerve stimulator trials can also be performed for 3-60 days, although long-term effects of PNS will not be determined. These trials can give the patient the experience of managing the device, as well as functional improvements, quality of life improvements and analgesia associated with the stimulation. Rauck et al. describe a two-week trial targeting the femoral or sciatic nerve for the use of PNS in the post-amputee population (Rauck et al., 2014). Dodick DW et al. describes a prescreening trial for a quadripolar lead for the treatment of migraines by targeting the occipital nerve (Dodick et al., 2015). The pre-screen eliminated 20 patients from a sample size of 177 patients who did not have 50% reduction in pain or paresthesia coverage thus reducing the cost of unnecessary implantation with explantation. Deer et al. described a PNS system that has the trial targeted on the day of the permanent implant informing us that different technologies warrant different considerations with regards to trialing (Deer et al., 2016).
Evidence for the use of peripheral nerve stimulation PNS utilization in named peripheral nerves produce consistently high success rates in achieving pain relief in well selected patients when delivered by skilled clinicians (Pereira & Aziz, 2014). Since the initial percutaneous devices were described there have been numerous case reports and case series published exemplifying the efficacy and safety of peripheral nerve stimulation (Chakravarthy et al., 2016;Manchikanti et al., 2014). The evidence grading (Table 1) can be used to examine the current state of the field. (Table 2). Ref Manchinkotti.).
The occipital neurostimulation study, ONSTIM, was a prospective single-blind randomized study that enrolled 66 patients suffering from chronic migraine. The responder rate was 39% in the stimulation group and 6% in the preset arm. Success was defined as more than 50% reduction in their headache days or three point improvement in VAS (Saper et al., 2011).
Deer et al. published results of their prospective multicenter randomized double-blind partial crossover study, a total of 147 patients were consented and screened for the study, 3 months after randomization to treatment, active stimulation arm achieved a statistically significant higher response rate of 38% vs the 10% rate found in the control group (p = 0.0048). The treatment group specifically, reported mean pain reduction of 27.2% from baseline to 3 month follow-op compared to 2.3% reduction in control group (p < 0.0001). There were no adverse events reported (Deer et al., 2016;Deer et al., 2010;Deer et al., 2012;510(k) Premarket Notification, 2019a). Rauck et al. performed a prospective open label feasibility study with 16 patients suffering from postamputation pain syndrome. Targets were sciatic and femoral nerves statistically significant improvement was shown in quality of life and decrease in Beck Depression Inventory score (Rauck et al., 2014). A novel system was studied by Gilmore et al. prospectively for up to 60 days in the back and/or extremities for symptomatic relief of chronic intractable pain, postsurgical and posttraumatic acute pain (510(k) Premarket Notification, 2019b). The lead used had a novel structure that made 60 day trial possible with a reduction in infection risks. Table 3 summarizes some of the seminal prospective studies in the field of peripheral nerve stimulation.
Future directions of peripheral nerve stimulation
Recently, several PNS devices have come to market which are specifically designed to target a peripheral nerve (Pereira & Aziz, 2014;Tubbs et al., 2015). Historically, PNS was performed utilizing dorsal column spinal systems and bulky internal pulse generators (Antony et al., 2019). Newer systems using lower profile microleads, miniaturized internal pulse generators, and external pulse generators have decreased the invasiveness and improved safety and efficacy of peripheral nerve stimulation. As a result, outcomes for peripheral nerve stimulation by focusing on novel targets is an area of intense research and clinical focus (Finch et al., 2019). The future use of these devices are in line with the discussion of targets such as axial back stimulation, facial nerve stimulation, and novel non-pain indications such as posterior tibial nerve for incontinence, and hypoglossal nerve for sleep apnea (de Wall & Heesakkers, 2017;Deckers et al., 2018;StimRelieve, LLC, 2019;Eastwood et al., 2011). Of great interest is utilizing novel waveforms and frequencies which improved outcomes in spinal cord stimulation (Finch et al., 2019) and applying them to PNS. A pivotal study is underway evaluating high frequency stimulation of the sciatic nerve for phantom limb pain, based on a feasibility study by Soin et al. (Soin et al., 2015). Given the technological advancements, expanded indications, new targets, and recent improvements in reimbursement, PNS is well positioned to be an important part of future treatment strategies.
Conclusion
The use of PNS has been an option for over 60 years for physicians offering neuromodulation to patients. Despite previous PNS therapies, we are currently entering a renaissance which is inspired by new devices engineered for specific peripheral use, improved guidance to place the leads, and an improvement in safety. The greatest need to further improve this biolelectronic therapy is in researching outcomes to help us better understand proper use and patient selection. This review identifies principles that a practitioner needs to address prior to implanting a PNS system in a patient. Best use of image guidance, selection of neural target, optimizing stimulation parameters, understanding trialing benefits and specific techniques are critical for the success of PNS outcomes in the clinical setting. | 2020-04-29T05:06:34.004Z | 2020-04-24T00:00:00.000 | {
"year": 2020,
"sha1": "5deed7004809a5af7c8cfef10320ef84f37a4eb2",
"oa_license": "CCBY",
"oa_url": "https://bioelecmed.biomedcentral.com/track/pdf/10.1186/s42234-020-00045-5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ddc66e13f1072a2bdb7637add71f49cf25605e92",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
40232950 | pes2o/s2orc | v3-fos-license | Effects of Heat Treatments on the Thermoluminescence and Optically Stimulated Luminescence of Nanostructured Aluminate Doped with Rare-Earth and Semi-Metal Chemical Element
© 2012 Tatumi et al., licensee InTech. This is an open access chapter distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Effects of Heat Treatments on the Thermoluminescence and Optically Stimulated Luminescence of Nanostructured Aluminate Doped with Rare-Earth and Semi-Metal Chemical Element
Introduction
Ionizing radiation dosimetry plays a very important role in several fields, useful in the ordinary life, such as radiotherapy, nuclear medicine diagnosis, nuclear medicine, radioisotope power systems, earth science, geological and archaeological dating methods, etc.
The phenomenon of Thermoluminescence (TL) has been known since 1663, when Robert Boyle notified the "Royal Society" in London, which observed the emission of light by a diamond when it was heated in the dark [1]. Afterwards, a large number of scientists began to work with TL; as did Henri Becquerel, whose work described IR measurements spectra [2] and the effect of TL, too. Marie Curie, in 1904, noted that the TL properties of the crystals could be restored by exposing them to radiation from the radio element mentioned in her doctoral thesis. In the middle of the 1930's and 1940's, Urbach performed experimental and theoretical work with TL [1] and in 1945 Randall and Wilkins developed a first theoretical model [3] of thermoluminescent emission kinetics.
The use of thermoluminescence in dosimetry date from 1940, when the number of people working on places with radiation sources such as hospitals, nuclear reactors etc. exposed to ionizing radiations (γ-rays, X rays, α and β-particles, UVA and UVB) increased and efforts to develop new types of dosimeters began [4]. Among the pioneers of TLD we have Daniels, 1953, with LiF, Bjarngard, 1967with CaSO4 and Ginther and Kirk (1957 with CaF2. After these works search in other materials such as natural fluorides or synthetic as, LiBO3:Mn, CaF2:Dy, CaSO4 and MgSiO4:Tb. They usually obtained as monocrystalline samples Czochralski, Bridgmann, etc. However, Cameron, 1961, with their research on the application of LiF: Mg and Ti obtained the first thermoluminescence dosimeter (TLD-100) [5], which is still one of most popular TLD phosphor, due to tissue equivalent Zeff = 8.04, which is an important characteristic for personal dosimetry. Akselrod et al., 1990 [6] carried out studies on TL properties of carbon doped Al2O3 (TLD-500), a very sensitive material to radiation exposure, showing few TL peaks, with dose interval of detection between 0.05 µ to 10 Gy and fading rate of 3% by year (when kept in the dark). This high sensitivity of the material is attributed to oxygen vacancies created during the crystal growth procedure; the electrons can be trapped at these vacancies creating Fand F + centers, which act as recombination centers yielding a bright emission.
In order to increase the luminescence emission response with dose of some thermoluminescence dosimeters (TLD), heat treatments procedures were frequently performed. Halperin et al., 1959, [7] noted that the thermal treatment enhanced the intensity of various TL glow peaks of NaCl by factors of a few thousands; on prolonged heat treatment. They observed that the intensity of TL peaks locate above RT decreased, while those at lower temperatures continued to grow even after 80 hours of heat treatment at 550 °C. Mehendru, 1970 [8], studied the effects of heat treatment on the TL response of pure KCl; they associated the peaks at 95, 135, and 190 °C with the F centers, these last created due to the background divalent cation impurities, and with the first-and the second-stage F centers, respectively. Kitis et al., 1990 [9], studied the sensitization of LiF:Mg, Ti as a function of irradiation at elevated temperatures, pre-irradiation annealing, and post irradiation annealing between 150-400 °C ; the results showed that the first and third conditions cause an enhancement of the sensitivity; after more two works about preheating and high temperature annealing on TL glow curves of LiF:Mg, Ti [10] and in LiF TLD100 [11] were published. Holgate, 1994, [12] investigated TL and radioluminescence (RL) spectra of calcium fluoride samples doped with neodymium and variations of spectra with Nd concentrations and thermal treatments were observed. Nowadays, it is possible to find oxides, sulfates, sulfides and alkali haloids doped with rare-earths and transition metals as commercial dosimeters.
Nowadays, luminescent dosimetry materials can be used in personal dosimetry, radiotherapy, nuclear medicine and diagnostic and environmental dosimetry they are widely used due to high sensitivity, linear response to the dose, the response is independent with radiation energy (within a certain range), their reusability, etc.
The aim of this chapter is to present a very comprehensive research about new materials consisting in aluminate crystals doped with rare-earths, for radiation dosimetry using TL and OSL.
Some features on fabrication of aluminates dosimeters will be shown relating the luminescence response according to the relative concentration of several rare-earths and transition metals. A study in nanoscale effects, size, shape and surface morphology using TEM, SEM, EDS and electron diffraction measurements will be shown too. The physicochemical properties of the doped materials are strongly related to the fabrication process as well the experimental parameters as temperature of thermal treatments, calcination time, heating rates, etc.
As materials science has developed down to nanoscale, the exceptional properties of nanoscaled rare-earth materials are only now being recognized and performed intentionally.
Experimental part
Polycrystalline powder samples of α-Al2O3:Er, Yb; Mg; Tb and Nd were obtained by sol-gel and Pechini process. In the sol-gel procedure stoichiometric amounts of tri-sec-butoxide of aluminum was dissolved in distilled water and hydrochloric acid. The dopants Er, Yb, Nd and Tb oxides were added during the sol stage, with different concentrations. Some portions of the resulting powder were calcinated at different temperature from 1200 to 1600 °C. Experimental parameters of the calcination process, as heating and cooling rates and set point values were varied, in order to verify the effect on the luminescence response.
Pechini is a chemical routine that produces, at the end of the stage, an organic polymer with metallic ions, which will be responsible for the formation of the desired material. The polymer is obtained after low temperature reaction among ethylene glycol, citric acid and aluminum nitrate. Once the polymer is ready, a number of heat treatments are carried out in order to (1) collapse the polymeric structure and allow the gradual oxidation reaction of the metallic ions with atmospheric oxygen, and (2) obtain the desired structure of the material. This technique is known to obtain uniform composition and controlled grain size distribution, due to the slow oxidation reaction and the viscosity of the polymer, which avoid precipitation.
The morphological characteristics of the samples were analyzed using a Philips CM200 TEM equipped with EDS operating at 160 keV, some Cu contamination from the sample holder can be observed in the all the EDS results, the samples were located at 400 mm from the source. The X-ray powder diffractions were recorded with the MiniFlex II model diffractometer of Rigaku Corporation.
TL and OSL measurements were performed in an oxygen-free nitrogen atmosphere using Daybreak Nuclear and Medical Systems Inc, model 1100-series TL/OSL reader and RISØ TL/OSL reader Model DA -20.
TL was detected using the BG-39 (340-610 nm) optical filter and heating rate of 10 °C/s. OSL measurements were made using an array of blue (470 nm) LEDs for sample stimulation and detected in the UV with a Schott U-340 optical filter.
The irradiations were performed at RT in a 60 Co source with dose rate of 28.7 Gy/h and with a beta source ( 90 Sr/ 90 Y) coupled to the RISØ TL/OSL reader, with dose rate of 0.08 Gy/s. Figure 1a shows the XRD of pure alumina produced by sol -gel without any dopants. A very well agreement with standard α-Al2O3 pattern is observed (Figure1a). On the other hand, doped samples show additional peaks related with new structures formed by the dopants and the alumina matrix, for example, in the case of Tb doped samples it was verified the Tb3Al5O12 crystalline structure (Figure 1b), and for Nd doped sample the AlNd structure ( Figure 1c).
XRD
Sample doped with Er and Yb and calcinated at 1200 °C supplied a broad background, related to amorphous phase, and many other peaks associated to Yb2O3, Er2O3 and Yb3Al5O12. For the sample calcinated at 1600°C it was not observed the background, but the peaks of Yb2O3, Er2O3 with predominance of the Yb3Al5O12 (Figure 1d and e) were noted. (Figure 2d), most of the material is converted to α-Al2O3, except for a few low intensity peaks related to the occurrence of magnesium spinel (MgAl2O4). This observation will be corroborated in the next section through TEM images.
Thermoluminescence
TL glow curves of samples obtained with different calcinations temperatures, and detected in UV and VIS regions are shown in Figure 3a and 3b, respectively. It can be seen that calcinations at 1600 °C favored the increase of 190 °C TL dosimetric peak and diminution on intensity of high temperature peak simultaneously. The TL intensity in VIS region is higher than those found in the UV one. In the UV region, the thermal treatment of 4 h promoted a high increment of the 190 °C peak while in VIS region, the time was of 8 h. It is known from literature that the emission mechanism of these two luminescence regions is different. In the case of UV emission, the responsible is the F + center according the mechanism: Where the recombination of the F center with a hole (h + ) generates an excited F +* , which decays into the ground state (transition 1B → 1A) emitting a photon at 325 nm. Therefore, the calcinations at 1600 °C stimulated an increase of F centers concentration.
In the other case of VIS emission, it is believed that the luminescence occurs as follows [31,32]: F + e → F * → F + hν heating process, TL Where F center loses an electron after absorption of high energy radiation and become F + center. On thermal stimulation, the recombination of the electron and the F + center produces an excited F center (F*), which decays into its ground state (3P transition state → 3S) with the emission of photons at 410-420 nm.
Therefore, following our results the long set point time (~ 8 h) favored the electron traps formations, and in the case of UV emission we have the great rate of formation of hole traps after 4 hours of calcinations. Figure 4 shows TL glow curves of pure alumina obtained with different heating and cooling rates. In all cases the slow rate of 3 °C/min supplied the best result, confirming that longer calcinations time can promote a better diffusion of the defects and ions and also eliminates the internal tension forces in the crystalline lattice, which can homogenize the crystal.
TL glow curves of the samples doped with Er and Yb are shown in the Figure 5a and 5b. Samples calcinated at 1200 °C supplied one peak at high temperature (Figure 5a), which did not increase proportionally to the dose and another peak at low temperature region with very low intensity. After calcinations at 1600 °C, two prominent peaks at 224 °C and 442 °C were observed (Figure 5b). For the sample doped with Er (1 mol %) and Yb (2 mol %), the peak temperature changed to 203 °C and an increment about 1.4 time in the TL intensity was observed. In all the samples, the TL response of the high temperature peak is not proportional to the dose. For samples doped with Tb (2.5 mol%) and Nd (2.5 mol%) an increased in UV intensity of the 190 °C peak was also noted, the first one increased 3.5 times and for Nd was 2.5 times, when compared to undoped one (Figure 5c). Figure 6 shows TL glow curves of pure and Mg doped samples obtained by Pechini process, the curves are slightly different from those obtained by sol-gel. High intensities for 190 °C TL peak from pure samples were obtained at 1100 o C for VIS region (Figure 6a) and at 1350 °C for UV one (Figure 6b). On sample doped with Mg high intensities were detected in both cases UV and VIS with calcinations at 1600 °C, the same value found in sol-gel samples. Figure 6 shows TL glow curves for samples produced via Pechini method. As the calcination temperature increases, different observations can be made, depending on the composition and the measurement spectra. In Figure 6a, showing TL emission of undoped sample in the visible region, all the peaks intensities decrease for higher temperatures, but mainly low temperature (90 °C) and high temperature (410 °C) peaks. In this case, the best sample would be the one calcinated at 1350 °C, due to its high intensity and low competition among the trapping centers (more stable TL signal). High temperature treatments can destroy as well as create trapping and recombination centers, and that explains why some TL peaks may disappear, whilst others may rise or increase. Figure 6b is the TL emission of undoped sample in the UV region, which indicates that high temperature peaks tend to fade when the sample is calcinated at higher temperatures. Once again, the sample calcinated at 1350 °C showed the best glow curve. For the undoped sample, calcination at 1600 °C seems to increase the competition, which decreases the overall intensity. It is not likely that the high temperature is damaging the material, since the melting point is still too far away (around 2050 °C). Also, the high temperature may be causing the crystallites to grow, decreasing the surface area exposed to the incoming radiation, thus changing the trapping dynamics at some level.
The incorporation of magnesium atoms in the crystalline lattice made a great deal on the TL response of the samples. In the first place, both visible and UV emissions (Figures 6c and 6d, respectively) increased with the increasing of the calcination temperature, which was not observed for undoped samples. Secondly, the high temperature peak (355 °C) of the visible emission had its intensity increased by a factor of 3; a minor difference on the relation between the main dosimetric peak and the high temperature one was also observed for the UV emission.
It is important to observe that only samples calcinated above 1100 °C exhibited some appreciable TL emission; this means that α-Al2O3 acts as a better ionizing radiation sensor than other phases (δ and γ). For samples calcinated below that temperature (600 and 900 °C), most of the trapping and recombination centers may not yet be active.
One reason for the high luminescence of α-Al2O3:C comes from the theory of point defects. It is known that the synthesis of carbon doped alumina is done in a highly reductive atmosphere of carbon ions, resulting into a great production of oxygen vacancies in the crystalline lattice. If these vacancies are occupied by two electrons we have the formation of the neutral F center. Otherwise, if the vacancy is occupied by only one electron, we have the formation of F + center. In the latter situation, the presence of charge compensation is demanded for the F + center formation; in the case of the α-Al2O3:C it is C 2+ impurity, which replaces Al 3+ ion in the crystalline lattice. In our case we suppose that the Mg is acting as C impurity.
Thermoluminescence spectra
TL spectra of pure and doped with Tb, Er-Yb, and Nd alumina are shown in Figures 7a) to 7d) respectively. On visible region from 360 to 600 nm luminescence bands due to rare-earth elements are observed on doped samples. However all the samples pure and doped showed an intense luminescence band between 650 and 800 nm not identified yet in the literature. However, from fluorescence results, this band is usually associated to Cr 3+ impurity incorporated in raw materials used for alumina production [33].
When the dopant are incorporated, other bands are detected related to the rare-earth elements (Figure 7b, 7c, and 7d). The Tb doped sample shows, in addition for the first at 694 nm, another broad band at 428 nm, with 187 nm of width, due to the transitions of Tb 3+ . The results are in agreements with the transitions of 5 D3 and 5 D4 7 Fj (j=1-6). In the case of Al2O3:Er:Yb sample, the band is centered at 528 nm (448-589 nm) and the emission mechanism can be related to these rare-earth elements. It is well known that Er 3+ has high efficiency for the infrared to visible light conversion and cooperative sensitization properties. The 4 I11/2 (Er 3+ ) level and 2 F5/2 (Yb 3+ ) state are very closely matched in energy, thus the exposition from the 0.9 to 1.1 µm range will excite both Er 3+ and Yb 3+ ions. The visible emission can occurs because the Yb 3+ transfers the excitation energy to Er 3+ and the final state of this process is the population of the 4 F7/2 state and nonradiative relaxation to the 4 S3/2 level, from which green photon (547 nm) is emitted. The excitation routes for red emission at 660 nm are not clear yet; in this case the Yb 3+ transfer energy to Er 3+ and the red emission occurs in the 4 F9/2→ 4 I15/2 transition [34].
Optically stimulated luminescence
As seen in TL emission curves showed previously, the response of OSL signal also increased for samples calcinated at high temperatures, for both samples obtained by sol-gel ( Figure 8a) and Pechini process (Figure 8b). Figure 8c and 8d show an example of OSL increment about 3 times, after calcination at 1600 °C.
OSL decays can be fitted by exponential functions [36], depending on the number of the traps involved in the process, for example for two traps we have: Where IOSL is the total OSL intensity, I1 and I2 are the initial intensities of exponentially decaying from faster and slower components of the shinedown curve; τ1 and τ2 the respective decay constants.
TEM
According to TEM images, it was verified that all the doped samples present nanocrystals formations on the surface of alumina grains (Figure 9a). In Tb doped sample, it was verified the presence of Tb3Al5O12, the chemical structure was determined by electron diffraction and EDS results (Figure 9b). Nanocrystals of AlNd are easily observed with its well developed faces (Figure 10a), in most of the case, the nanocrystals average size is about 200 nm, these aluminates composition was also verified by the EDS (Figure 10b). Figure 11a) shows an example of TEM images obtained for α-Al2O3 doped with 1mol% of Er and 2mol% of Yb, and calcinated at 1200 °C. EDS analysis show the presence of Er and Yb dopants in the α-Al2O3. Electron diffraction analysis identified the crystal as Yb2O3, however by XRD analysis showed plus two new composition in the samples doped and attributed to Er2O3 and Yb3Al5O12 (Figure 1e). The results of TEM analysis give the following average nanocrystals diameters D = (36±2) nm for the sample calcinated at 1200 °C and D=(182±8) nm for sample calcinated at 1600 °C (Figures 9 d), these results and the homogeneity in the crystalline size suggests that the nanocrystal powder growth is depending on thermal treatment temperature. The growth and cluster formations of the nanocrystals are the consequence of the reduction of grain boundary area and therefore the total energy of the system.
In the case of Mg doped alumina there is a formation of Mg spinel (magnesium aluminate) nanocrystals dispersed on the surface of alumina grains, with size about 40 nm ( Figure 12). It is considered that the high temperature calcination makes magnesium atoms to diffuse to the surface of the clusters, creating a thin layer of magnesium spinel, due to the high local concentration of the dopant.
Conclusions
Polycrystalline powder of α-Al2O3 was successfully obtained using sol-gel and Pechini process. A second phase was found in doped samples, forming nanocrystals of aluminates and lanthanides oxides on the surface of alumina grains. We did not observe the incorporations of the dopants inside the alumina structure. Strongly dependent of temperature calcinations, the nanocrystallinity of the sample was retained after calcinations at higher temperatures, and an increase in the crystallite size was already perceptible.
The average diameter of nanocrystals depended on the dopant specie: for Yb and Er doped samples, it was D = (36 ± 2) nm for the sample calcinated at 1200 °C and D= (182 ± 8) nm for one calcinated at 1600 °C, therefore increased by 5 times. The approximately size of the AlNd is 200 nm for sample calcinated at 1600 °C during 4 h and for Tb3Al5O12 crystals the size is about 300 nm in the same calcination conditions.
The TL emission mechanism in the visible region can be related to F center and to the lanthanide (Ln) relaxation. During the irradiation the Ln 3+ ion is reduced to Ln 2+ . It is not completely known if the reduction is due to the transfer of an electron or a hole; however, the process of reduction of trivalent lanthanide ions by irradiation was previously verified in literature [18,19]. During thermal stimulation an electron can recombine with the Ln 2+ forming Ln 3+* in excited state, which emits a photons returning on this way to the ground state. In the case of Mg spinel, probably the Mg ions promoted the oxygen vacancies stabilization, improving the luminescence response in the visible spectra, causing the main peak to increase 5 times in comparison with the undoped sample. It is believed that the occurrence of the nanometric spinel layer created an interface between both materials (Al2O3/MgAl2O4) with high concentration of defects.
OSL shinedown curves, supplied by undoped samples calcinated to 1200 and 1600 °C, could be fitted by second for all the samples except to α-Al2O3:Yb, Er, which was fitted by first order exponential decay. TL intensity of 190 °C peak and OSL responses with the dose increased linear for low doses region, from 80 to 1000 mGy, and the minimum dose detected value was 5 mGy obtained for TL (UV) and 350 µGy for OSL α-Al2O3 + Tb3Al5O12.
In summary, calcination conditions are of great importance for materials production that are being used as radiation sensors, once it greatly influences the stabilization of intrinsic defects, diffusion of dopants and the occurrence of new phases, due to the incorporation of dopants alongside the matrix, and others. These new phases also seem to play an important role in the luminescence emissions, due to the creation of new trapping and recombination centers, producing materials with unique properties that can be exploited to obtain better dosimeters. | 2017-09-17T14:44:04.274Z | 2012-09-26T00:00:00.000 | {
"year": 2012,
"sha1": "3c1bbcd7cc50d7331b073f7a88ecd97a769dee6f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5772/51414",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "801750cec23d076fd9d8de4649fd4b18b245d6bd",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
4575499 | pes2o/s2orc | v3-fos-license | Direct integrin binding to insulin-like growth factor-2 through the C-domain is required for insulin-like growth factor receptor type 1 (IGF1R) signaling
We have reported that integrins crosstalk with growth factors through direct binding to growth factors (e.g., fibroblast growth factor-1, insulin-like growth factor 1 (IGF1), neuregulin-1, fractalkine) and subsequent ternary complex formation with cognate receptor [e.g., integrin/IGF1/IGF1 receptor (IGF1R)]. IGF1 and IGF2 are overexpressed in cancer and major therapeutic targets. We previously reported that IGF1 binds to integrins ανβ3 and α6β4, and the R36E/R37E mutant in the C-domain of IGF1 is defective integrin binding and signaling functions of IGF1, and acts as an antagonist of IGF1R. We studied if integrins play a role in the signaling functions of IGF2, another member of the IGF family. Here we describe that IGF2 specifically binds to integrins ανβ3 and α6β4, and induced proliferation of CHO cells (IGF1R+) that express ανβ3 or α6β4 (β3- or α6β4-CHO cells). Arg residues to Glu at positions 24, 34, 37 and/or 38 in or close to the C-domain of IGF2 play a critical role in binding to integrins and signaling functions. The R24E/R37E/R38E, R34E/R37E/R38E, and R24E/R34E/R37E/R38E mutants were defective in integrin binding and IGF2 signaling. These mutants suppressed proliferation induced by WT IGF2, suggesting that they are dominant-negative antagonists of IGF1R. These results suggest that IGF2 also requires integrin binding for signaling functions, and the IGF2 mutants that cannot bind to integrins act as antagonists of IGF1R. The present study defines the role of the C-domain in integrin binding and signaling.
Introduction
Integrins are transmembrane receptor heterodimers formed by α and β chains. Integrins bind to extracellular matrix ligands (e.g., fibronectin, collagen, and vitronectin), cell surface ligands (e.g., intercellular adhesion molecule-1 and vascular cell adhesion molecule-1) and soluble ligands including several growth factors [1]. Several integrins, including ανβ3 and α6β4 are overexpressed in human cancers [2]. We have reported that integrins crosstalk with growth factors through direct binding to growth factors [e.g., fibroblast growth factor-1 (FGF1) [3], insulin-like growth factor 1 (IGF1) [4][5][6], neuregulin-1 [7], fractalkine [8]] and subsequent ternary complex formation with cognate receptor [e.g., integrin/IGF1/insulin-like growth factor type 1 receptor (IGF1R)] [9,10]. We describe this as the ternary complex model of growth factor signaling. The importance of integrins in growth factor signaling was underscored by the findings that the growth factor mutants that cannot bind to integrins are defective in signaling while they still bind to cognate receptors and are dominant-negative antagonists. In the case of IGF1, an IGF1 mutant in which the Arg residues at positions 36 and 37 to Glu mutant, R36E/ R37E was defective in inducing IGF1R signaling and suppressed cell proliferation induced by WT IGF1 in vitro and tumor growth in vivo (dominant-negative antagonistic action) [11]. The dominant-negative growth factor mutants we developed have potential as therapeutic agents and are useful tools for studying the role of integrins in growth factor signaling. It is still unclear if integrins play a role in signaling functions of other members of the same growth factor family. We recently reported that the integrin-binding defective mutants of FGF2 (basic FGF) are defective in signaling functions and potently suppressed angiogenesis induced by WT FGF2, suggesting that integrins may be common co-receptors for other members of the FGF family as well [12]. These findings urged us to study if IGF2, another member of the IGF/ insulin family, requires integrin binding for signaling functions.
IGF1 and IGF2 are homologues and share 67% sequence identity [13], and are involved in growth and development of many tissues in the human body and in tumor growth [14]. IGF1 and IGF2 induce cell proliferation through IGF1R [14]. IGF2 binding to the extracellular subunit of IGF1R induces phosphorylation of the tyrosine kinase of the intracellular β subunit of the receptor, resulting in the activation of MAPK pathway and PI3K/AKT pathway [14,15].
We studied the role of integrins in IGF2 signaling through IGF1R. We show that IGF2 binds to integrins ανβ3 and α6β4, and induces intracellular signaling in an integrin-dependent manner in CHO cells (IGF1R+) that overexpress these integrins. We located the integrin-binding site of IGF2 in the C-domain that connects B and A domains, suggesting that the Cdomain plays a critical role in IGF signaling. The IGF2 mutants defective in integrin binding were defective in signaling and acted as antagonists of IGF1R, indicating that the IGF2 signaling also fits well with the ternary complex model.
IGF2 directly binds to integrin ανβ3
We studied if soluble integrin αvβ3 binds to immobilized IGF2 in an ELISA-type binding assay. We found that soluble αvβ3 bound to IGF2 in a dose dependent manner, but did not bind to wells coated only with BSA ( Fig 1A). We studied if αvβ3 on the cell surface binds to immobilized IGF2 in adhesion assays using CHO cells that express recombinant hamster αv/human β3 hybrid (β3-CHO cells). β3-CHO cells adhered to IGF2 in a dose dependent manner, but control CHO cells that express human integrin β1 (β1-CHO) or parent CHO cells did not (Fig 1B). Monoclonal antibody (mAb) 7E3 (specific to human β3) and cyclic RGDfV (a specific inhibitor of αvβ3) reduced the ability of β3-CHO cells to bind to IGF2 in adhesion assays, but control mouse IgG or DMSO did not ( Fig 1C). These results suggest that WT IGF2 specifically binds to αvβ3.
IGF2 enhances intracellular signaling and cell proliferation in a αvβ3-dependent manner
We studied if IGF2-induced intracellular signaling requires αvβ3 in β3-CHO cells. All the experiments were performed in anchorage-independent conditions (using polyHEMA coated plates) to reduce signals generated by cell-extracellular matrix interaction. IGF2 induced proliferation of β3-CHO cells in a dose dependent manner to a much higher extent than in CHO cells (Fig 2A). Cell proliferation induced by WT IGF2 in β3-CHO cells was reduced by cyclic RGDfV in MTS assays ( Fig 2B). These findings suggest that IGF2-induced cell proliferation is dependent on αvβ3. IGF2 induced IGF1R phosphorylation in β3-CHO cells at a higher level than in CHO cells in a time-and dose-dependent manner (Fig 2C and 2D). IGF2 induced AKT and ERK activation in β3-CHO cells in a dose-dependent manner (Fig 2D). These results suggest that αvβ3 is required for IGF2-induced signaling through IGF1R.
Development of IGF2 mutants that are defective in binding to integrin
Our results so far suggest that αvβ3 binds to IGF2 and is involved in IGF1R signaling. To further study the role of ανβ3 in IGF2 signaling, we generated IGF2 mutants that are defective in integrin binding. We previously reported that Arg36 and Arg37 in the C-domain of IGF1 are critical for integrin-binding [5]. Based on the homology between IGF2 and IGF1 (Fig 3A), we substituted Arg residues at positions 24, 30, 34, 37, 38, and 40 in and around the C-domain of IGF2 were mutated to Glutamic acid. The IGF2 mutants were tested for the ability to bind to ανβ3 in adhesion assays using β3-CHO cells. Although single point mutations did not effectively suppress integrin binding, the combined R24E/R37E/R38E, R34E/R37E/R38E, and R24E/R34E/R37E/R38E mutations effectively suppressed the binding of IGF2 to β3-CHO cells (Fig 3B and 3C). The data suggest that Arg residues at positions 24, 34, 37, and 38 in or close to the C-domain of IGF2 are critical for binding to αvβ3.
IGF2 binds to integrin α6β4 and induces proliferation of α6β4-CHO cells
Our previous studies suggested that α6β4, another integrin that is overexpressed in cancer cells, is critically involved in IGF1/IGF1R signaling through direct binding to IGF1 [6]. We studied if α6β4 is involved in IGF2 signaling. We found that WT IGF2 bound to α6β4-CHO cells in adhesion assays, but R24E/R37E/R38E, R34E/R37E/R38E, and R24E/R34E/R37E/R38E did not (Fig 4A), suggesting that α6β4 also binds to IGF2 and recognizes the C-domain of IGF2. WT IGF2 induced proliferation of α6β4-CHO cells, but not β1-CHO cells (Fig 4B).
at increasing concentrations. The wells were incubated with β3-CHO, β1-CHO and CHO cells in serum free DMEM buffer (10 5 cells/well). The bound cells were measured. The data are shown as means +/-SEM of triplicate experiments. c, Antibody against αvβ3 (7E3) and cyclic RGDfV blocked the adhesion of β3-CHO cells to IGF2. Wells 96 well microtiter plates were coated with IGF2 at 50 μg/ml. β3-CHO cells (10 5 cells/well) were incubated with the immobilized IGF2 plus 7E3 or cyclic RGDfV in Tyrode-HEPES buffer containing 1 mM Mg 2+ , the bound cells were measured. The data are shown as means +/-SEM of triplicate experiments.
We studied if the IGF2 mutants bind to the immobilized soluble IGF1R ectodomain in ELISA-type assays. We detected the binding of WT and three IGF2 mutants to IGF1R. R24E/ R37E/R38E bound to IGF1R to the similar extent to that of WT IGF2, but the binding of R34E/R37E/R38E and R24E/R34E/R37E/R38E was weaker than that of WT IGF2 (Fig 4D). We showed that the three integrin-binding defective IGF2 mutants when added in excess suppressed signaling induced by WT IGF2 (dominant-negative effect) ( Fig 3B). The dominantnegative effect requires that the IGF2 mutants bind to IGF1R. It is likely that the reduced IGF1R binding of R34E/R37E/R38E and R24E/R34E/R37E/R38E is still enough to induce the dominant-negative effect.
Discussion
In the present study, we establish that IGF2 specifically binds to ανβ3 and induced proliferation of β3-CHO cells and cyclic RGDfV suppressed the cell proliferation induced by IGF2. This suggests that the binding of IGF2 to integrin ανβ3 plays a role in IGF2/IGF1R signaling, as in the case of IGF1 [5], and IGF2 signaling fits well with the ternary complex model of growth factor signaling. The present findings also suggest that similar strategy can be used in general to identify dominant-negative antagonists of growth factors if they fit with the ternary complex model.
The biological role of the C-domains of IGF1 and IGF2 has not been established. In our previous studies, IGF1 binds to integrins ανβ3, amino acid residues of IGF1 critical for integrin binding were localized in the C-domain (Arg residues at positions 36 and 37) [5], which are conserved between IGF2 and IGF1. In the present study, we establish that Arg residues at positions 24, 34, 37, and 38 in and close to the C-domain play a critical role in integrin binding, suggesting that the C-domain of IGF2 is involved in integrin binding as in IGF1. Consistently, the C-domains of IGF1 and 2 are evolutionally as conserved as the B-and A-chains, in contrast to the less conserved C-chain of proinsulin, consistent with the idea that the C-domains of IGF1 and IGF2 play an important role in signaling. Our previous studies show that IGF1 induces integrin/IGF1/IGF1R ternary complex formation on the cell surface [5,6]. Therefore, it is highly likely that the C-domain is exposed to the surface when IGF1 and IGF2 bind to IGF1R and becomes accessible to integrins. Interestingly, Drosophila insulin-like peptide-6 (DILP6) contains a short C-domain that is well conserved and contains conserved Arg/Lys residues [16]. This suggests the possibility that DILP6 C-domain may interact with integrins and DILP6 may have similar properties to those of human IGFs.
We used the integrin-binding defective IGF2 mutants (R24E/R37E/R38E, R34E/R37E/ R38E, and R24E/R34E/R37E/R38E) to define the role of integrins in IGF2 signaling. The IGF2 Fig 2. Integrin binding is required for IGF2-induced cell proliferation. a, Cell proliferation was enhanced by IGF2 in β3-CHO cells better than CHO cells. Cells (2 x 10 4 cells/well) were incubated for 48 hrs in polyHEMA-coated plates. Cell proliferation was measured by MTS assays. The data are shown as means +/-SEM of triplicate experiments. b, IGF2-induced cell proliferation was reduced by cyclic RGDfV. Cells (2 x 10 4 cells/well) were incubated with IGF2 (100 ng/ml) in combination with cyclic RGDfV for 48 hrs in polyHEMA-coated plates. Cell proliferation was measured by MTS assays. The data are shown as means +/-SEM of triplicate experiments. c. IGF2 induces phosphorylation of IGF1R in a dose and time-dependent manner. Cells were serumstarved in serum free DMEM for 4 hrs, and treated with WT IGF2 (10 or 100 ng/ml) for 10 min or 1 hr in a polyHEMA coated plates. Cell lysates were analyzed by western blotting. Density of the bands were quantified using ImageJ software and p-IGF1R/ t-IGF1R was calculated. d, IGF2 induces signals in β3-CHO cells. Cells were serum-starved in serum free DMEM for 4 hrs, and treated with WT IGF2 for 10 minutes in a polyHEMA coated plates. Cell lysates were analyzed by western blotting. Density of the bands were quantified using ImageJ software and p-IGF1R/t-IGF1R or p-AKT/t-AKT was calculated.
https://doi.org/10.1371/journal.pone.0184285.g002 mutants were defective in activating IGF1R and in inducing intracellular signaling and cell proliferation in β3-CHO cells. These findings suggest that IGF2 requires integrin binding to induce signaling through IGF1R, as in the case of IGF1 [5]. We previously reported that an IGF1 mutant defective in integrin binding (R36E/R37E) acted as a dominant negative antagonist of IGF1R [11]. Excess IGF2 mutants reduced cell proliferation induced by WT IGF2 in β3-CHO cells, suggesting that the integrin-binding defective IGF2 mutants are dominant-negative antagonists of IGF1R. Taken together, our studies indicate that integrin ανβ3 is involved in IGF2 signaling through IGF1R. We also found that integrin α6β4 bound to WT IGF2 but not to R24E/R37E/R38E, R34E/R37E/R38E, and R24E/R34E/R37E/R38E, and that WT IGF2 induced proliferation of α6β4-CHO cells but not control β1-CHO cells, and that WT IGF2 induced proliferation of α6β4-CHO cells but not the IGF2 mutants. The results suggest that integrin α6β4 is also involved in IGF2 signaling through IGF1R activation. These findings suggest that integrins (e.g., ανβ3 and α6β4) are common co-receptors for IGF1 and IGF2 signaling and that the C-domain is critically involved in integrin binding and that this property is conserved between IGF1 and IGF2. Insulin, which is homologous to IGF1 and IGF2, has no Cdomain and thus it would be interesting to study if integrins play a role in insulin signaling in future studies.
Synthesis of IGF2
A cDNA fragment encoding WT IGF2 was amplified by PCR with synthetic oligonucleotides 5-gtggtgctcgagctcggacttggcgggggtagc-3 and 5-ccgacgcatccatggctgcttaccgccccagtgag-3 using human placenta cDNA library as a template. The PCR fragment was digested with NcoI and XhoI and subcloned into the NcoI/XhoI site of PET28a. The expression construct encodes IGF2 (residues 322-523) with a His6 tag at the C terminus, AYRPSETLCGGELLVDTLQFVC GDRGFYFSRPASSRVSRRSRGIVEECCFRSCDLLALLETYCATPAKSE. Protein was synthesized as insoluble protein by isopropyl β-d-thiogalactoside (IPTG) induction in Escherichia coli BL21. The C-terminal His tag of the protein was used to purify the protein with nickelnitrilotriacetic acid affinity chromatography under denaturing conditions (in 8 M urea). The nickel-nitrilotriacetic acid resin was washed with 1% Triton X-114 before eluting the bound protein to eliminate endotoxin. Purified proteins were refolded in vitro following the protocols ("Isolation of proteins from inclusion bodies" available from the Björkman laboratory). In brief, the purified proteins were eluted in 8 M urea. After elution, the proteins were diluted into refolding buffer (100 mM Tris-HCl, pH 8.0, 400 mM L-Arg, 2 mM EDTA, 0.5 mM oxidized glutathione, 5 mM reduced glutathione and protease inhibitors) on ice. The dilution was kept for 16 hrs at 4˚C with a slow stirring movement. Then the proteins were concentrated by ultrafiltration. Around 2 milligrams of purified proteins were obtained from 1 liter of bacterial culture. WT IGF2 protein concentration was determined by measuring A 280 .
Signaling assays
The anchorage-independent conditions were used to perform all the signaling assays. Wells of microtiter plates were coated with 1.2 mg/cm 2 Poly(2-hydroxyethyl methacrylate) (poly-HEMA) following the protocol as described [20]. Cells were cultured in normal conditions using regular tissue culture plate until 80-90% confluence in DMEM 10% FBS, 1% antibiotics and 1% non-essential amino acids at 37˚C in 5% CO 2 atmosphere. The cells were collected and plated in poly-HEMA plate for starvation for 4 hrs in DMEM without FBS. Serum-starved cells were treated with WT IGF2 or IGF2 mutants (R24E/R37E/R38E, R34E/R37E/R38E, and R24E/R34E/R37E/R38E). After treatment, cells were collected by spinning the samples at 3.5 rpm and then solubilized using lysis buffer (20 mM HEPES pH 7.4, 10% glycerol, 100 mM NaCl, 1 mM MgCl 2 , 1% Nonidet P-40, 20 mM NaF, 1 mM PMSF, protease inhibitor mixture (Sigma-Aldrich) and 1 mM Na 3 VO 4 ). BSA assay (Thermo Scientific) was performed to determine the protein concentration to each sample. The cell lysates were analyzed by western blotting using different antibodies. The HRP-conjugated anti-IgG antibody, Supersignal West Pico and Femto (Thermo Scientific) were used to detect IgG. Fuji LAS 4000 mini luminescent image analyzer and Multi Gauge V3.0 software (Fujifilm, Tokyo, Japan) were used for analysis of the images.
MTS assays
Cells (2x10 4 cells/well) were serum-starved overnight in serum free media (DMEM) and incubated with proteins (WT or IGF2 mutant) for 24 hrs in 96 wells plate coated with 1.2 mg/cm 2 in PBS for 1 h at 37˚C, and the remaining protein-binding sites were block by incubating with 1 mg/ml BSA for 1 h at room temperature. WT and mutant IGF2 (2.5 μg/50 μl in PBS) were added to the wells and incubated in PBS/0.05% Tween 20 at room temperature for 1 hr. After washing with PBS/0.05% Tween 20, wells were incubated with anti-5His antibody conjugated with HRP, then peroxidase substrate. The data are shown as means +/-SEM (n = 3) https://doi.org/10.1371/journal.pone.0184285.g004 of poly-HEMA following the protocol as described [20]. Cell proliferation was measured by MTS assay using Aqueous cell proliferation assay kit (Promega, Madison, WI).
Binding to IGF1R in ELISA-type assays
Wells of 96-well microtiter plate were coated with recombinant human soluble IGF1R (391-GR, No His-tag, R&D System) at 1 μg/ml in PBS for 1 h at 37˚C, and the remaining protein-binding sites were block by incubating with 1 mg/ml BSA for 1 h at room temperature. WT and mutant IGF2 (2.5 μg/50 μl in PBS) were added to the wells and incubated in PBS/ 0.05% Tween 20 at room temperature for 1 hr. After washing with PBS/0.05% Tween 20, wells were incubated with anti-5His antibody conjugated with HRP (Qiagen), then peroxidase substrate.
Other methods
Cell adhesion assays [11] and binding assays [8] were performed as described. Statistical significance was tested in Prism 7 (GraphPad Software) using analysis of variance (ANOVA) and Tukey's multiple-comparison test to control the global type I error. | 2018-04-03T01:33:19.611Z | 2017-09-05T00:00:00.000 | {
"year": 2017,
"sha1": "ad413d990afdb367225a6b6c2d0073ba32d1b2d4",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0184285&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad413d990afdb367225a6b6c2d0073ba32d1b2d4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
158509424 | pes2o/s2orc | v3-fos-license | Does the Market Model Provide a Good Counterfactual for Event Studies in Finance?
We provide a common framework that relates traditional event study estimation methods in finance with a modern approach for causal event studies. This framework is called synthetic portfolio and is a particular case of synthetic control methods. We provide a simulation exercise and an empirical application to evaluate the performance of the method. In addition, synthetic control methods provides a reliable framework, for test based on the abnormal returns, that overcomes some difficulties in the traditional test. We conclude that the market model provides a counterfactual as good as a synthetic control.
Introduction
Event studies is one of the most widely used methodologies in accounting and financial research ( [1]), and in certain legal proceedings. The timeline structure of an event study, determined by the estimation and event window has not changed dramatically since its introduction in the late sixties ( [2]). There are important number of contributions that have focused specially on providing better tools for statistical inference, see [3] for a recent discussion. A recurrent element in event studies is the use of the market model to estimate the so call normal returns. In fact, [3] argues that the popularity of event studies stems from a coincidence of developments in financial market research in the late 60s: CAPM, the CRSP data and more sophisticated and accessible statistical software. The author also concludes, that due to the size of research output using event studies published in the major finance journals, accounting journals and the use in other fields, the methodology continues to be popular and will continue to be an important element in empirical capital market research. In the field of accounting in particular, and finance there is a recent interest in using the tools in empirical microeconomics to address classical issues in accounting research. The claim is that better research designs and statistical methods, in empirical microeconomics, have increased the credibility of the implications obtained in these studies. [4] provide an analysis of the use and potential of causal inference methods in the field of accounting research. Their main conclusions are as follows: accounting research does primarily address problems that are causal in nature; there is an increase in the use of quasi-experimental methods in addressing causal questions in accounting research; there is still a lot to be done in the field to use the tools that are already available and have been successfully used in empirical microeconomics; the authors emphasize the use of causal diagrams and structural models in accounting research. The potential outcome framework for causal inference has also been used in empirical finance in recent years. The potential outcomes are generally considered as missing variables in the causal inference literature because it is not possible to observe all the instances of the variable of interest simultaneously. In many financial event studies the researcher only observes the treated observation. Estimation of potential outcomes in observational studies is usually performed using one of the following techniques or a combination of some of them ( [5]): model based-imputation, weighting, blocking and matching methods. In model based imputation, a model is build in order to predict the missing potential outcome of unit that is not treated. This is exactly what traditional event studies do when they define the normal return model as the constant return model or the market model. For the causal inference literature [5] model based imputation is not recommended to estimate treatment effects because a proper fit can only be accomplished by specifying the post-event outcomes. Weighting and blocking use different methods ( being propensity score one of the most popular) to combine the in-formation of the control units in order to build a proper conterfactual 2 . Using the propensity score achieves a balance between treated and control groups in order to estimate an unbiased treatment effect. Matching techniques find direct comparisons or matches for each unit. For a given treated unit with a particular value for the covariates, one searches for a control unit with similar values in the covariates. A distance metric is needed to implement a matching technique so as to asses the trade-off in choosing between different units and/or controls. The use of propensity score for balancing and estimating causal effects has already been used in event studies in finance for example [6] use propensity score matching to re-examine the long-run underperformed anomaly of stocks after seasoned equity offerings. The authors find that under-performance could be due to incorrect matching. Once issuers and non-issuers are matched using propensity score they find that under-performance is economically and statistically non-significant. [7] uses propensity score matching to adjust for selection bias when comparing public and private equity acquisitions. The author finds no significant difference in the premiums between what public acquires pay for acquisition relative to private equity acquirers (after controlling for target and deal characteristics). The results is a sharp contrast to established results. This small sample of results show that re-examining event studies in finance with different causal approaches can lead to different results. The synthetic control method ( [8]), has received a lot of attention in comparative case studies on different subjects: terrorism, natural disasters, tabacco control programs. As opposed to competing methods, synthetic control method's strength relies in the use of a combination of units to built a more objective comparison for the unit exposed to the intervention, rather than a choosing a single unit or a Ad hoc reference group. The authors advocate for the use of data drive procedures to build the reference group. The synthetic control method is a weighted average of the available control units, that makes explicit: the contribution of each unit to the counterfactual of interest and the similarities (or lack thereof) between the unit affected by the event or the intervention of interest and the synthetic control in terms of the preintervention outcomes and other predictors of post-intervention outcomes. More recently synthetic control methods have been the focus of intense re-search and it is considered as the most important innovation in the policy evaluation literature in the last 15 years according to [9]. The most recent literature has been addressing some limitations of the method: [10] and [11] provide generalization of the synthetic control method that address dimension reduction prior or during the estimation of the weights. In the former the author also illustrates the relationship between an interactive fixed effects model and synthetic controls methods under which Difference-in-Difference method is a special case. [11] also relaxes some of the restrictions imposed by the estimation of the synthetic controls. [12] and [13] propose complementary approaches to perform statistical inference on the estimated average treatment effects from synthetic control methods, these are important contribution because the limiting properties of the estimator were not known for a broad set of data generating process and testing was based on a placebo randomization. The developments of these methods has also motivated by the use in macroeconomic oriented research questions [14].
Synthetic matching techniques applied for event studies in finance are not common, we are only aware of their application in a recent paper, [15]. In this paper the authors measure the effect of personal connections on the returns of financial firms. The study is based on the connections of Timothy Geithner to different financial institutions prior to his nomination as Treasury Secretary at the end of 2008. The synthetic matching methodology is used as a complement to the usual approach in event studies of capturing the difference between a treatment and control group using for the latter the mean return model or the fitted market model. In addition we have an earlier paper using synthetic matching to measure the effectiveness of volatility actions with intra-day stock market data ( [16]) In this paper we provide a detailed analysis of the synthetic matching technique, which we denote as synthetic portfolio method. The notion of a synthetic portfolio provided a common framework that relates traditional event study techniques, in particular the use of the market model, and synthetic control method, which is has been gaining popularity in the causal event literature in recent years. A common framework is important so as to evaluate the benefits of new methodologies with regards to the traditional approach that has been in use sin the late sixties. We provide explain the framework, as series of estimators and their relationship to the traditional methods and more recent approaches, such a difference-in-difference which is a special case of synthetic portfolio. These alternative methods are able to handle the high-dimensional challenge brought by the large asset space in the US stock market. We provide simulation results to evaluate the performance of the different method and provide an empirical application using as event merger announcements. In addition this new methodological framework that we claim encompasses traditional approaches provides additional and valuable insights to perform statistical inference over the individual abnormal returns (the treatment effects) that was hardly possible with the traditional testing framework . This is made possible by the recent work on statistical inference for synthetic control methods by [12] and [13]. The simulations exercise evaluates how well does each of the method is able accommodate the evolution of the asset in question before in the estimation window and also how well does it estimate the true treatment effect. The results indicate that the performance of the market model and the synthetic portfolio approach are quite similar in terms of biases and variance. The empirical application reexamines effects of merger announcements on the value of the firm in the short run, that is in the immediate days after the announcement. As in the literature these affects are measured along different sub-samples, for example, deals that affects publicly trades firms versus private, deals that are finances by stocks or cash. The results indicate that the estimated effects of the merger announcements are similar across the different estimation approaches, both in terms of the point estimates as well as the variance. In addition the introduction of a feasible testing framework for short term studies over each individual event provides a more through analysis of the effects so as to determine the cases where the effects are non existent versus the cases where the effect is strictly positive or strictly negative. This opens the possibility to explore empirically in a second stage the different determinants of the cumulative abnormal returns taking into account this significant variation. Overall both the simulation results and the empirical application indicate that the market model fairs well with respect to competing approaches that have surfaces within synthetic control methods. One of the reasons is that the market model is also a portfolio that tracts the asset of interest and is able to provide a reliable potential outcome, in particular when the beta of the assets tends to one, as one would expect. However, we believe that this is a particular property of the large sample of equities in the US and that some of these results might not be true in markets with very few liquid trades stocks where the synthetic portfolio approach might be a better option 3 . This is a subject of further research. The paper is organized as follows, Section 2 discusses how the potential outcome approach can be introduced to the traditional event studies approach. Section 3 presents the synthetic portfolio approach. Section 4 presents the simulation exercise and the results comparing synthetic portfolio to traditional event studies. Section 5 discuss the data and empirical applications for merger announcements and seasoned equity offerings. Finally, Section 6 concludes.
Potential outcomes in event studies
Traditional Event studies in finance have a standard setup in terms of the the time leading up to the event and the outcome variable of interest, in many cases the holding period returns of the stock. Let t = T 0 denote the moment of time when the event takes place. If we are performing an event study on daily returns then is is customary to define the event window as an interval around the event [T 0 − d, T 0 + d] where d denotes the number of days around the event. For short term event studies d is usually 10, 5 or 1 day(s). Accounting for a gap between the event window and the estimation window is also recommended. The reason for the gap is that noisy information regarding the event might become available to market participants some days before and therefore the stock price could start to deviate from some "normal" behavior in the month before the event. The gap is around a month, m, or two before the event window [T 0 − d − m, T 0 − d). The estimation window has usually a length of one year (250 days to approximate the number of days in a calendar year) of market returns before the start of the gap window. To avoid any confusing notation from this point on we will consider two excluding time intervals (figure B.1): the estimation window [T 1 , T 3 ] and the event window [T 4 , T 5 ]. Note that the event window is centered in the actual time of the event. For completeness the gap is the interval (T 3 , T 4 ).
A recurrent element in event studies is the use of the market model to estimate the so call normal returns. Let R 1,t denote the holding period returns of the stock price of the firm that is affected by the event (without loss of generality firm 1 is the only firm affected by the event). There is no formal definition for the "normal" returns, however the implementation of an event study requires the researcher to disentangle the effects of two types of information on stock prices ( [17]): the information that is specific to a firm (the event) and the information that is likely to affect stock prices marketwide (or a subset of interchangeable stocks). The disentanglement requires a way to control for the latter using the "normal" expected behavior of returns. In traditional event studies the most common approach to define the normal returns, is to use a market model or another factor model (Fama-French three factor or Carhart four factor model) 4 . In finance, factor models are used in many application, and although there is an extensive literature, there is also an important discussion on the validity of the factors used to explain the cross section of returns ( [18]). Event studies consider mainly the one factor market model.
where R m,t denotes the market return. The parameters of the market model are estimated using the information from the estimation window.
In traditional event studies ( [19]), the effect of a particular event on a stockss price is measured by the abnormal returns (ARs).
where R 1,t is the actual return and E[R 1,t |R m,t ] is the expected normal return. In the market model, the normal return is given by, E[R 1,t | R m,t ] =R 1,t , therefore the expected return is the fitted value obtained in 1. The abnormal returns measure the effect of the event on the return of the firm that have been affected by that particular event 5 .Therefore the notion of normal returns tries to measure the expected behavior of the returns in the absence of the event.
In order to determine a causal impact of the event on the performance of the stock, first, we must see the "normal" expected as a mechanism to provide a measure of the expected behavior of the returns in the absence of the event and hence, the market model is a framework that provides a model based potential outcome. Second, if we note that event studies in finance are observational studies rather than perfectly randomized experiments, then we can use the potential outcome approach (also known as the Rubin Causal Model) to come up with an identification strategy for causal event studies. One of the key insights of the Rubin Causal Model is to think of potential outcomes as missing variables. In the event study we observe the returns before the event in the estimation window, but in the event window we only observed the returns that are already affected by the event R I 1,t := R 1,t for t ∈ [T 4 , T 5 ]. This is equivalent to the notion that the stock price of firm 1 in the event window is subject to a treatment and the treatment is the event. For example, if the event is a merger announcement then there are two firms directly affected by this event the acquiring firm and the target firm (for simplicity let firm 1 be only the acquire firm). Once the announcement happens or the event then we cannot observe the state of the world where this event did not take place, therefore the missing potential outcome is R N 1,t ; that is the returns of firm 1 in the event window if the event had not taken place. The effect of the event (or treatment) will be equivalent to the notion of abnormal returns, As mentioned in the introduction the causal inference literature provides various identification strategies to estimate causal effects using the potential outcomes approach (see [9], for a recent survey) we will now only focus on synthetic matching techniques base on the synthetic control method proposed in [8].
Synthetic portfolio
In the potential outcome approach we have units of analysis that are partitioned into a treatment and a control group; borrowing an ideal experimental setup from randomized control trials. For financial event studies we consider one firm that is affected by an event and the outcome variable, over which we want to measure the effect of the event is the returns of the stock of that firm. In addition we have a larger universe of stocks for other firms (other units of analysis) that are trading at the same time. These other firms as long as they are not directly or indirectly affected by the event of interest can be used to create a control group of firms. The fact that the firms in the control group are not affected by the event of interest is very important assumption in the potential outcome framework. The methodology we propose is to use the returns of the firms in the control group to build a synthetic portfolio and use the returns of this portfolio as a potential outcome or a measure of the "normal" returns; that is the returns that we would observe had the event not taken place. We arrive at this synthetic portfolio by applying synthetic control methods introduced by [8] to the problem at hand. The authors propose a weighted average of the units of analysis in the control group as a way to come up with a synthetic counterfactual. The weights are obtained by using a minimum distance estimator applied to a series of restrictions on the outcome variable and a set of exogenous variables. Again we let R 1,t denote the return of the stock of interest where we want to measure the effect of the event (the stock that has been treated). Conversely, the synthetic portfolio is built using the other stocks (that are not involved in a similar event and that are trading during the same days) to replicate the performance of the security of interest. These set of stocks make up the control group, (R 2,t , . . . , R J,t ). The methodology is very simple since we only need to estimate w * j required to estimate the effect of the intervention by solving the optimal tracking problem. Therefore we have to solve, for the estimation window t ∈ [T 1 , T 3 ). It is possible to include in this optimization problem restrictions on the estimated weights, for example nonnegativity constraints (w j ≥ 0, j = 2, . . . , J), constant weights (w j =w, j = 2, . . . , J) or that the weights sum to one ( J j=1 w j = 1). A proper tracking of the stock of interest R 1,t in the estimation window [T 1 , T 3 ] would guarantee that the synthetic portfolio can provide a potential outcome for the latent variable R N 1,t , in the event window [T 4 , T 5 ]. The goodness of fit of the matching can be established by estimating the Mean Square Error in-sample in the estimation window, or out-of-sample by splitting the estimation window into a training [T 1 , T 2 ) and a testing window [T 2 , T 3 ] (figure B.2 ). In traditional events studies goodness of fit is not explicitly mentioned, although inference on the cumulative abnormal returns in the estimation window is considered. The effect of the intervention is equivalent to the abnormal returns of the asset of interest, The optimization problem formulated in expression 4 provides a initial proposal for a a synthetic matching technique. However, this approach is only feasible if the the number of stocks in the control group are of moderate size with respect to the size of the estimation window (the number of pretreatment outcomes) J << T 3 − T 1 . In other words we need sufficient time series observations to be able to estimate the J-dimensional vector of portfolio weights. If the requirement is not met then we have overdetermined system, more weights than observations. In addition, even if the size of the control group of stocks is reasonable the un-restricted optimization problem could favor solutions where there is larger extrapolation effect in the tracking performance of the portfolio than we would desire. [8] already mention the risk of excessive extrapolation when using synthetic control method. The authors suggest that the optimization problem used to find the optimal weights incorporate restrictions so as to avoid excessive extrapolation. Therefore in their method they restrict to non-negative weights and that the weights have to sum to one. In addition they use exogenous variables for the treated variable and the control group variables so as to match the behavior of the unit of interest to the behavior of the control set not only on the dimension of the variable of interest but also on the other exogenous variables. This approach has some similarities the propensity score based matching as a way to balance estimated causal effect ( [5]). Although, synthetic control methods could provide a solution to the extrapolation problem we still have to deal with the large dimensional problem that we face when we use a high dimensional asset space for the control group. This is precisely the case when we look at event studies in the US stock market where we have historical information on more than 5, 000 stocks for building a control group.
High-dimensional problems are not new in portfolio optimization an there a couple of techniques that we explore to deal with both the high-dimensional issues but also the extrapolation problems.
Traditional optimal portfolio problems are formulated explicitly using the trade-off between return and risk, that is the mean-variance problem ( [20]).
where µ p is the target expected portfolio return and Σ is the variance covariance matrix of the universe of expected returns, µ. The optimization problem has a tractable analytical solution Well documented ill-posed problems arise when we plug-in the sample counterparts of µ and Σ, for medium to large size problems in terms of the number of assets considered. Many regularization techniques have been proposed to deal with this problem ( [21]; [22]; [23]; [24]). The use of regularization techniques also point to different ways to writing up the optimization problem that will be specially useful to the synthetic portfolio framework ( [25]). We can write the variance covariance matrix as the outer product of the returns, and squared first moment, Σ = E[R t R t ] − µµ . Then the empirical counterpart of the expectation of the mean-variance problem is equivalent to the sample mean of the squared l 2 norm, This same setup can be used to find a solution for the optimal tracking problem as a special case of the optimal portfolio problem where instead of targeting a particular return (scalar) for the porfolio µ p we are interested in tracking over time a particular stock, in our case the first stock in the asset space, R 1,t . Let µ p 1 T := R 1,t and R := [R 2,t , . . . , R J,t ] denote the subspace of assets that excludes asset 1, then we get the optimal tracking problem in expression 4. This optimization problem is not very different form minimizing the sums of square residuals if we let, Y := R 1,t and X β := R ∀j =1 w, then we can use ordinary least squares to obtain the portfolio weights. This is a first step toward one solution to the high-dimensional problem by applying regularization to the optimization problem. The least absolute shrinkage and selection operator (LASSO) regularization technique introduced by [26] is the l 1 -penalized version of the optimal tracking problem that gives the solution to the synthetic portfolio. The Lasso regularized solution is obtained by solving, The optimization problem for LASSO provides a long-only portfolios w i ≥ 0 or impose an specific penalty (τ ) only in the short positions (Brodie et al. 2009). This solution has been recently explored in [11] for a generalization of synthetic control methods. The authors propose the use of LASSO and Elastic net as a way to generalize synthetic control methods. This generalization looks into regularization techniques as a way to improve on the original approach proposed by [8] both in terms on the restrictions on the weights and the introduction of exogenous covariates. The authors propose a class of estimators that can be used depending on the size of the estimation window T 3 − T 1 (the number of pre-treatment outcomes) and the number of stocks in the control group J. In other words theses estimators leverage the use of regularization methods and restrictions in the optimization/estimation problem to deal with the challenges of a high-dimensional problem. They proposed an elastic net type penalty for regularization, The elastic net is a regularized regression method that linearly combines the l 1 and l 2 penalties of the lasso and ridge methods. In addition to the penalty τ we introduce a parameter for the optimal linear combination α. Therefore, in this case be have two tunning parameters (τ, α). The authors illustrate the methods using the data for three seminal studies in causal inference.
Finally, since in portfolio optimization it is important to compare any methods to a computationally inexpensive benchmark we can define a naive synthetic portfolio as the solution to the optimization problem where the weights are constant (and sum up to 1) across the members in the control group, w j =w := 1 J−1 , j = 2, . . . , J. The effect of the intervention is the (naive) abnormal returns, The naive synthetic portfolio is the cross sectional simple average of all of the available stock that are not affected by the event. The naive synthetic portfolio can be seen as a special case of the difference-in-difference method (DID). The average cumulative abnormal returns is the simple time average of the estimated abnormal returns over the event window for the firm affected by the event, ACAR 1 = 1 In the DID method the average cumulative abnormal returns for firm 1 (over the event window) is a function of different time series and cross sectional averages over the the returns of firm 1 and the returns of the control group in the estimation and event windows. Let D 1 denote the difference in the returns before and after the event, It is important to note that expression 12 is equivalent to the constant mean model in the traditional event study literature. Let D co denote the difference in the returns before and after the event (where every element in the control group has an equal weight) for the control group, If we denoteR 1,(T 1 ,T 3 ) = 1 as the average returns during the estimation window for firm 1 and the control group (taking a simple average over the cross-section), respectively. Then we can show that the DID average treatment effect using the Difference in Difference method is, This implies that the DID estimator is equivalent to the naive synthetic portfolio estimator with an adjustment to the average return difference between the treated and the control group in the estimation window. This adjustment acts as an intercept correction for the forecast in the event window.
To test for the statistical significance of the average treatment effect over the event window we use the end-of-sample instability test of [27] following recent work by [13]. [13] provides tests statistics and the distribution for average treatment effects estimated by synthetic control methods for two different cases based on the relative size of the events window with respect to the estimation window. Since, short term financial event studies have a small event window (−1, 1) we can use the tests statistics provided by [13] for synthetic control methods. Recall that in expression 5 we noted that the treatment effect on firm 1 is equivalent to the abnormal return of firm 1, AR 1,t ; taking the average over the event window we can obtain the average treatment effect, Recall thatR N 1,t is estimated using the market model or the synthetic portfolio weights. We want to test the null hypothesis, H 0 : AR 1,t = AR 1,0 for all t = T 4 , . . . , T 5 , against a strictly positive treatment effect alternative, H a : AAR 1 = E[AR 1,t ] > AR 1,0 . The test statistic for such a test, We derive the empirical distribution of the statistic by block sampling the estimation and gap window using the length of the event window; first we compute, (18) As mentioned in [13], the empirical distribution of [B T 5 −T 4 ,j ] can be used to obtain critical values for the test statisticB T 5 −T 4 under the null hypothesis. IfB T 5 −T 4 is at the tail of the empirical distribution then we reject the null hypothesis, H 0 : AR 1,t = AR 1,0 for all t = T 4 , . . . , T 5 . This hypothesis test and the statistic is only valid for treatment effects that are constant across the event window, note that this mights be true for short term event studies but no necessarily for long term event studies. The main advantage of the test is that by considering the event as an structural change in the end of the sample then it is possible to use the abundant information in the estimation window to obtain re-sampled values of the statistic and perform inference. One of the biggest obstacles that the traditional framework of short term events studies faces is the very few observations available in the event window for individual events. With these few number of observations (3,11 or 21 days) individual test on abnormal returns have such small degrees of freedom such that they are hardly ever considered; therefore most of the reported inference is based on pooling the events. Andrew's end-of-sample stability test provides a simple framework to overcome this problem in financial event studies. As far as we are aware of although Andrews test is not specifically designed with in the framework the causal inference literature we find that it has not been previously considered as a viable alternative to individual test on average cumulative abnormal returns, [3] and [1].
Simulation
In the previous sections we provide an overview of traditional event studies and four alternative methods for causal events studies in finance based on the synthetic portfolio approach and its relationship to DID in this context. We mention in the introduction the importance of a causal approach to event studies, but in addition these alternative methods are able to handle the high-dimensional challenge brought by the large asset space in the US stock market. The main reason that we want to consider a large asset space is to provide a matching technique that avoids any Ad hoc choice of which stock should be in the control group. This is a desired property since the abnormal returns (estimated effects of the intervention) will be more robust and less sensitive to the choice of the researcher. These regularized estimators have the advantage that they provide automatic punning of the units within the control group and hence they avoid any manipulation in favor of a particular hypothesis; this is desired property of the estimator and the research design ( [28]). From a statistical point we can test the viability of the alternative methods by looking at the biased and variance of the proposed estimators for the average treatment effects. In addition, we measure the mean square error of the synthetic portfolio in-sample and out-off-sample in the estimation window. For this reason split the estimation window into a training and testing window (B.2). Although we have argued that with the synthetic portfolio approach there could be less model risk than accepting the market model to build the potential outcome, there could potentially be the risk of overfitting by the best synthetic tracking portfolio. The mean square error and the estimated treatments effect will give us an idea of the possible trade-off regarding over-fitting, bias and variance of the proposed estimators. We now setup a simulation exercise in order to determine the statistical and financial merits of the alternative methods for event studies. The simulation considers a large dimensional asset space of five hundred assets J = 500. We consider a short term financial event study where cumulative abnormal returns are measured one day before and after the event (−1, 1) for the stock return that is affected by the event, R 1,t . In event studies with daily data the estimation window has a length of 150 trading days. We consider these 150 trading days as the training data and introduce 50 trading days for the testing window. In addition we consider a gap window with a duration of 25 trading days. Therefore the training window [T 1 , T 2 ) will cover trading days [−226, −76) and the testing window will cover trading days [−76, −27). We perform 10, 000 simulations. The data generating process for all of the stocks is a one-factor model (a CAPM model with estimated β's) or a stationary first order autoregressive process. For the one factor model, we first simulate the market return R m,t ∼ N (0, 1), fix a value for α = 0.1 and set a value for β = 0 for the first hundred stocks, for β = 0.2 for the next hundred stocks and so on until the last hundred assets with β = 1. The idiosyncratic component for each stock is simulated as, ε i,t ∼ N (0, 1.5). We use this data generating process to simulate the behavior of all of the stocks and during the entire time line. However, for the stock 1 affected by the event we simulate the effect of the event at the event time T 4 + 1 such that, R N 1,t = R 1,t for t > T 4 is the latent potential outcome, in other words the realization where the event does not take place. The observed outcome (the event takes place) is given by R 1,T 4 +1 = R 1,t + γ, where γ = −0.035 implies a drop in the returns of 3.5% once the event takes place, for example a merger announcement. For the first order autoregressive process, we simulate the idiosyncratic component ε i,t as we did in the previous case and start the recursion with R i,0 = 0. We consider different values for φ consistent with stationarity and α = 0.1. The effect of the event on stock 1 is modeled as in the previous case. An important difference from the previous case is that we do not have an explicit simulation of the market returns R m,t , required to estimate the market model. In this case we use as the market return the ex-post equal weighted portfolio (excluding the stock 1), R m,t := 1 J−1 J j=2 R j,t . After simulating the returns using each of the data generating process we estimate the abnormal return at the event date T 4 + 1 which is equivalent to the treatment effect,γ. We look at the average treatment effect over all the simulations and compare across the different models.
The results are summarized in tables A.1, A.2 and A.3. The mean square error in the training and testing window indicates that there is some degree of over-fitting in synthetic portfolio method, specially using Elastic-Net. The overall performance of the Elastic-Net estimator is above all other estimators in the training window but not in the testing window. On the other hand the performance for the naive synthetic and the difference-in-difference estimator is rather stable at both in and out-of-sample. The Market model also shows good performance, specially when the the asset of interest R 1,t has a true β that approaches one (β = 0.8 and β = 1). This is to be expected because this is the situation where the market model provides the best tracking performance. When the data generating process is an autoregressive model and the process has a strong autocorrelation the performance of all of the methods is rather poor. In table A.3 we look at the estimated treatment effect and the variance (in parenthesis). The true treatment effect is γ = −0.035 and hence we are interested in the performance of the estimators that indicate the lowest bias. The lowest bias is obtained by the Elastic-Net estimators for β = 0.5, 0.8, the synthetic naive estimator for β = 0.2 and the market model for β = 1. With respect to the variance all of the estimators indicate a large uncertainty around the point estimates and it is not clear which is preferable in terms of the variance, therefore relative performance can only be based on the bias. When the data generating process is an autoregressive model with either a strong persistence or driven only by noise performance of all of the estimators is poor.
Empirical application: Merger announcements
We obtain M&A data from the Thomson Reuters SDC Platinum Financial Securities database. Thomson Reuters SDC collects all M&A transactions in the US that involve at least 5% of the ownership change of a company. We apply several filters to the M&A data that are common practice in the literature [29]. We download all US M&A transactions from 2003 to 2014. After applying these filters and merging the resulting events with the CRSP databases, we identify 5, 025 M&A announcements from June 2003 to December 2014. The theoretical and empirical literature on the effects of merger announcements is extensive and our aim is not to provide an exhaustive overview (there are some very comprehensive reviews, [30] more recently an on-line special issue of financial management).
We focus on identifying the main empirical results and the well established empirical regularities that have been published using predominantly traditional event study methodologies with a large scale sample, that it where the number of events exceeds 1, 000 observations. We also look at the more recent literature that has used propensity score matching as a first step toward balancing the treatment and control groups, in particular, [7] and [31] . Table A.4 provides a short summary of the effects of mergers on market value of the firm. This summary is meant to be illustrative rather than comprehensive. One important observation is that most of these large scale studies have been performed with data from the 80's and 90's. The literature is concerned not only in a brought measure of impact for all firms but in a quest to determine how theses effects change along different dimensions, for example by taking the point of view of the acquirer/bidder or the target, whether the company is publicly traded or a private company, if the merger is financed using cash or stock, or if there is a diversification motive from the bidder 7 . Some of the empirical regularities are: positive (negative) effects are observed for bidders acquiring private (public) firms; positive effects are observed for target firms when the acquierer is either a public or a private firm, but the value creation is larger for the former; all stock (all cash) financed bids have a negative (positive) effect on stock performance, unless the target is a private firm where stock finances acquisitions create value. The more recent literature has also taken advantage of propensity score matching to rebalanced the sample of teated and control firms, in the first example all stock finances acquisitions still create negative effects as apposed to all cash deal, however by using propensity score matching the difference between the effects for all-cash or all-stocks is smaller. Finally, the observed premium for target public firms versus private firms becomes statistically not significant by after using propensity score.
We examine the effects of merger announcements of the firm value in the short term using the competing method, but we reports the findings that used the traditional method (the market model), synthetic portfolio (where estimation is base on Elastic-net version with non-negative portfolio weights), and difference-in-difference (we have shown in section 3 that is related to the naive synthetic portfolio.). We also estimates these effects along different subsets of data that have been well studies in the literature, A.4.
The results are presented in figure ??. The average estimated effects are nominally very similar across the different methodologies. It is important to note that these average are estimated using the cross section of events, and although the averages are similar in magnitude, we reject the null hypothesis that the sample of individual measures are statistically equivalent across the different methodologies. These results are consistent with the simulations where we find close estimates of the true treatment across the different methodologies. The sample specific results are consistent with the literature, for example the effect on the bidders performance is small varying form 0.52% to 0.71% and for the target the it is quite large varying from 19.4% to 20%. In the literature, based on an earlier historical sample, these estimates are around 1.8% and 27.7%, respectively. We also observe the change in the sign, for a bidder acquiring a private firm (0.78% to 1%) versus a public firm (−0.67% to −0.57%). On the other hand with all of the three methods we do not observe that 100% stock financed mergers has a negative affect on the bidder after the announcement, the effect is positive (0.27% to 0.38%) but small than in the case of 100% cash deals (0.5% to 0.66%). For all other cases such as diversified/un-diversified, a single or more than one bidder the signs and the magnitude are within the range of the results in the literature.
As mentioned in 3 one of the additional benefits of framing traditional event studies within causal inference is the use of Andrews end-of-sample instability test. This test is not affected by the extremely small sample of the event window that makes traditional parametric t-test of non-parametric wilcoxon test, difficult to apply in this context. Therefore we can use the subset of events where we can reject the null hypothesis that the treatment effect is zero and therefore we have events either with strictly positive or negative effects of the merger. That is we are interested in the number of cases where the effects is different form zero, these samples represent anywhere from 8% to 25% of the total sample, for example out of 1, 009 events for acquisitions where the announcement involves a public target, there are 144 cases where the effect on the bidder is significantly positive and 223 cases where it is significantly negative. This means that overall negative average effect is strongly affected by a number of treatment effects for approximately 640 cases where the true effect is statistically non-existent. In figure B.3 we use the sub-samples to get an idea of the magnitude of the effects along the different cases considered in the literature. This provides a way to derive some lower and upper bound for the treatment effect that circumvents that controls for the individual events for which the true effect of the merger announcement is zero. The magnitude of these effects are more or less equivalent across the different methodologies therefore we only report the results based on the traditional market model. The results indicate, that although as we saw before that the overall average effect of merger announcements is negative on the firm value of the bidder if the target is a public company, there are 146 events in the sample where the effect is in average 8.4%. Another interesting case is for 100% stock financed mergers, where the literature reports negative effects on the bidder in the magnitude of −3.5%, and we find an overall small but positive effect 0.37% and also a subset of firms where the average effect is significantly positive 16% and significantly negative −9.1%.
Conclusions and future research
The methodological framework for events studies was laid out in the late sixties and although the testing framework has evolved, the estimation approach has not change profoundly since it was laid out. Research on methods has also received little attention and even though the estimation of cumulative abnormal returns is not the main focus of the papers in empirical finance they still represent an important first in many empirical motivated research questions across the different areas of finance. In the last twenty years research in causal inference methods has introduced more credible tools to re-examine causal effects that are at the core of many questions that motivate empirical research. This paper provides a simple unifying framework between traditional event study estimation methods and modern causal inference methods, in particular synthetic control methods. The framework is based on common tool in finance which is an optimal tracking portfolio. Although causal inference methods are already within the toolkit of empirical finance researchers, in particular the use of propensity score matching, a competing method like synthetic control has not been extensively used. The simulation exercise and the empirical application indicate that the market model provides a counterfactual as good as the synthetic portfolio estimators we introduce. Fitting the return of interest through the market model is also based on the idea that a portfolio provides a good potential outcome, therefore the results are not too surprising. Additional research will eventually tell us if the conditions under which this performance is adequate hold in general or under specific conditions for example if the asset space for estimating these portfolios is sufficiently large and/or the granularity of the elements in the portfolio. Note: The mean square error is estimated in-sample using the information in the testing subset of data from the estimation window, t ∈ [T 1 , T 2 ). All of the estimators that use regularization impose non-negative weights. The estimator Syn.Elastic-Net * includes a constant in the estimation of the weights. Note: The mean square error is estimated out-of-sample using the information in the testing subset of data from the estimation window, t ∈ [T 2 , T 3 ). All of the estimators that use regularization impose non-negative weights. The estimator Syn.Elastic-Net * includes a constant in the estimation of the weights. Note: Average treatment effects are reported for the simulation exercise along with the variance in parenthesis. The true treatment effect is γ = −0.035 and the number of simulations is 10, 000. All of the estimators that use regularization impose non-negative weights. The estimator Syn.Elastic-Net * includes a constant in the estimation of the weights. Note: 1) public and private target firms in the merger. 2) underspecified or diversified deals, meaning that the target firms belongs or does not belong to the same sector.
3) The merger is finance completely by cash or by stocks form the point of view of the acquirer. 4) The target firm is public or private and the acquirer is financing the bid using stocks. 5) propensity score matching is used to balance the treatment and control sample in the estimation of the treatment effects. 6) public and private bidding firms in the merger (in this particular result the difference becomes statistically insignificant once propensity score is used). | 2019-05-20T13:03:30.621Z | 2017-12-07T00:00:00.000 | {
"year": 2017,
"sha1": "8247175eea1fad0e8990771a6948e773458a7d0f",
"oa_license": "CCBY",
"oa_url": "https://repository.urosario.edu.co/bitstream/10336/14168/4/dt211.pdf",
"oa_status": "GREEN",
"pdf_src": "ElsevierPush",
"pdf_hash": "e96d8d2471ce4dcee890a1efa5b344984ced7537",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
245610217 | pes2o/s2orc | v3-fos-license | Analgesic, Anti-inflammatory, and Anti-pyretic Activities of Crinum pedunculatum R.Br. Bulb Extracts
Background: Crinum pedunculatum R.Br. bulbs are used for the topical management of inflammation by traditional healers in the southern region of Ghana. Objectives: This study aims to assess the analgesic, anti-inflammatory, and anti-pyretic activities of different solvent extracts of Crinum pedunculatum . Methods: The analgesic, anti-inflammatory, and anti-pyretic activities of the bulb extracts of Crinum pedunculatum were determined in rats at doses of 100, 200, and 400 mg/kg. The acetic acid induced writhing test was used to determine the analgesic activity, carrageenan was employed to determine the anti-inflammatory activity, and Brewer’s yeast-induced pyrexia was studied to evaluate the extract’s anti-pyretic activity. Results: All solvent extracts of Crinum pedunculatum significantly decreased ( P < 0.001) the frequency of writhing in rats at all doses with 400 mg/kg of the ethanolic extract showing a 98% inhibition comparable to that obtained with diclofenac sodium at 94%. These extracts also caused the inhibition of the increase in paw diameter induced by the administration of carrageenan with 400 mg/kg of the ethyl acetate extract of Crinum pedunculatum causing a 97% inhibition of paw oedema. All doses (100, 200, and 400 mg/kg) of the methanol extract caused a significant decrease ( P < 0.0001) in the temperature of rats induced via the administration of yeast with ethanol and ethyl acetate extracts also showed a significant reduction ( P < 0.001) in rectal temperature. Conclusion: These results obtained indicate that the methanolic, ethanolic, and ethyl acetate extracts of Crinum pedunculatum R.Br. possess analgesic, anti-inflammatory, and antipyretic activities.
INTRODUCTION
Several disease conditions are routinely present with pain and pyrexia.Nonsteroidal anti-inflammatory drugs (NSAIDS) are frequently prescribed to manage these conditions, but gastrointestinal bleeding, perforation, exacerbation of gastric ulcers and cardiac irregularities are some of the side effects associated with their use. [1]Natural products and their derivatives are principal sources for the management of several diseases worldwide. [2]The scientific investigation of plants used as analgesics, anti-inflammatory, and antipyretic agents in traditional medicine is a strategy that has yielded and will continue to yield promising prospects.Crinum species have a substantial medicinal reputation as potent traditional remedies with their use extending to present times in Africa, tropical Asia, and South America. [3,4]They are used traditionally as emetics, laxatives, expectorants, antipyretics, among others.Extracts of Crinum species have been reported to possess cytotoxic, antitumor, antiviral, antimicrobial, antimalarial, analgesic, and immunomodulating activities.][8][9] Crinum pedunculatum, also known as swamp lily, belongs to the family Amaryllidaceae.Plants of the Crinum species have been reported to contain phytoconstituents such as coumarins, catechic tannins, triterpenes, anthocyanidins, polyphenols among others. [10]The bulbs of Crinum pedunculatum plant are used by traditional healers of the southern region of Ghana for the management of inflammation, pain, and fever.However, there are no scientific studies carried out to validate these activities and to the best of our knowledge no pharmacologic or biologic evaluations of any activities concerning this plant have been reported.Consequently, this study was carried out to evaluate the analgesic, anti-inflammatory, and anti-pyretic activities of the methanolic, ethanolic, and ethyl acetate extracts of the bulbs of Crinum pedunculatum.
Plant collection and identification
Bulbs of Crinum pedunculatum were collected from Oframatin, Kwahu-Asakrakra in the Eastern region of Ghana (Latitude: 6.62942 N 6° 37'45.9048".Longitude:-0.68647W 0° 41'11.30253").They were identified and authenticated by Mr Clifford Asare, an herbalist at the Herbal Medicine Department, Faculty of Pharmacy and Pharmaceutical Sciences (FPPS), Kwame Nkrumah University of Science and Technology, Ghana.A sample was kept at the herbarium (voucher specimen number CP/01/19) of Central University, Ghana, where part of the research was carried out.
Plant preparation and extraction
The dried bulbs were ground into coarse powder and extracted with 99.8% ethanol, 99.8% methanol, and ethyl acetate using cold maceration method.The preparation was shaken intermittently for 7 days, after which filtration through a No. 1 Whatman filter paper was carried out.A rotary evaporator was used to evaporate the solvents, and the dried extracts were stored in separate air-tight containers and refrigerated at 4°C for use.
Experimental animals
Wistar albino rats weighing 93-110 g obtained from the University of Ghana animal house were used in this study.They were allowed to acclimatize for 14 days.Animals were kept in plastic cages and fed with a standard pellet diet and granted unrestrained access to clean water.All experimental protocols and handling of animals were carried out in compliance with the Institute for Laboratory Animal Research [11] and were authorized by the Institutional Review Board on Animal Experimentation, Kwame Nkrumah University of Science and Technology with the ethics reference number FPPS/PCOL/010/2019.
Phytochemical screening
Preliminary phytochemical analysis was conducted on the methanolic, ethanolic, and ethyl acetate extracts of Crinum pedunculatum using standard methods described by Trease and Evans. [12]Qualitative screening was carried out for tannins, phlobatannins, flavonoids, saponins, cardiac glycosides, and alkaloids.
Acute toxicity test
Acute toxicity test was performed on the methanol, ethanol, and ethyl acetate extracts of Crinum pedunculatum following the guidelines stated by The Organization for Economic and Co-operative Development. [13]
Acetic acid induced writhing test
The method described by Koster et al. [14] with some adjustment was employed to determine the analgesic effect of the crude extracts in rats.Experimental animals were weighed and distributed into 5 groups of 5 animals each.Group 1 served as the negative control and were administered normal saline 10 ml/kg, group 2 received diclofenac 75 mg/kg (standard drug), while groups 3, 4, and 5 received Crinum pedunculatum extracts at 100, 200 and 400 mg/kg respectively.All administrations were done orally.Thirty minutes after pretreatment, each animal received 1% acetic acid (10 ml/kg) intraperitoneally.Frequency of abdominal writhes were counted for 15 min commencing 5 min following acetic acid administration.Percentage of analgesic activity was determined using the following formula:
Percentage inhibition of writing
Where W = number or frequency of writhes
Anti-inflammatory activity
Carrageenan-induced rat paw oedema Anti-inflammatory activity was carried out using Wistar rats according to the method reported by Winter et al. [16] Group 1 served as the negative control and were administered normal saline 10 ml/kg, group 2 (positive control) received diclofenac 75mg/kg (standard drug), and groups 3, 4 and 5 were administered 100, 200 and 400 mg/kg of Crinum pedunculatum extract respectively.One hour following pretreatment, inflammation was induced by the administration of 0.1ml carrageenan (1%w/v) in 0.9% normal saline to the right hind paw of each animal by sub-plantar injection.Diameter of the injected paw was measured every hour for 6 hr using digital callipers.Percentage reduction in diameter of the treated group was calculated as follows: [17] ( ) control ( ) treated Percentage reduction in paw diameter 100 ( ) control
Ct Co Ct Co Ct Co
Where C t = paw diameter at time t; C 0 = paw diameter before carrageenan injection.
Anti-pyretic Activity
Anti-pyretic activity was determined in rats using Brewer's yeast following standard procedures. [18,19]The basal rectal temperature of each animal was taken using a clinical digital thermometer, after which pyrexia was induced by the subcutaneous injection of 20% w/v Brewer's yeast suspension in normal saline at 10 ml/kg of rat.Increase in rectal temperature of each animal was recorded 18 hr after yeast injection and animals that showed a rise in temperature of ≥ 1F (0.6°C) were chosen for the study.Animals were subsequently distributed into five groups of 5 animals with group 1 receiving 10 ml/kg normal saline, group 2 receiving 125 mg/kg paracetamol, groups 3, 4, and 5 receiving 100, 200 and 400 mg/kg of Crinum pedunculatum extract respectively.Rectal temperature of each rat was recorded for the first 6 hr and at 12 and 24 hr to confirm activity of the extract.
Statistical analysis
All results are expressed as mean ± standard error of the mean (SEM).Results were analysed statistically using GraphPad prism version 8 software and P< 0.05 was regarded as statistically significant.
RESULTS
Phytochemical analysis of Crinum pedunculatum extracts indicated the presence of saponins, alkaloids, and phlobatannins among others (Table 1).
Acute toxicity test
Acute toxicity experiments carried out with all Crinum pedunculatum extracts showed that the limit dose of 2000 mg/kg did not result in mortality and any visible toxic manifestations such as changes in skin, fur, eyes, respiration, tremors, convulsions, salivation, diarrhoea, sleep and lethargy.
Carrageenan-induced inflammation in rats
Oral administration of methanol, ethanol, and ethyl acetate extracts of Crinum pedunculatum showed anti-inflammatory activity by significantly decreasing paw oedema induced by carrageenan (Figure 2).The protection from inflammation exhibited by the ethanol extract was non-dose dependent with 100 mg/kg of the ethanol extract showing better protection than 200 mg/kg and 400 mg/kg showing the highest protection.Although all doses of the ethyl acetate extract showed significant inhibition of inflammation, a ceiling effect was observed at 200 mg/kg.The methanol extract at 400 mg/kg showed a significant reduction (P< 0.0001) in paw diameter from the first to the sixth hour compared to the negative (normal saline) and positive control (diclofenac).
Brewer's yeast-induced pyrexia
Rectal temperature was recorded for each animal every hour for 6 hr after the administration of different solvent extracts of Crinum pedunculatum as well as the standard drug paracetamol.All doses of the methanol and ethyl acetate extracts showed significant reduction of rectal temperature (P< 0.0001 and P< 0.001), which was not dose-dependent.The ethanol extract at the dose of 200 and 400 mg/kg caused significant reduction in temperature (P< 0.0001 and P< 0.001), but a ceiling effect was observed at 200 mg/kg (Figure 3).
DISCUSSION
Efforts made to develop new, efficacious and relatively safe agents for the management of inflammation are still necessary today to find an alternative to the use of NSAIDs.Natural product drug discovery remains a major contributor to the development of novel therapeutic agents. [20]his study was carried out to evaluate the analgesic, anti-inflammatory, and antipyretic activities of the bulbs of Crinum pedunculatum, family Amaryllidaceae.Acute toxicity experiments showed no mortality all the extracts at 2000 mg/kg.Analgesic activity was determined by intraperitoneal injection of acetic acid, which results in contortions of the animal's abdominal muscle (writhing) along with the stretching of the hind limbs, and these reactions are considered to be caused by local peritoneal receptors. [21]In the acetic acid-induced model, several cytokines are released like interleukin 1β, tumour necrosis factor α, and chemokines which act together to induce writhing. [22,23]Furthermore, several studies have shown elevated levels of prostaglandins, mainly PGF2α, PGE2, PGI2 in the peritoneal fluids of animals administered with acetic acid [24,25] and NSAIDs have been established to alleviate pain by inhibiting the synthesis of prostaglandins along with several inflammatory mediators by inhibiting these cyclooxygenase enzymes. [26]Therefore, any substance that causes a reduction in the frequency of abdominal writhes induced by acetic acid can be postulated to possess analgesic effect.The methanol, ethanol, and ethyl acetate extracts of Crinum pedunculatum significantly decreased the frequency of abdominal writhes in a dose-dependent form (Figure 1).It can be postulated that the analgesic activity observed by the extracts could be due to the inhibition of the pathway involved in the synthesis of prostaglandins as well as the local inhibition of peritoneal inflammation.Carrageenan-induced hind paw oedema was employed to investigate the anti-inflammatory activity of Crinum pedunculatum.Oedema induced by carrageenan is regarded as the initial phase of the process of inflammation and is characterized by fluid and cell exudation. [27][29] This study showed that the standard drug diclofenac caused the inhibition of paw oedema from the 3 rd to the final hour, which is consistent with its mechanism of action.Diclofenac acts by inhibiting cyclooxygenase-1 and 2 enzymes, thereby inhibiting the synthesis of prostaglandins released during the late phase after carrageenan administration. [30,31]All doses of the ethyl acetate extracts showed a significant decrease in paw diameter from the first hour after carrageenan administration; 200 and 400 mg/kg of the methanol extract as well as 100 and 400 mg/kg of the ethanol extract also showed significantly lower paw diameter compared to the negative control.(Figure 2).The highest and most significant inhibition of the increase in paw oedema caused by carrageenan was observed at the fourth hour for all solvent extracts of Crinum pedunculatum (Figure 2).It can be postulated that the ethanol, methanol, and ethyl acetate extracts of Crinum pedunculatum inhibit fluid exudation, as well as several mediators of inflammation such as serotonin and histamine that contribute to the acute inflammatory process.Further studies are required to determine the specific inflammatory mediators inhibited by these extracts.
The extracts were also evaluated for antipyretic activity using Brewer's yeast model which is associated with fever through an inflammatory reaction [32] caused by the synthesis of pro-inflammatory cytokines like interleukin-1β and interleukin-6, interferon-α, tumour necrosis factor α and prostaglandins E2 and I2.These mediators are responsible for causing an increase in body temperature through their action on the brain. [33,34]Antipyretic agents like paracetamol, used as a standard drug in this study, exert their effects by decreasing prostaglandin synthesis through the inhibition of cyclooxygenase enzymes as well as by activating anti-inflammatory signals at the site of tissue damage. [35]The methanol, ethanol, and ethyl acetate extracts all showed a significant reduction in temperature induced by the administration of yeast (Figure 3), which could be as a result of the inhibition of pro-inflammatory cytokines.
Phytochemical analysis was carried out on all extracts of Crinum pedunculatum used in this study.[38][39] It can therefore be postulated that the analgesic, anti-inflammatory, and anti-pyretic activities observed by the methanol, ethanol, and ethyl acetate extracts of Crinum pedunculatum may be due to the presence of these phytochemical constituents.
CONCLUSION
This study showed that the methanol, ethanol, and ethyl acetate extracts of the bulbs of Crinum pedunculatum R.Br.possess significant peripheral analgesic, anti-inflammatory and antipyretic activities justifying their use by traditional healers in the southern regions of Ghana.Further studies are ongoing to isolate and characterize compounds that are responsible for these observed activities to offer new leads for the development of agents with analgesic, anti-inflammatory, and antipyretic effects.
ACKNOWLEDGEMENT
The authors are grateful to Mr Kwame Koomson for his technical assistance all through the research as well as Mr Kevin Fiati for his help with the phytochemical screening. | 2022-01-01T16:16:10.782Z | 2021-12-29T00:00:00.000 | {
"year": 2021,
"sha1": "d42b8dcabc36f56d6e2791b22de0bba51063bbc9",
"oa_license": "CCBY",
"oa_url": "https://www.phcogres.com/sites/default/files/PharmacognRes-14-1-24.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4dc51d98cee05419805dffa2abe347a1e9286242",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
238410868 | pes2o/s2orc | v3-fos-license | Which Is More Predictive Value for Mechanical Complications: Fixed Thoracolumbar Alignment (T1 Pelvic Angle) Versus Dynamic Global Balance Parameter (Odontoid-Hip Axis Angle)
Objective In this study, we investigate about relationship between postoperative global sagittal imbalance and occurrence of mechanical complications after adult spinal deformity (ASD) surgery. In global sagittal balance parameters, odontoid-hip axis (OD-HA) angle and T1 pelvic angle (TPA) were analyzed. Methods Between January 2009 and December 2016, 199 consecutive patients (26 males and 173 females) with ASD underwent corrective fusion of more than 4 levels and were followed up for more than 2 years. Immediate postoperative and postoperative 2 years whole spine x-rays were checked for evaluating immediate postoperative OD-HA, TPA, and other parameters. In clinical outcomes, back and leg pain visual analogue scale, Scoliosis Research Society-22 spinal deformity questionnaire (SRS-22), Oswestry Disability Index (ODI), 36-item Short Form Health Survey (SF-36) were evaluated. Results Based on the occurrence of mechanical complications, a comparative analysis was performed for each parameter. In univariable analysis, mechanical complications were significantly much more occurred in OD-HA abnormal group (odds ratio [OR], 3.296; p<0.001; area under the curve [AUC]=0.645). In multivariable analysis, the result was much more related (OR, 2.924; p=0.001; AUC=0.727). In contrast, there was no significant difference between normal and the occurrence of mechanical complications in TPA. In clinical outcomes (normal vs. abnormal), the differences of SRS-22 (0.88±0.73 vs. 0.68±0.64, p=0.042), ODI (-24.72±20.16 vs. -19.01±19.95, p=0.046), SF-36 physical composite score (19.33±18.55 vs. 12.90±16.73, p=0.011) were significantly improved in OD-HA normal group. Conclusion The goal of ASD surgery is to improve patient life quality through correction. In our study, TPA was associated with spinopelvic parameter and OD-HA angle was associated with health-related quality of life and complications. OD-HA angle is predictable factor for mechanical complications after ASD surgery.
INTRODUCTION
Degenerative changes have the potential to greatly disrupt the normal curvature of the spine, leading to sagittal malalignment. 1 The interaction between deformity and compensatory mechanisms depicts the final presentation of patients with adult spinal deformity (ASD). 2 ASD is a debilitating condition that often requires surgical correction. In case of severe deformity, surgical treatment has been shown to offer better clinical and radiological outcomes compared with nonoperative treatments. [3][4][5] However mechanical failure, such as proximal junctional kyphosis (PJK), proximal junctional failure (PJF), or rod fracture is one of the most common complication and have substantial incidence in ASD surgery. There were many studies to investigate about risk factors or predictive factors of mechanical failure after ASD surgery. [6][7][8][9][10][11] Among these radiologic parameters, increasing evidence implies that sagittal vertical axis (SVA) alone does not fully reflect sagittal malalignment, and global spinal pelvic alignment such as the T1 pelvic angle (TPA) assessment provides a more complete picture of the mechanisms for maintaining an upright posture. 12 Thus, TPA is one of the global tilt parameters that is not affected by posture with good parameter for showing thoracolumbar alignment. On the other hand, as Le Huec et al. 13 summarized the sagittal balance of the spine, odontoid-hip axis (OD-HA) angle includes a cervical alignment and have been proven to represent a constant global sagittal parameter which could show current patients posture according to gravity line. 14 TPA corresponds to the angle between a line connecting the center of T1 to the center of the femoral heads and the line to the center of the S1 endplate. It has been correlated with pelvic tilt (PT) and SVA, but does not account for pelvic incidence (PI) value. The TPA target value is under 14° and OD-HA angle is the angle between the vertical and the highest point of the dens connecting the center of the acetabulum. 15,16 The OD-HA angle target value is +2° to -5°. This angle takes into account the position of the cervical spine, the thoraco-lumbar spine and pelvis, and may benefit an overall analysis and assessment of the risk of PJK after ASD surgery (Fig. 1). 13,14,17 Although both of these parameters have been proved to reflect global balance, there is little comparative study between these 2 parameters with regard to impact on mechanical complications or patients' reported outcome.
Therefore, this study aimed to investigate which one would be a good representation of a patient's global balance, to predict clinical outcome and the occurrence of mechanical complica-tions after surgery for patients with ASD.
Patient Population
We retrospectively reviewed patients with ASD who underwent posterior spinal fusion and instrumentation in 2 centers. Inclusion criteria were as follows: (1) patients who underwent surgical corrective surgery for ASD; (2) those with at least one of the following radiologic criteria: coronal Cobb angle more than 20°, SVA more than 5 cm, PT more than 25°, and/or tho- ) is the angle between the vertical and the hightest point of the dens connecting the center of the femoral heads (black dotted line, center of the black circles). The OD-HA angle target value is +2° to -5°. (B) T1 pelvic angle (TPA). TPA (white dotted lines) corresponds to the angle between a line connecting the center of T1 to the center of the femoral heads (black dotted line, center of the black circles) and the line to the center of the S1 endplate (black line). The TPA target value is under 14°.
Radiological Assessments
In order to minimize the error, our study used the radiographic measurement manual introduced by the Scoliosis Research Society for whole spine radiograph imaging. A 36-inch whole spinal anteroposterior and lateral planar radiographs were collected at a distance of 72 inches from the film. The patient was standing in a comfortable position with the knees fixed, feet shoulder-width apart, looking straight ahead, elbows bent, and the knuckles of the supraclavicular fossa bilaterally. 18,19 All radiologic evaluation of OD-HA angle and TPA were conducted at 4 weeks postoperatively. The normal value of OD-HA angle is +2° to -5°, and normal value of TPA was under 14°. 13 And whole spine anteroposterior/lateral was performed at postoperatively 2 years to evaluate mechanical complications; such as PI, sacral slope, L1-S1 lordosis (LL), PT, SVA, and PI-LL. In order to reduce the error between individual measurements, a software program called Surgimap (https://www.surgimap.com/) was used. Also, level of fusion vertebra, uppermost instrumented vertebra (UIV) and lowest instrumented vertebra (LIV), and state of spinopelvic fixation (SPF) were conducted.
Mechanical Complications and Clinical Outcomes
Mechanical complications were defined as PJK or PJF, distal junctional kyphosis (DJK) or distal junctional failure, rod fracture, and implant-related complications. 20,21 Implant-related complications were defined as rod breakage or prominence, painful implant, screw breakage, loosening, or malposition, implant (interbody graft, hook, or set-screw) dislodgement. 20,22 In clinical assessments, patients reported pre-and postoperative 24-month back and leg pain using a visual analogue scale (VAS) scored from 0-10. The Oswestry Disability Index (ODI), Scoliosis Research Society-22 spinal deformity questionnaire (SRS-22), and 36-item Short Form Health Survey (SF-36) were used to measure health-related quality of life (HRQoL) measures.
Statistical Analysis
Statistical analyses were performed using SAS 9.4 (SAS Institute Inc., Cary, NC, USA). Demographic and radiological data were compared using independent t-test and categorical variables using chi-square test or Fisher exact test. The logistic regression model is established with mechanical complications, PJK, PJF, and implant-related failure as outcome. The results are expressed as mean ± standard deviation or number (percentage). A p-value less than 0.05 was considered statistically significant.
RESULTS
A total of 199 patients (26 males and 173 females) were retrospectively reviewed. The average age was 67.36 years (range, 49-80 years), and they were followed for an average of 30.54 months (range, 24-118 months).
Patients were classified according to normal TPA and OD-HA angle values. In the OD-HA angle group, 102 patients were in the normal range and 97 patients were in the abnormal range, In the TPA group had 59 patients with a normal range and 140 patients with an abnormal range. Although the OD-HA angle group showed no difference between normal and abnormal groups in demographic comparisons, the TPA group had a high average age and female ratio in the abnormal group. In radiological assessments, postoperative sagittal balance parameters were compared with fusion segment, UIV, LIV, and SPF via whole spine radiographs anteroposterior/lateral view for the 2 years after surgery. In postoperative parameters, in OD-HA angle groups, the normal group was on average close to normal compared to the abnormal group, but there was no statistical significance. On the other and, the TPA group showed differences in SVA, PI-LL, PI, and PT values, which were statistically 600 www.e-neurospine.org significant. For instrumentation, on average, there were 7 fusion segments, T11-12 for UIV, L5-S1 for LIV. In these results, both OD-HA angle and TPA were not different in normal and abnormal groups. In SPF, 91 patients were administered and 108 were not. In this result, OD-HA angle was significantly more frequent in the normal group, and there was no difference between the 2 groups in TPA (Table 1).
In clinical assessments, back and leg VAS related to pain and ODI, SRS-22, SF-36 related to functional impairment were analyzed. First of all, there was no significant difference in pain between normal and abnormal groups in the OD-HA angle group. However, there were significant differences in the change values of ODI, SRS-22, and SF-36 physical composite score related to the functional impairment. On the other hand, in the TPA group, there was no significant difference in functional impairment between normal and abnormal groups, but in the case of pain, the results were particularly favorable in the back pain, which was statistically significant ( Table 2).
In order to investigate the correlation more closely, a logistic regression was constructed using mechanical complication, PJK, PJF, and implant-related complication as outcomes. In univariate analysis, OD-HA angle, age, BMD, BMI, postoperative SVA was related with postoperative mechanical complication. In multivariable analysis, OD-HA angle was related with postoperative mechanical complication (OR, 2.924; p = 0.001; AUC = 0.727) ( Fig. 2).
DISCUSSION
Recent studies on outcomes following ASD surgeries have shown high rates of complications (8.4%-42%) and revision rates (9%-17.6%). 2,12,[23][24][25] In our study, the overall mechanical complication occurred in about 42%, and revision rate was about 21%. This is slightly higher than other studies, but does not show much difference. 3,11,22 The occurrence of mechanical complications after ASD surgery has already been dealt with in several studies. In previous studies, thoracoplasty, posterior spinal fusion, combined an- 11 demonstrated that PJK can be minimized by postoperative normalization of global spine alignment and balance. Thus, we analyzed the difference according to whether normality of the postoperative global balance parameters TPA and OD-HA angle.
It is done through cervical curvature and lumbar lordosis in order to maintain a horizontal gaze and to free the upper limbs. It is important to analyze the problem statically and dynamically to understand the conditions required for this balance. Recently, several studies demonstrated that OD-HA angle was characterized the overall spinal balance, remains constant whatever the age and despite variations of lordosis (which decreases with loss of disc height) and the presence of compensation mechanism. And it hardly varies and is a good way to study the overall sagittal balance. It integrates the cervical spine and head and stays constant even in elderly if they are asymptomatic. [13][14][15] In Dubousset's conus of economy (ref), the concept of balance includes from head to lower limbs. Therefore, the center of the head, that is, the center of C2, which is a line descending from the center of the external auditory meatus, can be regarded as the center of gravity. For that reason, OD-HA could be a good indicator of global balance in terms of the concept of Dubousset's conus of economy that global balance is the ability of a person to stand upright with respect to gravity and that it is efficient to use the least energy. However, there are not many stud- Values are presented as number (%). OD-HA, odontoid-hip axis; TPA, T1-pelvic angle; PJK, proximal junctional kyphosis; PJF, proximal junctional failure. *p < 0.05, statistically significantly differences in chi-square test. 16 introduced about TPA, and several studies reported it is related with clinical outcomes of patients' mechanical complication after ASD surgery. 12,16,20,21,34 TPA is similar to the spinopelvic angle, allows the patient to check thoracolumbar alignment well, and is not affected by changes in the patient's posture, so it can be evaluated objectively. It can be assumed that there may be a downside to being difficult to know exactly in terms of the ability to stand in the Dubousset's conus of economy. In our study, the normality of TPA was related to the normal value of the spinopelvic parameter after surgery, and was related to the pain parameters. The sagittal spinopelvic param-eters were related with chronic back pain and/or HRQoL. [34][35][36] It can be seen that this contributed to the improvement of back pain by sufficiently making lordosis through correction of the sagittal imbalance. TPA has a certain value even in the stooping posture of the patient because the alignment of cervical spine and the horizontal gaze of the patient are missing. There was no research on whether these differences were related to the prediction of mechanical complications. In postoperative stooping posture related with global imbalance of the patient after ASD surgery, it may be due to pain, and there may be various reasons. Such as, PJK, DJK, pain, insufficient decompression. If the patient's global balance cannot be maintained due to various reasons, assuming that the OD-HA angle might come out www.e-neurospine.org 605 poorly and TPA remains constant, we studied whether this difference is different in the prediction of the patient's postoperative prognosis, that is, the mechanical complication. Results in our paper, OD-HA angle showed better results.
In several studies have reported that spinopelvic fixation affects the occurrence of PJK. In several studies reported SPF with iliac screws had high rates of lumbosacral fusion and low incidence of mechanical complications and revision surgery for PJK 37 and reduced sacroiliac joint pain after multisegment spinal fusion after SPF with S2 alar iliac screws. 38 Otherwise, some studies reported although the rigid SPF has decreased the risk for distal screw loosening, cyclic loading during daily activities might lead to fatigue of the posterior instrumentation, which can result in mechanical long-term complications such as nonunion and eventually increase the risk of iliac screw loosening, development of PJK, PJF, and pseudarthrosis or pedicle screw loosening at L5-S1 level. 10,39,40 In our study, statistical significance was not observed, but there was a force to SPF was related with development of mechanical complication especially PJK/PJF. Also, many articles reported that older age, osteoporosis, and obesity are important risk factors of mechanical complication, PJK, and PJF. Lau et al. 32 demonstrated that age was an important risk factor of PJK and PJF. And high BMI was related with worse sagittal alignment after ASD surgery and worse postoperative scores in HRQoL, and development of PJK. 10,20,41 And osteoporosis was related with PJK and PJF. 10,11,20 Especially, Yagi et al. 11 reported low BMD (T score < -1.5) was a significant risk factor for the incidence of PJF. In our study, older age was related with occurrence of mechanical complication, BMD was related with all types of complications, and BMI was related with occurrence of mechanical complication and PJK. Sexual difference was not related with occurrence of complications. In radiological assessments, postoperative SVA was related with occurrence of mechanical complication and PJK. The other postoperative sagittal parameters were not related with complications. And UIV and LIV were similar between the 2 groups as T11-12 and L5-S1. In SPF, there is no significant difference between the 2 groups, but it shows approaching an acceptable significance level. The results of our study were also similar to previous other studies ( Table 5).
The present study had several limitations. Because this was not a randomized and prospective study, but rather retrospective in design, a control population that received standard conservative care was not included. In addition, we did not control for selected surgical method or the period of preoperative conservative management. Meanwhile, the clinical score was not an absolute result because it was entirely patent specific. The images of the patients were measured by whole spine x-ray. Due to this, there may be some correction by the patient's position. Finally, the results of this study may be limited because it was conducted only in a single country and a single institution. Further studies are needed with multicenter, multinational, and multiracial data for more reliable results in the future.
CONCLUSION
The goal of ASD surgery is to improve patient life quality through correction. In our study, TPA was associated with spinopelvic parameter and clinical parameters related with pain, OD-HA angle was associated with clinical parameters with functional impairment and complications. OD-HA angle is predictable factor for mechanical complications after ASD surgery. | 2021-10-07T06:17:15.150Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "a936fc5d801a81d41f24c540a63dd5c294d087d3",
"oa_license": "CCBYNC",
"oa_url": "https://www.e-neurospine.org/upload/pdf/ns-2142452-226.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cc750503cff9d46f31aca5dd379fd9fcd73c13ac",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17387813 | pes2o/s2orc | v3-fos-license | Truncated Harmonic Osillator and Parasupersymmetric Quantum Mechanics
We discuss in detail the parasupersymmetric quantum mechanics of arbitrary order where the parasupersymmetry is between the normal bosons and those corresponding to the truncated harmonic oscillator. We show that even though the parasusy algebra is different from that of the usual parasusy quantum mechanics, still the consequences of the two are identical. We further show that the parasupersymmetric quantum mechanics of arbitrary order p can also be rewritten in terms of p supercharges (i.e. all of which obey $Q_i^{2} = 0$). However, the Hamiltonian cannot be expressed in a simple form in terms of the p supercharges except in a special case. A model of conformal parasupersymmetry is also discussed and it is shown that in this case, the p supercharges, the p conformal supercharges along with Hamiltonian H, conformal generator K and dilatation generator D form a closed algebra.
Hamiltonian cannot be expressed in a simple form in terms of the p supercharges except in a special case. A model of conformal parasupersymmetry is also discussed and it is shown that in this case, the p supercharges, the p conformal supercharges along with Hamiltonian H, conformal generator K and dilatation generator D form a closed algebra.
A great deal of attention is now being paid to study [1,2,3,4] quantum mechanics in a finite dmensional Hilbert space (FHS). In particular, we would like to mention the recent developments [2,3,4] in quantum phase theory which deals [2] with a quantized harmonic oscillator in a FHS and which finds interesting applications [4] in problems of quantum optics.
Recently, two of us (BB and PKR) studied [5] some basic properties of these oscillators. In particular it was pointed out that the raising and lowering operators of the truncated oscillator behave like parafermi osillator. Inspired by this similarity, a parasupersymmetric quantum mechanics (PSQM) of order 2 was also written down where the parasusy is between the usual bosons and the truncated bosons. However, the explicit form of the charge was not written down. Further the consequences were also not elaborated upon. The purpose of this note is to generalize this construction to arbitrary order. In particular, we show that for these PSQM models of arbitrary order p, the algebra is given by and the Hermitian conjugated relations and discuss their consequences in some detail. In particular, we show that the consequences following from this algebra are identical to those following from the well known PSQM model of the same order p [6] [7] even though the two algebras are different. In particular, whereas eq. (1) is identical in the two schemes, eq. (2) is different in the two schemes in the sence that in the well known case the coefficient on the r.h.s. is 2p instead of p(p+1) in eq. (2). In view of the identical consequences, it is worth examining as to why the PSQM of order p can be written down in an alternative way. To that end we show that one can infact express PSQM of order p in terms of p super (rather than parasuper) charges all of which satisfy Q 2 i =0 and further all of them commute with the Hamiltonian. However, unlike the usual supersymmetric (SUSY) quantum mechanics (QM), here H cannot be simply expressed in terms of the p supercharges except in a special case. In the special case we show that the Hamiltonian has a very simple expression in terms of the p supercharges We also discuss a para superconformal model of order p and show that the dilatation and conformal operators also can similarly be expressed in quadratic form in terms of the p SUSY and p para superconformal charges. Let us start with the truncated raising and lowering operatorss a + and a. It is well known that if one truncates at (p+1)'th level (p > 0 is an integer) then a and a + can be represented by (p + 1) × (p + 1) matrices and they satisfy the commutation relation [8] [a, a + ] = I − (p + 1)K (4) where I is (p + 1) × (p + 1) unit matrix while K = diag(0, 0, ..., 0, 1) with Ka = 0 and further K 2 = K = 0. As shown by Kleeman [9], the irreducible representations of eq.(4) are the same as those for the scheme A convenient set of representation of the matrices a and a + is given by where α, β = 1,2,...(p+1). As shown in [5], the nontrivial multilinear relation between a and a + is given by These relations are strikingly similar to those of parafermi oscillator of order p [7] except that in the later case, the coefficient on the right hand side is p(p+1)(p+2)/6 unlike p(p+1)/2 in eq. (8). As expected, for the case of the Fermi oscillator (p=1), both the coefficients are same while they are different otherwise.
Motivated by the nontrivial relation between a and a + as given by eq.(8) it is worth enquiring if one can construct a kind of PSQM of order p in which there will be symmetry between bosons and truncated bosons of order p. It turns out that the answer to the question is yes. In particular, on choosing the parasusy charges Q and Q + as (p + 1) × (p + 1) matrices as given by where b, b + denote the bosonic annihilation and creation operators and α, β = 1, 2, ..., (p + 1), so that Q and Q † automatically satisfy Q p+1 = 0 = (Q † ) p+1 . Further, it is easily shown that the Hamiltonian (h = m = 1) where (r = 1, 2, ..., p) Here C 1 , C 2 , ..., C p are arbitrary constants with the dimension of energy. It turns out that the nontrivial relation given by eq.(2) between Q, Q † and H is satisfied provided It is interesting to notice that the PARASUSY charge as well as the algebra as given by eqs.(1), (2), (9), (10) and (14) is very similar to that of standard PSQM of order p [7] except that in the standard case the coefficient on the r.h.s. of eq.(2) is 2p instead of p(p+1)/2 and instead of eq.(14) in the standard case one has Besides, unlike in eq. (9), in the standard case, Q is defined without the factor of √ α. However, the Hamiltonian and the relation between the superpotentials as given by eqs. (11) to (13) are identical in the two cases. As a result the consequences following from the two different PSQM scemes of order p are identical. In particular, as shown in [7], in both the cases (i)the spectrum is not necessarily positive semidefinite unlike in SUSY QM (ii) the spectrum is (p+1)-fold degenerate atleast above the first p levels while the ground state could be 1,2,...,p fold degenerate depending on the form of the superpotentials and (iii) one can associate p ordinary SUSY QM Hamiltonians. Why do the two seemingly different PSQM schemes give the same consequences? The point is that in the case of parasusy of order p, one has p independent parasusy charges and in the two schemes one has merely used two of the p independent forms of Q. It is then clear that one can infact define p seemingly different PSQM schemes of order p but all of them will have identical consequences. For example, the parafermi operators are usually defined by the following (p + 1) × (p + 1) matrices [7] (a) αβ = α(p − α + 1)δ α+1,β (16) So one could as well have defined the parasusy charges by instead of the usual choice without the squareroot factor [7]. It is easily shown that in this case too the parasusy charges Q and Q + satisfy the algebra as given by eqs. (1), (2) and (14) except that the factor on the r.h.s. of eq. (2) is now p(p+1)(p+2)/3 and the constants C i satisfy 2 ) = 0, p even (21) instead of eq. (14). However, as before the Hamiltonian and the relation between the various superpotentials is unaltered and hence one would get the same consequences as in the standard PSQM case [7].
At this stage it is worth asking if parasusy QM of order p can be put in an alternative form by making use of the fact that there are p independent parasupercharges [7]? If yes this would be analogous to the so called Green construction for parafermi and parabose operators [10]. We now show that the answer to the question is yes. Let us first note that the supercharge as given by eq. (9) can be written down as a linear combination of the following p supercharges [12] It is easily checked that these p charges Q j are infact supercharges in the sense that all of them satisfy Q 2 j = 0. Further, all of them commute with the Hamiltonian as given by eq. (11) provided condition (13) is satisfied. Besides they satisfy However, there is one respect in which these charges are different from the usual SUSY charges in that unlike in that case, the nontrivial relation of the usual parasusy algebra (i.e. eq. (2) but with 2p on the r.h.s. instead of p(p+1)) now contains product of all p charges i.e.
provided eq. (15) is satisfied. If one instead considers other versions of PSQM of order p then one would have similar relations but with different weight factors between the various terms and also different relations between C i which can easily be worked out.
There is one special case however when the algebra takes a particularly simple form. In particular when all the constants C i are zero then it is easily checked that the Hamiltonian can be written as a sum over quadratic pieces in Q as given by eq. (3) which is a generalization of the SUSY algebra in the case of p supercharges. In this case, clearly the spectrum is positive semidefinite and most of the results about SUSY breaking etc. would apply. Further all the excited states are always (p+1)-fold degenerate. It is amusing to note that in ortho supersymmetricsy QM too [11] the relation between H and charges is exactly as given by eq. (3).
Following the work of [7], we now consider a specific PSQM model of order p which in addition is conformally invariant and show that the conformal PSQM algebra is rather simple. Let us consider the choice Note that in this case the condition (13) is trivially satisfied when all C i are zero. The interesting point is that in this case, apart from the p parafermionic charges Q i , we can also define the dilatation operator D, the conformal operator K and p para superconformal charges S j so that they form a closed algebra. In particular, on defining D = − 1 4 (xP + P x); K = x 2 /2 (29) it is easy to show that the algebra satisfied by D, H and K is standard Besides, apart from the parasusy algebra as described above (with C i = 0), we have S i S j = 0 = Q i S j = S i Q j if j = i + 1 (34) It is quite remarkable that an identical algebra also follows in the case of the conformal ortho supersymmetric case [11]. | 2014-10-01T00:00:00.000Z | 1995-05-06T00:00:00.000 | {
"year": 1995,
"sha1": "3e59f89c661a0b84a11e12b0c34027dc03e5644c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9505042",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "29470900696dce36f898ed6e4158c96b05e4cc5f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
222134172 | pes2o/s2orc | v3-fos-license | All-Pass Filters for Mirroring Pairs of Complex-Conjugated Roots of Rational Matrix Functions
In this note, we construct real-valued all-pass filters for mirroring pairs of complex-conjugated determinantal roots of a matrix polynomial. This problem appears, e.g., when proving the spectral factorization theorem, or more recently in the literature on possibly non-invertible or possibly non-causal vector autoregressive moving average (VARMA) models. In general, it is not obvious whether the all-pass filter (and as a consequence the all-pass transformed matrix polynomial with real-valued coefficients) which mirrors complex-conjugated roots at the unit circle is real-valued. Naive constructions result in complex-valued all-pass filters which implies that the real-valued parameter space (usually relevant for estimation) is left.
Introduction
It is well known that there are multiple spectral factors which generate the same spectral density, see Baggio and Ferrante (2019) for a recent contribution. In the classical time series literature, (Rozanov, 1967;Hannan, 1970) use all-pass filters (also known as Blaschke filters) to mirror determinantal roots of spectral factors from inside to outside the unit circle when proving the spectral factorization theorem (for rational spectral densities). More recently, all-pass filters play an important role in the literature on possibly non-invertible or possibly non-causal vector autoregressive moving average (VARMA) models, see Lanne and Saikkonen (2013), Velasco and Lobato (2018), Funovits (2020).
It is not obvious whether the all-pass filter (and as a consequence the all-pass transformed polynomial or rational matrix with real-valued coefficients) which mirrors complex-conjugated roots at the unit circle is real-valued and, to the best of our knowledge, there is no proof of whether this is true available in the literature. Naive constructions, as in (Gouriéroux et al., 2019), result in complex-valued all-pass filters which implies that the real-valued parameter space (usually relevant for estimation) is left.
Here, we show how to obtain real-valued all-pass filters for mirroring pairs of complex-conjugated determinantal roots of a rational matrix function at the unit circle in two ways. Both constructions start from the QR decomposition of the real and imaginary part of a normalized vector in the (right-) kernel of a polynomial matrix p(z) evaluated at a determinantal zero with non-trivial imaginary part.
One approach parametrises consecutively unitary matrices and Blaschke factors in terms of the matrix R of the QR decomposition and the real and imaginary part of the determinantal root, say α + , of p(z).
This approach leaves the real-valued parameter space in intermediary steps and it is only ensured at the end that the parameter matrices are indeed real-valued The second and more elegant construction is based on state spaces methods and does not leave the real-valued parameter space.
The remainder of this note is structured as follows. In Section 2, we define all-pass filters and some special instances of all-pass filters that will appear in our derivations. In Section 3, we prepare the two approaches by discussing the QR decomposition of the real and imaginary part of a normalized vector in the (right-) kernel of p (α + ) and some implications. In Section 4, we parametrise unitary matrices and Blaschke matrices in terms of the elements of R and the real and imaginary part of α + such that their product has real coefficients and mirrors the given pair of complex-conjugated roots at the unit cirlce. In Section 5, we discuss the state space approach for constructing real-valued all-pass filters.
All-Pass Filters and Blaschke Matrices
A multivariate rational all-pass filter is an (n × n)-dimensional matrix V (z) whose entries are rational functions and which satisfies V (z)V * 1 z = V * 1 z V (z) = I n . The superscript asterisk takes an (arbitrary) matrix function m(z) = ∞ j=−∞ m j z j to its version with complex conjugated and transposed coefficient matrices, i.e. m * (z) = ∞ j=−∞ m * j z j .
An elementary Blaschke factor at α (which is obviously all-pass) is of the form 1 B(z, α) = 1−ᾱz −α+z . A squared Blaschke factor at the complex root α ± = α r ± iα i (in obvious notation) is defined as Lastly, a bivariate Blaschke factor pertaining to the pair of complex conjugated roots α ± = α r ± iα i , where α i > 0, and the non-zero vector w ∈ C 2×1 is given
QR Decomposition of Complex-Valued Right-Kernel
We start from a (normalized) vector v = v r + iv i ∈ C n×1 in the right-kernel of p (α + ), where v r , v i ∈ R n×1 and will eventually result in the transformed polynomial matrix where the orthogonal real matrixQ and the upper-triangular matrix R with positive diagonal elements will be constructed below. Note thatp(z)p 1 z = p(z)p 1 z holds.
In this case, we can apply the real-valued univariate all-pass function B sq (z, α ± ) straightforwardly. Thus, we assume in the following that the matrix v r v i has rank 2.
Starting from the QR decomposition as described above, it is possible to construct unitary (2 × 2)dimensional matrices V β , V γ , and V δ such that has real-valued coefficient matrices.
Last, V δ is chosen such that V β · B(1,α+) 0 0 1 · V γ · B(1,α−) 0 0 1 · V δ is equal to the identity matrix. Straight-forward computation verifies that the coefficient matrices in 3 Remember that for z = x+iy, the polar representation z = r·cos(φ)+i·r·sin(φ) can be obtained with r = x 2 + y 2 and, for r > 0, φ = arccos x r when y > 0 and φ = − arccos x r when y ≤ 0. Note that a c is always positive for us by construction.
The most elegant approach to construct a polynomial matrix b(z) with real coefficients (such that p(z)b(z)a −1 (z) is real as well) is a state space construction. In the following, we construct the matrices (A, B, C, D) in the state space representation of the (2 × 2)-dimensional, real-valued, rational, all-pass
Fixing the Poles of the All-Pass Filter: Determining A
The eigenvalues of A are equal to the inverse of the determinantal roots of (I n −Az), i.e. the eigenvalues of A = λr λi −λi λr are λ + = λ r + iλ i = α −1 + = (α r + iα i ) −1 and λ + . Of course, the zeros of a(z), i.e.
5.2
Fixing the Column-Space at α ± : Determining C Next, we determine C such that the column-space of b (α + ) is spanned by a given column vector Therefore, we set C iλi −λi = λ i w −1 w, i.e. C = w −1 ( wi −wr ) where w r , w i denote the real and imaginary parts of w.
Ensuring All-Pass Property: Determining B and D
Finally, we construct B, D (for given A, C) 4 such that the rational matrix B 2 (z, α ± , w) is indeed allpass. If (A, B, C, D) is a state space realization of the real-valued all-pass filter B 2 (z, α ± , w), then the product B ′ 2 1 z , α ± , w B 2 (z, α ± , w) (and thus B 2 (z, α ± , w)B ′ 2 1 z , α ± , w ) has a realization 5 given by In order to make the (1,1) block non-controllable and the (2,2) block non-observable, we need to set the (1,2), the (1,3) and the (3,2) block on the right-hand-side equal to zero. That is, 4 Equivalently, matrices C, D of an all-pass filter which is left-multiplied on a polynomial matrix to mirror roots could be constructed from given A, B.
5 In general, the multiplication of two rational functions k 1 (z) and k 2 (z) of appropriate dimensions and parametrized as two state space systems The rational function B ′ 2 1 z , α ± , w may be represented as First, we obtain X as solution of the Lyapunov equation obtained from block (1,2). Second, we obtain B as a function of D from the equation obtained from block (1,3), which contains the same information as the equation obtained from block (3,2). In particular, we obtain that B(D) = −X −1 A ′−1 C ′ D.
The (3,3) block needs to satisfy
Together with the above, we have thus and may obtain D from a Cholesky decomposition of I n + CA −1 X −1 A ′−1 C ′ .
Summary of State Space Construction
It follows that we end up with the system | 2020-10-06T01:01:04.616Z | 2020-10-04T00:00:00.000 | {
"year": 2020,
"sha1": "87a2707bcc3040e5eae65df6a86a29a8f867300c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "87a2707bcc3040e5eae65df6a86a29a8f867300c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
53225782 | pes2o/s2orc | v3-fos-license | Arrhythmia initiation in catecholaminergic polymorphic ventricular tachycardia type 1 depends on both heart rate and sympathetic stimulation
Aims Catecholaminergic polymorphic ventricular tachycardia type 1 (CPVT1) predisposes to ventricular tachyarrhythmias (VTs) during high heart rates due to physical or psychological stress. The essential role of catecholaminergic effects on ventricular cardiomyocytes in this situation is well documented, but the importance of heart rate per se for arrhythmia initiation in CPVT1 is largely unexplored. Methods and results Sixteen CPVT1 patients performed a bicycle stress-test. Occurrence of VT triggers, i.e. premature ventricular complexes (PVC), depended on high heart rate, with individual thresholds. Atrial pacing above the individual PVC threshold in three patients did not induce PVCs. The underlying mechanism for the clinical observation was explored using cardiomyocytes from mice with the RyR2-R2474S (RyR2-RS) mutation, which exhibit exercise-induced VTs. While rapid pacing increased the number of Ca2+ waves in both RyR2-RS and wild-type (p<0.05), β-adrenoceptor (βAR) stimulation induced more Ca2+ waves in RyR2-RS (p<0.05). Notably, Ca2+ waves occurred despite decreased sarcoplasmic reticulum (SR) Ca2+ content in RyR2-RS (p<0.05), suggesting increased cytosolic RyR2 Ca2+ sensitivity. A computational model of mouse ventricular cardiomyocyte electrophysiology reproduced the cellular CPVT1 phenotype when RyR2 Ca2+ sensitivity was increased. Importantly, diastolic fluctuations in phosphorylation of RyR2 and SR Ca2+ content determined Ca2+ wave initiation. These factors were modulated towards increased propensity for arrhythmia initiation by increased pacing rates, but even more by βAR stimulation. Conclusion In CPVT1, VT propensity depends on individual heart rate thresholds for PVCs. Through converging data from clinical exercise stress-testing, cellular studies and computational modelling, we confirm the heart rate-independent pro-arrhythmic effects of βAR stimulation in CPVT1, but also identify an independent and synergistic contribution from effects of high heart rate.
Introduction
Patients with catecholaminergic polymorphic ventricular tachycardia (CPVT) have an increased risk of sudden cardiac death due to ventricular arrhythmias. The mortality rate in untreated patients is 30-33% by the age of 35. [1] Current treatment options comprise β-adrenoceptor (βAR) antagonists and flecainide. [2][3][4] In earlier studies, as many as 46% of patients treated with βAR antagonists experienced breakthrough ventricular tachycardias (VT). [5] Flecainide offers effective added protection, but some patients still experience breakthrough VTs even on combined treatment. [2,6,7] In such patients, or patients who do not tolerate treatment with βAR antagonists, left cardiac sympathetic denervation can be an option. [8][9][10] If serious arrhythmic events occur despite optimal medical treatment, an implantable cardioverter defibrillator (ICD) is recommended, [8] but involves a risk of inappropriate shocks that can lead to patient distress and initiate VT and death. [3] Thus, new therapeutic strategies are needed, based on improved mechanistic insight.
Stress testing of patients with CPVT, a central diagnostic strategy, shows a relationship between increasing heart rate and the occurrence of ventricular ectopy. [1,5] More severe arrhythmias are often observed during high heart rates, such as sustained VTs appearing above a certain heart rate threshold. [1] βAR stimulation has been identified as an important factor for the development of arrhythmias in CPVT, and catecholamines, i.e. adrenaline [11] or isoprenaline (ISO), infusion [12] has been used as a stress test in CPVT. However, the diagnostic polymorphic non-sustained VT was only induced in 31% of patients with mutations pathogenic for CPVT. [5] In vivo, βAR stimulation increases heart rate, [13] but also has important non-chronotropic, i.e. heart-rate independent, effects on ventricular cardiomyocytes. On the other hand, increased heart rate has important effects on ventricular cardiomyocytes that are independent of βAR stimulation. Therefore, clarifying the relative importance of the heart rate and sympathetic activity for development of arrhythmias in CPVT could have important implications for diagnostic procedures and treatment strategies.
The focus of this study is CPVT type 1 (CPVT1), caused by mutations in the gene encoding the major intracellular cardiac Ca 2+ release channel, i.e. the ryanodine receptor 2 (RyR2). [14] RyR2 mutations in patients with CPVT1 cause pathological Ca 2+ leak from the sarcoplasmic reticulum (SR) in ventricular cardiomyocytes. [15,16] Diastolic SR Ca 2+ leak may lead to delayed afterdepolarization (DAD) and trigger ventricular arrhythmias. [15] Theoretically, βAR stimulation and high heart rate can increase the amplitude of DADs, and promote triggered activity. [17] Accumulating evidence indicates that Ca 2+ /calmodulin-dependent protein kinase II (CaMKII) could be a common mediator for the effects of both heart rate and βAR stimulation. [18,19] CaMKII-dependent phosphorylation increases RyR2 channel opening probability, and thus the propensity for increased SR Ca 2+ leak and arrhythmogenic Ca 2+ waves. [20] Indeed, inhibition of CaMKII has proved beneficial in models of CPVT1. [19] We hypothesized that both heart rate and βAR stimulation contribute independently to the development of ventricular arrhythmias in CPVT1. We tested this hypothesis by combining observations from patients, cellular experiments and mathematical modeling.
Patients and patient data
Patients with genetically confirmed CPVT1 were included through the Department of Cardiology, Oslo University Hospital Rikshospitalet. The study was approved by the Regional Committee for Medical and Health Research Ethics (REC-South-East; REC ID 201772 / 2011-19297), and conformed to the declaration of Helsinki. Written informed consent was obtained from all enrolled patients.
Sixteen patients performed standardized bicycle stress testing using a protocol previously described. [21,22] Briefly, 12-lead ECGs were recorded during bicycling with increasing workload (Schiller CS-200 Ergo-Spiro, Diacor), starting at 25 W with stepwise increase until exhaustion. One to four tests per patient were included in the study. The threshold heart rate for ventricular arrhythmias in individual patients was defined as the heart rate at which premature ventricular complexes (PVC) occurred as bigeminy, couplets, or VT during stress testing. If patients did not develop any of these arrhythmic events, the threshold was set as the heart rate were single PVCs occurred.
Three patients with ICDs volunteered for an ICD-based pacing protocol following the bicycle stress test. In accordance with approval from the regional Ethical Committee, the pacing procedure was performed as part of the standard follow-up of these patients, and with a minimum of intervention. We wanted to assess the heart rate for start of ventricular arrhythmias before the pacing, to be able to choose the correct rate. Therefore, the exercise stress test had to be performed first and according to standard follow-up protocol. After cessation of the exercise test patients rested in the supine position until recovery of baseline heart rate, and for at least 10 minutes before the pacing procedure was performed as an add-on to their standard ICD control. Electrical pacing through the atrial electrode was performed for 30 s at 5-10 beats per minute (b.p.m.) above the individual threshold heart rate for ventricular arrhythmias identified during the bicycle stress test. A 12-lead ECG was recorded continuously during the ICD-pacing protocol.
Animal model of CPVT1
This project was approved by the Norwegian National Committee for Animal Welfare under the Norwegian Animal Welfare Act (FOTS ID: 7169, 5669), and conformed to the National Institute of Health guidelines (NIH publication No. 85-23, revised 1996, US). The generation of knock-in mice with a human CPVT1 causative RyR2-R2474S (RyR2-RS) mutation used in this study has been described previously. [16]
Western blots
Hearts used for protein analysis were mounted on a modified Langendorff setup, and perfused through the aorta with a 37˚C modified Hepes-Tyrode's solution. The hearts were then paced at 4 or 8 Hz for three min, i.e. the same duration as the protocol for cellular experiments. This frequency was based on pilot experiments, and chosen to allow stable pacing. The frequency of activation was confirmed by simultaneous ECG recordings by electrodes from telemetric ECG transmitters (Data Sciences International, St. Paul, USA). After three min of pacing, the left ventricle was isolated, rapidly frozen in liquid nitrogen, and stored at -80˚C. For βAR stimulation, hearts were perfused with ISO (200 nM) for 1 min.
Western blotting was performed with total protein homogenates from left ventricles, as previously described. [23]
Computer model
To quantitatively explore the effects of heart rate and βAR stimulation on Ca 2+ handling in ventricular myocytes, we employed a computational model of mouse ventricular myocyte electrophysiology previously published by Morotti et al. [24] This model includes detailed representations of all membrane ion channels, as well as phospholemman, RyR2, the sarcoplasmic reticulum Ca 2+ ATPase, phospholamban (PLB) and Troponin I. Importantly, this model also includes detailed and dynamic representations of protein kinase A (PKA) and CaMKII activity and their regulation of these ion channels and Ca 2+ handling proteins. To model RyR2-RS cardiomyocytes, the Morotti computational model was only altered by increasing RyR2 luminal Ca 2+ sensitivity until the model reproduced Ca 2+ wave frequency and latency measured in RyR2-RS during cellular experiments.
Briefly, the Morotti RyR2 formulation is an extension of the 4-state model of Shannon et al. [25] for which RyR2 SR luminal Ca 2+ sensitivity is calculated as a sigmoidal function of the luminal Ca 2+ concentration. This sensitivity can be modulated by the half maximal effective concentration (EC50) for luminal Ca 2+ , which increases the RyR2 closed-to-open transition rate, and reciprocally reduces the open-to-inactive transition rate. The EC50 was the only parameter we modulated to fit experimental RyR2-RS data. We simulated a range of increased Ca 2+ sensitivities, from 2-30% above the value in the original Morotti model, to identify the Ca 2+ sensitivity that best fit the experimental data on the latency of Ca 2+ wave after pacing and the number of Ca 2+ waves in the post-pacing period, and remained in qualitative agreement with steady state Ca 2+ handling. The simulations for RyR2-RS presented in this article were all run with a constant EC50 value, which was decreased by 10% compared to WT.
Statistics
Results are reported as mean ± standard error of mean (SEM). Ca 2+ sparks data are reported by density plots using kernel density smoothing with a bandwidth of 0.3 in R software (version 3.0.2, The R Foundation for Statistical Computing). ANOVA or Nested ANOVA analysis with Bonferroni corrections were used as appropriate for analysis of RR-intervals in CPVT1 patients and cellular experiments, except analysis of Ca 2+ spark frequency for which Poisson analysis was used to adjust for a skewed distribution. P<0.05 was considered statistically significant.
Ventricular arrhythmias in patients with CPVT1
were associated with increased heart rates during bicycle testing, but not during direct pacing Sixteen patients (39±4 y, 56% women) diagnosed with CPVT1 were included. All patients were positive for CPVT-associated RyR2 mutations, two were probands and 14 were identified as part of family screening. Of these, 11 patients (69%) had symptoms associated with arrhythmias. All patients had been evaluated according to guidelines with echocardiography, and seven patients (44%) also with cardiac MR as part of the initial evaluation. (Table 1).
ECGs were recorded during bicycle stress tests from all patients. When available, we included results from multiple stress tests per patient. No patients exhibited PVCs or ventricular arrhythmias at rest, and the mean RR-interval at baseline was longer than the RR-interval immediately preceding any of the ventricular arrhythmias observed (p<0.05, Fig 1A-1C). This illustration of the heart rate dependence of ventricular arrhythmias in CPVT1 was seen both in untreated patients and in patients treated with a beta-adrenoceptor antagonist ( Fig 1C).
In addition to the sixteen patients included based on available data from bicycle stress testing, we included three patients harboring an ICD implanted due to CPVT1 (Table 1). To test the importance of heart rate, we performed atrial pacing at rest in these patients. The pacing protocol was performed following bicycle testing after complete recovery to baseline heart rate. Atrial pacing was performed at 5-10 b.p.m. above the threshold for occurrence of PVCs during the bicycle stress test. None of the patients developed PVCs during pacing through the ICD (Fig 1D).
βAR stimulation revealed an increased propensity for arrhythmogenic Ca 2+ release events in RyR2-RS mouse left ventricular cardiomyocytes
The propensity for diastolic Ca 2+ waves in RyR2-RS and WT left ventricular cardiomyocytes was measured in a 10 s period after stable Ca 2+ transients, which had been induced by field stimulation for 30 s (Fig 2A). In absence of βAR stimulation, progressive increase of the pacing frequency raised the number of Ca 2+ waves in both RyR2-RS and WT (p<0.05), but did not reveal any differences between RyR2-RS and WT ( Fig 2B). βAR stimulation, however, resulted in an increased number of Ca 2+ waves in RyR2-RS compared to WT both at 0.5 and 4 Hz pacing (p<0.05, Fig 2B).
Another measurement of Ca 2+ wave propensity is the time to occurrence of the first Ca 2+ wave upon secession of pacing, i.e. post-pacing Ca 2+ wave latency. This period decreased with pacing frequency in both RyR2-RS and WT (p<0.05), but was shorter in RyR2-RS than WT at 4 and 8 Hz both in the absence and presence of βAR stimulation (p<0.05, Fig 2C).
To test the dose-response relationship between βAR stimulation and the number of Ca 2+ waves in the post-pacing period, cardiomyocytes were exposed to 0, 2, 20 and 200 nM ISO (4 Hz pacing, Fig 2D). Increasing ISO concentrations resulted in increased frequency of Ca 2+ waves in both WT and RyR2-RS (p<0.05). Overall, the frequency of Ca 2+ waves was higher in RyR2-RS than WT across ISO concentrations (p<0.05).
To further characterize diastolic SR Ca 2+ release in RyR2-RS and WT, Ca 2+ sparks were recorded by confocal microscopy (Fig 3A and 3B). In the absence of βAR stimulation, the propensity for Ca 2+ sparks was low in both RyR2-RS and WT (Fig 3C), while during βAR stimulation the number of Ca 2+ sparks increased in both groups and was higher in RyR2-RS compared to WT (p<0.05, Fig 3D).
SR Ca 2+ content and threshold for Ca 2+ waves were lower in RyR2-RS than WT
The effects of pacing and βAR stimulation on Ca 2+ release and removal was further investigated by characterization of key aspects of Ca 2+ handling in isolated left ventricular cardiomyocytes. This was studied by pacing and caffeine induced Ca 2+ transients in RyR2-RS and WT (Fig 4A-4D). In absence of βAR stimulation, RyR2-RS developed higher Ca 2+ transient amplitudes compared to WT at 4 Hz, while at 0.5 and 8 Hz, no significant differences were found (Fig 4E). Following βAR stimulation, however, the Ca 2+ transient amplitude was lower in RyR2-RS compared to WT at all pacing frequencies (p<0.05, Fig 4E). In line with this, SR Ca 2+ content in the absence of βAR stimulation was not significantly different in RyR2-RS and WT across pacing frequencies (Fig 4F), while following βAR stimulation, SR Ca 2+ content was lower in RyR2-RS than WT overall (p<0.05), and at both 0.5 and 4 Hz ( Fig 4F, p<0.05). These differences could not be explained by refilling of the SR, as cytosolic Ca 2+ removal rate, a key determinant for the Ca 2+ transient amplitude and SR Ca 2+ content, was not significantly different between RyR2-RS and WT ( Fig 4G). Next, threshold SR Ca 2+ content, defined as SR Ca 2+ content at which Ca 2+ waves occurred, was assessed (Fig 4D and 4H). Overall, Ca 2+ waves developed at a lower SR Ca 2+ content in RyR2-RS than in WT both in the absence (p<0.05) and presence of βAR stimulation (p<0.05).
Measurements of abundance of Ca 2+ handling proteins did not reveal differences between RyR2-RS and WT
The abundance of key Ca 2+ handling proteins and phosphoproteins were quantified in left ventricular tissue from Langendorff perfused and paced hearts (Fig 5 and S1 Fig). The only observed difference between the RyR-RS and WT was higher abundance of CaMKII phosphorylated at threonine286, i.e. autophosphorylated CaMKII, in RyR2-RS compared to WT at 8 Hz in the absence of βAR stimulation (p<0.05, Fig 5A). No significant differences between RyR2-RS and WT were observed with regard to the abundance of RyR2 phospho-serine2808, RyR2 phospho-serine2814, PLB phospho-serine16 or PLB phospho-threonine17 at 4 or 8 Hz pacing (Fig 5B-5E). SERCA2a abundance was also similar in RyR2-RS and WT (Fig 5F). Importantly for quality control, βAR stimulation increased PLB phospho-serine16 in both RyR2-RS and WT (p<0.05, Fig 5B and 5D).
Computer simulations support that pacing rate and βAR stimulation have independent effects on the propensity for Ca 2+ waves in both RyR2-RS and WT
A computational model of mouse ventricular cardiomyocyte electrophysiology and ion homeostasis was employed to deconvolve the factors underlying the effects of pacing frequency and βAR stimulation on Ca 2+ wave propensity. [24] With a 10% increase in RyR2 luminal Ca 2+ sensitivity, the model reproduced the effects of pacing and βAR stimulation on Ca 2+ wave frequency and latency in a 10 s post pacing period (S2 Fig). The model allowed a continuous readout of intracellular Ca 2+ , SR Ca 2+ content, CaMKII activity, and level of CaMKII-dependent phosphorylation of RyR2 at serine 2814. The following observations were made regarding the development of Ca 2+ waves in the post-pacing period (Fig 6): First, at 0.5 Hz βAR stimulation was necessary for Ca 2+ waves to occur in both RyR2-RS and WT (Fig 6A, left vs. right upper panels). Second, when all effects of CaMKII was removed or RyR2 phosphorylation level was kept at baseline, i.e. without effects of pacing or βAR stimulation, Ca 2+ wave propensity decreased in both RyR2-RS and WT (Fig 6A, red and green lines). Third, higher pacing frequency increased the propensity for Ca 2+ waves by increased activation of CaMKII and phosphorylation of RyR2 (Fig 6B, left vs right panels). However, higher frequency also increased the propensity for Ca 2+ waves by increasing SR Ca 2+ content, but only in WT ( Fig 6B). Fourth, the main effect of CaMKII on Ca 2+ wave propensity depended on the time after cessation of pacing: Early after pacing, CaMKII activity and RyR2 phosphorylation were high, Caffeine was added immediately after the occurrence of a Ca 2+ wave. This protocol was used for measurements of threshold SR Ca 2+ content. Peak fluorescence intensity was used for measurements of SR Ca 2+ content. (E) Bar graphs allowing Ca 2+ waves to initiate at a low SR Ca 2+ content. Ca 2+ waves occurred earlier in RyR2-RS because the SR Ca 2+ threshold for Ca 2+ waves was intrinsically lower than in WT, i.e. the SR Ca 2+ content at the time of the first Ca 2+ wave was lower in RyR2-RS (6A-C). Later in the diastolic period, the degree of RyR2 phosphorylation decayed more rapidly than the SR Ca 2+ refilling, and further Ca 2+ waves could only be generated by increased SR Ca 2+ content caused by reduced CaMKII-dependent RyR2 phosphorylation (Fig 6C), and subsequent reduced SR Ca 2+ leak. With removal of CaMKII-dependent effects on Ca 2+ handling or by keeping RyR2 phosphorylation level at the baseline for the model, the threshold SR Ca 2+ content for waves was high even in the early phase of diastole, and time for refilling dominated the Ca 2+ wave latency (Fig 6A and 6C).
Discussion
Our study confirms that ventricular arrhythmias in patients with CPVT1 are associated with increasing heart rate during exercise testing, as previously reported. [1,21] However, comparisons of bicycle testing to ICD pacing indicate that increased heart rate by itself is not sufficient to induce arrhythmias. Experiments with ventricular cardiomyocytes corroborated this conclusion: while high pacing frequencies increased the number of Ca 2+ waves in both RyR2-RS and WT, βAR stimulation was necessary to reveal the increased propensity for Ca 2+ waves associated with CPVT1 in RyR2-RS. Computer simulations of CPVT1 cardiomyocytes further strengthened these findings, and show that although higher pacing frequency promotes Ca 2+ wave development, the effects of βAR stimulation on SR Ca 2+ release dynamics are necessary to allow increased propensity for Ca 2+ waves during high pacing frequencies in RyR-RS cardiomyocytes.
Previous studies have established that βAR stimulation increases the degree of SR Ca 2+ leak and the propensity for arrhythmias in CPVT1, [26] while the effect of pacing frequency per se has not yet been studied in this condition. Based on previous studies, we had reason to believe that frequency could promote SR Ca 2+ leak, and that this could be partly CaMKII-dependent: In ventricular cardiomyocytes, both high heart rate and βAR stimulation increases the activity of CaMKII, which has been shown to increase SR Ca 2+ leak. [18,19] The mechanism for activation of CaMKII during increased pacing frequencies is high cytosolic Ca 2+ concentration. [27] The importance of CaMKII-dependent SR Ca 2+ leak in CPVT1 is further indicated by the fact that CaMKII-inhibition by KN-93 or autocamtide-1 related inhibitory peptide reduced spontaneous Ca 2+ release in ventricular cardiomyocytes from mice with the CPVT1-causative RyR2-R4496C mutation. [19] Our results support an important role for CaMKII in RyR2-RS, although the mechanism is somewhat more complex than previously hypothesized.
SR Ca 2+ leak is highly dependent on SR Ca 2+ content. In our study, SR Ca 2+ content did not change with increasing pacing frequency or βAR stimulation in RyR2-RS cardiomyocytes. However, compared to WT, Ca 2+ transient amplitude in RyR2-RS went from equal or higher without βAR stimulation, to lower during βAR stimulation. This is in contrast to findings in showing mean Ca 2+ transient amplitude at different pacing frequencies in absence and presence of ISO. Analyzed by Nested ANOVA with data from 8-10 RyR2-RS mice and 7-19 WT mice per bar (29- RyR2-R4496C cardiomyocytes in which Ca 2+ transient amplitude was not changed by βAR stimulation. [19] One explanation for our findings is that the RyR2 is slightly sensitized even in absence of βAR stimulation, resulting in an increased fractional release even at baseline. Indeed, SR Ca 2+ threshold for Ca 2+ waves in RyR2-RS was lower even in absence of βAR stimulation. Thus, one interpretation of our results is that when βAR stimulation is added, SR Ca 2+ leak increases more than Ca 2+ homeostatic mechanisms can compensate for, resulting in decreased SR Ca 2+ content and Ca 2+ transient amplitude.
We used the computational model to further elucidate the effects of βAR stimulation: The model shows that in absence of βAR stimulation, the rate of SR Ca 2+ refilling is insufficient to increase the propensity for Ca 2+ waves. However, in presence of βAR stimulation, SR Ca 2+ refilling in combination with a decreased threshold for SR Ca 2+ release is sufficiently fast for early initiation of Ca 2+ waves. These results are in line with conclusions from the RyR2-R4496C CPVT1 mouse model, [26].
Still, two possible mechanisms for the increased number of Ca 2+ waves seen during βAR stimulation: either that βAR stimulation is necessary to increase SR Ca 2+ content sufficiently for release, or that such stimulation further destabilizes RyR2 and thereby decreases the SR Ca 2+ threshold for Ca 2+ release. Our experimental results did not show a change in this threshold in RyR2-RS cardiomyocytes during βAR stimulation compared to baseline. This could indicate that the main effect of βAR stimulation is to increase SR Ca 2+ content sufficiently for release. Interestingly, this interpretation is partly supported by our computational model: The major observation across simulations was that the propensity for Ca 2+ waves to occur depended most critically on two factors that vary in time during the diastolic interval. The first factor is the degree of CaMKII phosphorylation at RyR2. This is because, in this model, increasing CaMKII-dependent phosphorylation of RyR2 reduces the threshold SR Ca 2+ load for a Ca 2+ wave. The degree of phosphorylation at any time after a beat depends on both the peak level achieved during pacing, and the rate of dephosphorylation in the period after the beat. The second factor is the rate at which SR Ca 2+ load is restored during the diastolic interval. Because RyR2 dephosphorylation dynamically increases the threshold SR Ca 2+ content after each beat, while the SR is simultaneously refilling with Ca 2+ , it is the combination of these two dynamic effects that determines when a Ca 2+ wave will occur in this model. βAR stimulation promotes Ca 2+ waves because it both increases RyR phosphorylation by CaMKII, and dramatically increases the rate of SR Ca 2+ refilling. Increased pacing frequency also exaggerates both of these effects, but more modestly.
A potentially important observation from our computational model with regard to diastolic Ca 2+ release is that the effects of CaMKII may be highly dynamic in cardiac myocytes, even during an individual cardiac contraction-relaxation cycle, which might explain why an increase in CaMKII-dependent phosphorylation of RyR2-Ser2814 during ISO stimulation was not detected by immunoblotting. However, changes in RyR2-Ser2814 phosphorylation have been well documented in chronic disease models by previous studies. [18] While the relevance of the observations made from pause-induced release experiments (and simulations) to clinical exercise stress tests is limited, the finding that RyR2 phosphorylation is important in early diastole may help to explain why blockade of CaMKII alleviates the pro-arrhythmia associated with CPVT1-causative mutations. [19] The discussion above illustrates the complexity of Ca 2+ homeostasis in ventricular cardiomyocytes at the core of arrhythmia development in CPVT1. However, the complete understanding of arrhythmia development even in this monogenic disease requires even further levels of complexity as both βAR stimulation and heart rate also affects the propensity for DADs to trigger action potentials and the propensity for development of VT. [16,28] These aspects are beyond the scope of our study. Also important are electrophysiological differences between the human and mouse heart for interpretation of our data: In humans, SR Ca 2+ content increases with increasing pacing frequencies, contributing to a positive force-frequency relationship; in contrast, mice exhibit a less steep or no increase in SR Ca 2+ content during increased pacing frequencies. [29] Furthermore, due to a 10-fold faster heart rate and shorter action potentials the mouse is different from the human heart. To capture spontaneous Ca 2+ release events occurring during diastole in isolated myocytes, experiments were performed at artificially slow pacing frequencies compared to in vivo heart rates for mice. Because relatively slow pacing can affect the probability of these events, further validation in human cardiomyocytes and in vivo models is warranted.
In our study, direct observations of heart rate effects in patients were made in the ICDbased pacing procedure, which indicated that increased heart rate per se was not sufficient to induce arrhythmias. However, the pacing protocol have important limitations: F.ex., during the pacing protocol, heart rate was increased for a shorter time than during the bicycle exercise test. We can only speculate that this could affect CaMKII activation. Our results show that the untangling of the effects of cathecolamines and heart rate should be further pursued in future studies, including extended and comprehensive pacing procedures.
In conclusion, our clinical and experimental data, as well as computational modelling, show that increased heart rate and heart rate-independent effects of βAR stimulation on ventricular cardiomyocytes combine to increase the risk of arrhythmias, with βAR stimulation being the necessary factor to reveal the CPVT1 phenotype in RyR2-R2474S. These data supports that the mainstay of treatment for CPVT1 is antagonism of βAR stimulation in ventricular cardiomyocytes. | 2018-11-15T17:36:51.789Z | 2018-11-06T00:00:00.000 | {
"year": 2018,
"sha1": "b078ea655fdd5e72ca9cc072e7ed7728b77a7999",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0207100&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b078ea655fdd5e72ca9cc072e7ed7728b77a7999",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
4772997 | pes2o/s2orc | v3-fos-license | A Variable Neighborhood Search for Flying Sidekick Traveling Salesman Problem
The efficiency and dynamism of Unmanned Aerial Vehicles (UAVs), or drones, present substantial application opportunities in several industries in the last years. Notably, the logistic companies gave close attention to these vehicles envisioning reduce delivery time and operational cost. A variant of the Traveling Salesman Problem (TSP) called Flying Sidekick Traveling Salesman Problem (FSTSP) was introduced involving drone-assisted parcel delivery. The drone is launched from the truck, proceeds to deliver parcels to a customer and then is recovered by the truck in a third location. While the drone travels through a trip, the truck delivers parcels to other customers as long as the drone has enough battery to hover waiting for the truck. This work proposes a hybrid heuristic that the initial solution is created from the optimal TSP solution reached by a TSP solver. Next, an implementation of the General Variable Neighborhood Search is used to obtain the delivery routes of truck and drone. Computational experiments show the potential of the algorithm to improve the delivery time significantly. Furthermore, we provide a new set of instances based on well-known TSPLIB instances.
Introduction
According to 2017 Global Online Consumer Report (KPMG, 2017) for 43% of the customers, one of the most important characteristic when deciding where to buy is the enhanced delivery options. This characteristic is because the millenniums -people born between 1980 and 2000 -have a higher demand for instant satisfaction than earlier generations. Even though they are increasingly comfortable with buying products online, they are more likely to visit shops to get the product right away, rather than await delivery. For this reason, companies need to continually create ingenious ways to shorten delivery times and therefore satisfy the needs of customers. One of the most innovative announcements envisioning the improvement of delivery process occurred in December 2013 by Amazon's CEO Jeff Bezos, who declared that one of the biggest e-commerce company was testing drone parcel delivery. The project has described a drone (generically known as Unmanned Aerial Vehicles -UAV) that departs from a warehouse loaded with a parcel, travels to the customer's location where it drops the container, and then returns to the warehouse. This operation demands no human intervention or guidance. Since Amazon's announcement, many companies started their drone delivery projects. Google calls Project Wing the research team responsible for developing technologies to make this drone delivery possible. Also in 2013 DHL launched a project called Parcelcopter that in 2016 accomplished more than 100 successful deliveries, including deliveries to remote villages within 30 minutes, which is faster than transporting them across steep terrains in a car (Burgess, 2016). Another successful company is JD.com, China's second-largest e-commerce, that is developing a drone capable of delivering packages weighing as much as one metric ton throughout rural areas of the country flying a total of 100km before recharging (Meredith and Kharpal, 2017). Moreover, the automobile company Mercedes-Benz teamed up with the drone logistic company Matternet to start a project based on a van drone delivery concept. The project is a combination of work between van and drone in which the drone reads the destination information using a QR code on the package; then it proceeds to deliver the goods with a speed of up to 70km/h and endurance of 20km (Etherington, 2017).
Even though drones are fast, their payload capacity is minimal, and they are restricted by a limited endurance, this means that after each visit the drone has to return to the depot, which is not efficient. On the other hand, trucks are heavy and slow, but they can carry numerous parcels. This information is summarized in Table1 defined by Agatz et al. (2016). The high speed of the drone and the large capacity of the truck are complementary features of each vehicle that can make the delivery process more efficient and, possibly, at a lower cost. A possible form to take advantage of these attributes is combining the work of drone and truck to attend all customers. Hereafter is a brief description of the delivery model assumed in this work which was introduced by Murray and Chu (2015) known as Flying Sidekick Traveling Salesman Problem (FSTSP). Primarily, the drone is launched from the truck, then proceeds to deliver goods to a customer and finally joins back the truck in a third location. While the drone goes to a customer, the truck can travel to deliver parcels to other customers. Moreover, the truck has to travel to the returning customer before finishes the endurance of the drone.
The contributions of this work concern in combining the best features of each vehicle to build a good truck and drone delivery route to parcel delivery. The proposed heuristic addresses two TSP variants using drones and trucks working collaboratively, the problem presented by Murray and Chu (2015) and the one of Agatz et al. (2016). Furthermore, a new set of instances based on TSPLIB instances for Traveling Salesman Problem (TSP) is advised. Overall, the results obtained by the algorithm demonstrate the effectiveness of combining truck and drone for last mile parcel delivery. Ulmer and Thomas (2017) present a different approach to the problem, a Same Day Delivery Problem (SSDP). For each ordering customer, the provider must decide if the order will be performed in the same day or not and for which vehicle, the truck or the drone. The authors define a Markov decision process model to describe the problem and use a policy function approximation (PFA) to determine the best values for a parameterized policy. The parameter is a threshold of travel distance used to define which vehicle will attend each customer. Trucks preferably serve the customers in the zone of the threshold, and drones preferably attend customers with further distances. The results show that the districting by the proposed PFA is beneficial once it is possible to increase the number of services by the fleet significantly. Wang et al. (2017) study the Vehicle Routing Problem with Drones (VRPD) from a worst-case point of view. The paper describes several theorems formulation for the vehicle routing problem with drones and represents bounds on maximal savings to the companies. Poikonen et al. (2017) enlarger the description of the theorems comparing different drone configurations in the delivery process to determine the maximum benefit. For example, the trade-off between speed and the number of drones, i.e., they compare what is better, a more substantial number of slower drones or a smaller number of faster drones. Pugliese and Guerriero (2017) provide an Integer Programming formulation for the Vehicle Drone Routing Problem with Time Windows (VDRPTW) with the objective function of minimizing the total transportation cost. In this work, the drone has a waiting time limit of the truck after performing a delivery, which prevents it from being idle. Additionally, after delivery, the drone must return to the truck it was launched that is located at another customer.
The problem introduced by Daknama and Kraus (2017) uses several trucks and drones traveling along a route. Trucks follow Manhattan metrics while drones follow Euclidean metrics. A local search was implemented in a framework called JAMES, customized with appropriate neighborhoods.
The primary goal of Goodchild and Toy (2017)'s paper is to determine whether or not drone technology in the logistics industry would have a positive environmental impact regarding vehicle-miles traveled (VMT) and carbon dioxide (CO 2 ) emissions. Thus, the work presents models with different scenarios in which trucks and drones originating from a central depot deliver parcels to recipient addresses in circular service zones. The results suggest that a system would perform best with drones serving nearby addresses and trucks delivering to the farther ones.
Another variant for the problem is the Drone Delivery Problem (DDP) by Dorling et al. (2017) where only drones perform the delivery. The distribution center is the return point of routes, where the delivery man changes the battery of the UAV and loads it with another parcel. Moreover, drones can visit a location only once, and perform multiple trips per route. The paper provides a MILP implementation for small instances and for large ones a SA heuristic was proposed to find suboptimal solutions to the DDP within a limited run time. Vorotnikov et al. (2017) also consider the drone as the only vehicle. The addressed problem determines the drone route by solving the TSP with three approaches: the Monte Carlo method, the method of reduction of rows and columns and the averaged coefficient method. Research results show that reduction of rows and columns presents the best cost. However, according to the time criterion for the solution, the latter has a significant advantage, which is especially noticeable with an increase in the number of objects. Othman et al. (2017) introduce four different variants for DDPs. In the first, the drone launches from the truck, and while it is delivering a parcel, the truck is allowed to wait for the drone to return at the previous rendezvous point whereas in the second description the truck is not allowed to wait at the previous rendezvous point. The others definitions are similar, in the third, the truck is allowed to wait for the drone to return at the previous rendezvous point and the drone can be transported by the truck before proceeding to its next customer. Finally, in the last one, the truck is not allowed to wait for the returning of the drone at the previous rendezvous point. They modeled this problem as a problem of finding a special type of a path in a graph of a special structure, and proposed a polynomial-time approximation algorithm for the graph problem in metric graphs. Coelho et al. (2017) present an approach using the concept of smart cities to introduce a multiobjective green UAV routing problem. The problem considers a dynamic scenario in which new orders may arrive at any moment. The scenario is composed of airspace divided into two layers: a lower layer, located at low altitudes with lower speed drones and an upper layer, located at high altitudes composed of faster drones. A Multi-Objective Smart Pool Search (MOSPOOLS) Matheuristic is used to obtain a solution to the proposed problem.
The use of drones in a humanitarian manner can have a significant role, especially in post-disaster operations. Whereas it is possible to take advantage of the favorable characteristics of drones, such as the high speed to deliver urgently needed small items in locations with difficult access.
The work of Scott and Scott (2017) is concerned with creating drone delivery models for health-care to locate a warehouse with supplies and drone nests to complete final delivery. They developed two models to the problem; the first one has an objective to minimize the total weighted delivery time, i.e., road covered by the truck plus air covered by the drone. The second model intends to minimize the maximum weighted time for truck/drone delivery subject to a budget and drone travel distance constraint. Table 2 presents an overview of the works related to drone delivery. The authors and year of the publication are in the first column. The second column presents the models proposed. The columns three and four indicate how many vehicles each problem uses. The fifth column states if the node the drone launch from can be the same it returns to. The sixth column indicates if the vehicles follow the same network. Finally, the last column shows the method each author used to address the delivery problem. The column "Problem" presents the acronym adopted in the respective paper or the type they belong to if no definition is provided. Different acronyms have the same description, as can be seen in Ha et al. (2015) and Ferrandez et al. (2016).
The table summarizes the differences and similarity between the formulations using delivery with drones. A distinct number of drone and truck is used for each work as well as the road network the vehicles travels in. This work deals two variants, the one introduced by Murray and Chu (2015) and the one proposed by Agatz et al. (2016).
Problem Description
This work assigns a variant of the classical Traveling Salesman Problem (TSP) called Flying Sidekick Traveling Salesman Problem (FSTSP) first introduced by Murray and Chu (2015).
The FSTSP, also considered an NP-Hard problem, describes a drone and a truck that collaboratively delivery parcels to a set of customers. One drone and one truck execute the deliveries. The drone can launch and return to the truck at any customer. However, the drone can only visit eligible customers, i.e., customers that the parcel does not exceed the vehicle payload capacity and respects the endurance of the vehicle to complete the trip. Additionally, while the drone goes to a node, the truck can independently travel to deliver parcels to others customers, considering that the vehicle proceeds to the return node before finishes the endurance of the drone. For better understanding, we define some parameter notation to represent the sets used in the problem. Let C = {1, ..., c} denotes the set of all customers and let the subset C ′ ⊆ C represents the eligible customers to the drone. The distribution center (depot) is denoted by node 0. Thus N = {0, 1, ..., c} represents all nodes in the network.
The drone is allowed to fly in a straight line ignoring the road restrictions; however, the truck must follow the road network. Additionally, vehicles may have different speed. Thus, we consider the travel time instead of the travel distance to respect the traffic characteristics. The truck travel time from node i to node j is given by τ ij . Analogously, τ ′ ij represents the travel time of the drone. Moreover, we consider service time, i.e., the time required before the drone launches to change a battery and load the vehicle with a parcel, represented by s l . s r is the time necessary to recover the drone after a delivery. The drone has a limited flight endurance defined by parameter e.
The objective is to find the minimum cost route that serves all customer locations by either the truck or drone. Both the departure and return to the depot can occur in tandem or independently. However, while traveling in tandem, the truck must transport the drone to save battery. As the drone has unitary capacity, it has to pick up a new parcel at the truck after each delivery. Figure 1 represents a delivery route where nine customer locations are needing to be attended. It is reasonable to observe that serving two customers (customers 4 and 8) with the drone instead of the truck reduces the total distance traveled by truck. Therefore, by shortening the truck route, it is possible to reduce the overall delivery time required to serve all customers.
As demonstrated in Figure 1 a customer can be truck-only, drone-only or mixed. Mixed are the customers where the drone launch or return (customer 3, 5, 6, 9). Drone-only customers are the ones visited only by drones (customer 4, 8). Finally, truck-only customers are the remaining truck customers (customer 1, 2, 7).
The drone has a trip composed of three distinct nodes: launch, visiting, and return. The launch node is where the driver can change the battery and load the drone with a parcel. A visiting node consists of a customer that is visited only by the drone. The return node describes the location where the driver recovers the drone. The return node can either be a customer serviced by the truck or the depot, though it can not be a customer already visited. Furthermore, in the case of the depot, the drone cannot be relaunched. All drone trips must respect the endurance flight limit, i.e., the drone has to have enough battery to visit the three nodes and, when necessary, wait for the truck in the return node.
As the FSTSP considers only one drone, two prohibited situations exist in the route, as reported by Ponza (2016). Figure 2 describes those situations within continuous line representing the truck tour and dashed lines the drone tour. The first case represented in Figure 2a happens when a drone trip starts before the last trip finishes, i.e., a drone is launched from customer c when the trip (a, b, d) was not finished yet. The other situation occurs when a drone trip starts and ends before the last trip finishes, i.e., a launch (node c) and a return (node d) is inside another trip of launch-return (nodes a and f ) described in Figure 2b.
HGVNS Algorithm
The algorithm framework used in this work presents an exact model to obtain an initial solution and the General Variable Neighborhood Search (GVNS) metaheuristic as an improvement heuristic. Algorithm 1 illustrates the hybrid GVNS heuristic framework, named HGVNS. The algorithm is a simple hybrid heuristic integrating an exact method and heuristics. The HGVNS is composed of three steps, described below. The input data is the set of customers and the cost matrices. Two different cost matrices are used: the truck and the drone travel time, τ and τ ′ , respectively (see Section 3).
The first step of the algorithm creates an initial solution where the truck visits all customers. Since this is a regular TSP, a well-studied problem, we use a Mixed-Integer Programming (MIP) solver to determine the optimal TSP route. The solver requires the truck travel time matrix τ and the entire set of customers to generate the solution (line 1). The solution s * T SP is the optimal solution for the TSP, as it is also a feasible solution for the FSTSP, but not including any drone trip. s * T SP is given to CreateInitialSolution() procedure to add the drone trips to the route (line 2). Finally, the initial solution s is given to the GVNS procedure (line 3), to obtain the improved solution s * , which is then returned by HGVNS.
Algorithm 1 HGVNS Framework
The CreateInitialSolution() procedure, based on the heuristic presented by Murray and Chu (2015), removes some of the truck's customers and makes them drone customers. Algorithm 2 illustrates the procedure. The algorithm initializes variable maxSavings and truckSubRoutes. Following, for each eligible drone customer j is computed the savings of removing this customer from the truck route (line 5).
Algorithm 2 CreateInitialSolution
Require: cost matrices (τ , τ ′ ); customers (C, C ′ ) Ensure: (lines 4 -13). If the answer is positive, customer j may be relocated in the truck route. If the answer is negative, customer j may be visited by the drone, creating a new sub-route. Ultimately, if one of the operations above reduces the delivery time, the routes of the vehicles are updated (line 15).
The GVNS presented in the third step of HGVNS is defined in Algorithm 3. Given an initial solution (line 3), the main operation iterates over the parameter k until it reaches the stop condition, which is the cardinality of the list of neighborhoods |N |. Each interaction generates a solution s ′ from a neighborhood N (k) (s) of current solution s (line 4).
Next, a local search, named RVND (Algorithm 4), is performed in solution s ′ to obtain s ′′ as the local optimum (line 5). If the cost value of the obtained solution is better than the incumbent one, s ′′ is assigned as the current solution and the search continues with N (k) (s). Otherwise, the next neighborhood in the list N is executed (lines 6 -11). Both procedures, GVNS and RVND use the same neighborhoods list N .
Algorithm 3 General Variable Neighborhood Search
Require: s, τ , τ ′ , C, C ′ , k max Ensure: s 1: Initialize Neighborhood List (N ) 2: k ← 1 3: while k ≤ k max do 4: Generate a point s ′ at random from k th neighborhood of s (s ′ ∈ N (k) (s)). The local search used by GVNS is the Randomized Variable Neighborhood Descent (RVND). The RVND, presented in Algorithm 4, differs from VND by randomly choosing the next neighborhood. First, the heuristic initializes a list of neighborhoods and then shuffles it (line 1). The algorithm ends when counter k reaches the stop condition, which in this case is to perform all neighborhoods without improvement (line 3). We apply the Best Improvement (BI) approach that exhaustively explores the neighborhood and returns one of the solutions with the lowest solution value (line 4). Then, the neighborhood solution s ′ is compared with the current solution s. Thus, if s ′ is better than s, s ′ is the new solution. The list of neighborhoods is reinitialized and shuffled again (lines 5 -7) and the counter is restarted k. The neighborhoods used in this method are detailed in the Sections 4.2 to 4.8.
5:
if s ′ < s then
Neighborhoods
The data structure used to store the solution of both truck and drone is an array. However, the drone's array is composed of a tuple of three values, indicating the launch, visit and return node of a trip.
When a change is performed on a truck-only customer the cost of the new solution can be calculated in O(1), using the following formula: In Equation (1), set θ − represents the removed edges and θ + the reconnected edges set. Sets θ − and θ + have a fixed size. Considering n the number of customer, |θ − | << n and |θ + | << n . An example of the use of Equation (1) is to calculate the new cost of the solution presented in Figure 4(b). The edges {(0,5), (3,1), (6,2)} ∈ θ − were removed and edges {(0,1), (6,5), (3,2)} ∈ θ + were reconnected. Therefore, the new solution cost is represented in Equation 2, considering that τ is the travel time between two customers.
Now, if the movement is performed in mixed customers, different situations arise. However, the drone trip only modifies the solution cost when its last trip returns to the depot. Thus, even when the drone trip is modified, the new solution cost may be defined entirely by the changes in the route of the truck.
Hereafter is presented the seven neighborhoods structures. A neighborhood movement is accepted as long as it results in a feasible solution: the move must not violate the endurance constraint, and it must not create the prohibited situations of Figure 2.
Reinsertion
This neighborhood removes a customer and reinserts it in other position in a tentative solution. Figure 3(a) illustrates a relatively straightforward move in which a truck-only customer is relocated. The path represented by Figure 3 (b) -(c) relocates the customer 1, additionally, in both relocations the truck route during a drone trip increases, thus, the drone trip is changed. In case (b) the drone still launches from customer 1, however, in case (c) the trip is inverted, turning customer 1 the return node of a drone's trip.
Or-opt2
This neighborhood relocates two adjacent nodes of the truck path in an arbitrary position in a tentative solution. In Figure 4 the consecutive nodes 5 and 3 are relocated first in example (a) not affecting the route of the drone and in example (b) increasing the sub-route with a drone trip.
Exchange
This neighborhood is set to swap a customer with another one in a tentative solution. A violation of the prohibition 2b would occur if the launch and return nodes remain the same. Thus, customer 2 was chosen to be the new returning node of the drone's trip, this way, the trip {5, 7, 1} became {5, 7, 2}. Moreover, the second drone trip {1, 4, 6} continues the same, as the distances are symmetric.
Exchange(2,1)
This neighborhood is set to swap two adjacent customers with another customer in a tentative solution. Following the explained in neighborhood 4.4, the first trip {5, 7, 1} must have another return node to not violate prohibition shown in Figure 2b. In this, case node 2 was the one selected. Furthermore, the trip {6, 3, 1} was inverted, and the travel distance of the truck enhanced, increasing the waiting time of the drone.
Exchange(2,2)
This neighborhood is set to swap two adjacent customers with another two adjacent customers in a tentative solution. In the example illustrated by Figure
2-opt
This move performs a 2-opt on the solution: two edges are removed from a tentative solution and the two paths created are reconnected in the only possible way to keep a valid tour. The removed edges must be truck-only or mixed nodes, and if one of these edges are under a drone trip another return node must be selected (represented by trip {5, 7, 1} that changed to {5, 7, 6} in Figure 8). Figure 8 represents the 2-opt move where edges (5, 3) and (2, 8) are removed. Thus edges (5, 2) and (3, 8) are created, reversing the path between nodes 5 and 8.
Relocate Customer
This move is based on the method Shift(1,0) described in Penna et al. (2013). The purpose of this method is to reduce significantly delivery time, therefore removing a customer from the route of the truck and, then, inserting in new drone trip. First, sub-routes are formed envisioning not to violate the prohibitions. For example, in Figure 9 the sub-routes are {0, 5} and {1, 6, 4, 2, 8}. Afterwards, an evaluation is made to determine the launch, the new drone customer, and the returning node. If a triple combination of nodes is feasible, it is then accepted, and the node which is going to be visited by the drone node is removed from the route of the truck and inserted in the route of the drone between the launch and return node. In the example, the customers 1, 4 and 6 were chosen to be the launch, visit and return of the drone, respectively.
Computational Experiments
A series of computational experiments was performed to test the effectiveness of the HGVNS. The algorithm was coded in C++ (g++ 5.3.1) and executed on an Intel R Core TM i7 Processor 3.6 GHz with 16 GB of RAM running Ubuntu Linux 16.04. The MIP solver used to find a optimal TSP solution was Concorde 1 3.12.19 with CPLEX 12.6.3. HGVNS was tested in three benchmark sets. Two sets found in literature, advised by Ponza (2016) and by Agatz et al. (2016). A new set of instances was developed based on the TSPLIB 2 well-known instances for the TSP.
In the tables presented hereafter, the column Inst.denotes the instance name and n represents the number of nodes (the depot and n − 1 customers). Moreover, BKS indicates the best-known solution value found in the literature. Columns s F ST SP and T ime denote, respectively, the best solution value and the computational time in seconds associated to the MIP running time plus the GVNS algorithm, gap states the difference between s F ST SP and BKS. Column s * T SP describes the optimal tour obtained with the TSP Solver Concorde. Finally, s F ST SP is the average solution cost of ten runs and gap is the gap between the average solutions and the BKS. A negative gap indicates an improvement.
Results of HGVNS in Ponza (2016) instances
The first set of instances was provided by Ponza (2016) based on the original formulation of Murray and Chu (2015). The instances were generated over a map of 32km × 32km, to reflect the idea of some companies about a 10 km endurance of the drone and thus be able to perform some feasible routes. In the map, the depot is always at coordinate (0, 0) and the locations of the customers are randomly generated.
The parameters previously defined in this work assumed different values: the service time to the launch (s l ) and return (s r ) of the drone is 0.6 minutes and 0.5 minutes, respectively. The drone speed is 80.47 km/h (50 mph), and the truck speed is 56.32 km/h (35 mph). The endurance of the drone is 24 minutes. Additionally, the travel time matrix of both vehicles is calculated based on the same road network, and, finally, the percentage of feasible drone customers is 80%. This experiment was performed by running HGVNS ten times for each instance. Tables 3 -4 provide the results. Table 3 advises the improvement of addressing the FSTSP, i.e., truck and drone in the delivery process, over the classical TSP. According to the results, it is possible to reduce the total travel time up to 30.38% in this set of instances and get an average improvement of 19.50%. Table 4 compares the FSTSP results provided by Ponza (2016) with HGVNS results. It is possible to observe that HGVNS acquired better results in all instances, achieving an improvement of 24.84% in instance 150.2. Regarding computational time, HGVNS is executed within a short runtime, 10.15 seconds on average. However, it is difficult to precisely compare to the method runtime of Ponza (2016) as the work does not report the computer configuration. Agatz et al. (2016)
instances
We also address the TSP-D, a problem introduced by Agatz et al. (2016) which presents different restrictions compared to the FSTSP described in this work. The FSTSP defines an endurance and service time at the launch and return of the drone to the truck. Thus, some of the constraints of the FSTSP have been relaxed to adapt to the problem. First, the drone has unlimited endurance (e = ∞), therefore, as result of this definition, the period a vehicle can wait for the other to arrive at the return node is indefinite. Lastly, service time is not required for the launch or return of the drone, i.e., s l = s r = 0.
The following experiments are using the instances for the TSP-D provided by Agatz et al. (2016). However, the authors did not provide a complete review of the experiments, thus, we were unable to compare the results obtained with HGVNS and the method used by Agatz et al. (2016).
In the original paper, it is explained with details how to generate the coordinates of the locations. The authors proposed three sets of instances changing coordinates distribution. The first distribution type is called uniform. In this instances, the coordinates for every location are drawn independently and uniform. The second type of instance is single-center that uses an angle α and a distance r from a normal distribution to obtain instances with locations closer to the center (0, 0). Finally, double-center instances use the same strategy of single-center, but every location is translated by 200 distance units over the x-axis with a certain probability.
For each type of instance three different scenarios exist, which determines different speed configurations for the drone. A value of α = 1 determines that both vehicles have the same speed. The drone speed is twice as fast as the truck when α = 2 and, finally, the drone speed is three times as fast as the truck when α = 3.
Tables 5 -7 are broken down into the scenarios labeled α. The tables present for each group of n the average of ten distinct instances. The table's columns follow the same definitions aforementioned. The complete table with the results running HGVNS can be seen in the supplemental material.
According to the tables it is possible to observe that scenario α = 1 obtained the worst results in all set of instancesuniform, single-center and double-center when compared with the other two subcategories of alpha. The different solution cost is due to the higher speed of the drone that enables it to visit a greater number of customers, and also visit customers within a further distance. The higher improvement occurred in the instance with 75 customers distributed following the configuration single-center. The instance achieved an improvement of 62.24% in comparison to the TSP optimal solution. Furthermore, the instances presenting a smaller number of customers and the same speed for both vehicles reported the smallest improvement. Additionally, the set unif orm showed the lowest average improvement. As concerns computational time, the different distribution of customers does not affect runtime, however, when the vehicles present the same speed, and they are uniformly distributed runtime increases.
Results of HGVNS to new instances from TSPLIB
Here we propose a new set of instances associated with the original problem specifications described by Murray and Chu (2015). The proposal of new instances set is due to three main reasons. First, the instances introduced by Murray and Chu (2015) present a small number of customers, up to 10 customers in the FSTSP description. Further, Ponza (2016) considered in his work the same road network for both vehicles, thus to compare our results with the ones of his work, the travel distance was calculated using Euclidean distance. Finally, in Agatz et al. (2016) different restrictions to the FSTSP are stated, such as the endurance of the drone and service time (see Section 5.2). Therefore, we introduce instances based on the well-known instances of TSPLIB. A total of 25 instances containing between 51 and 200 nodes were selected and appropriately adapted to the problem. Drones are usually faster than trucks, since they are not affected by congestion, and they can fly in a straight line not following the street network. Unlike drone, trucks must respect traffic sign regulation and follow the street network. Therefore, it is reasonable to consider different road network for the vehicles. The Euclidean metric is used to describe the drone travel distance considering the straight line flight, and in order to simulate the city block distance, it is used Manhattan distance to represent the truck travel distance. From the TSP coordinates, two different matrices were calculated to represent the road network that each vehicle travel. The truck matrix matrix was computed using Manhattan distance, and another was estimated using the Euclidean metric to describe the drone travel distance. Additionally, both distances were divided by the speed of each vehicle to obtain the time required to travel among the customers.
The customers considered eligible to drone delivery are randomly generated such that for every instance there are between 85% and 90% serviceable customers and the other 10% to 15% are truck-only customers due to geographical limitations, exceedingly heavy parcels or other criteria.
Furthermore, we considered that both vehicles speed is 40 km/h and the drone endurance is 40 minutes. All these FSTSP characteristics are based on the ones presented by Murray and Chu (2015).
This experiment was performed by running HGVNS ten times for each instance. Table 8 provides the results obtained with HGVNS. The table follows the definitions mentioned before.
HGVNS was able to find better solutions when compared to TSP optimal solutions for all instances. For instance pr107, HGVNS obtained the best solution cost, presenting an improvement of 4%. Meanwhile, instance d198 had the minimal improvement with 0.35%, when compared to the TSP optimal solution value. Figures 10 and 11 shows the solution for pr107 instance, where the colored lines represent the UAV routes while the continuous black line represents the truck route. HGVNS manages to assign 16 delivers to the drone. It is possible to observe that the triangulations performed by the truck in the route illustrated by Figure 10 were, in its majority, replaced by a drone trip illustrated by Figure 11, transforming the truck trip into a more orthogonal one. Average -13.49 -11.26 23.53
Concluding Remarks
The Flying Sidekick Traveling Salesman Problem (FSTSP) concerns a variant of TSP which has been showing potential over the last years through the constant investments of companies such as JD.com, Amazon, Mercedes-Benz, among others. The problem consists in the use of Unmanned Aerial Vehicles (UAV), also known as drones, working collaboratively with trucks in parcel delivery. We presented a hybrid heuristic using the complementary characteristics of truck and drone to perform deliveries with reduced time. The algorithm implemented in this work, named HGVNS, initially uses a MIP solver to obtain the optimal TSP tour, whose solution is subsequently enhanced by the metaheuristic General Variable Neighborhood Search (GVNS).
HGVNS was tested in three benchmark sets, two sets found in literature: the instances presented by Ponza (2016) and the instances introduced by Agatz et al. (2016). The third set is a new one developed based on TSPLIB instances for the TSP.
In the instances introduced by Ponza (2016), HGVNS improved the solution of the majority instances achieving an improvement up to 24.84% to the previously best-known solution values.
A variant of the FSTSP proposed by Agatz et al. (2016) called TSP-D was also studied. The computational tests using their 1383 instances evidenced that the speed of the drone interferes with the total delivery time, however, the effect of double or triple truck speed is very similar. That said, the best improvement occurred in one instance with 75 customers where the drone traveled twice as fast as the truck, and the least improvements were observed for instances where both vehicles presented the same speed.
A new set of instances based on the well-known instances of TSPLIB is proposed to fulfill the need large instances following the original model of Murray and Chu (2015). The best solution found in these instances shows an improvement of 45.48% when over the optimal TSP tour value.
A new modality of parcel distribution is raising from the increasing development of drones and the effort of companies to perform deliveries faster at a reduced cost. Thus, this work has plenty to contribute demonstrating that collaborative work of truck and drone can drastically decrease delivery times up to 67.79%.
Future research directions include formulating a Mixed Integer Linear Program for the Flying Sidekick Traveling Salesman Problem. Moreover, it opens a huge field of research in distribution and logistics area. For example, one can study the capacitated version of the problem with multiple delivery trucks and multiple drones per truck. | 2018-04-26T23:26:33.835Z | 2018-04-11T00:00:00.000 | {
"year": 2018,
"sha1": "18ebe58d2a45d9d69e86f6f67b742f9ec1cfdc6b",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1804.03954",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "18ebe58d2a45d9d69e86f6f67b742f9ec1cfdc6b",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
53686328 | pes2o/s2orc | v3-fos-license | Dynamic aperture limitation in $e^+e^-$ colliders due to synchrotron radiation in quadrupoles
In a lepton storage ring of very high energy (e.g. in the $e^+e^-$ Higgs factory) synchrotron radiation from quadrupoles constrains transverse dynamic aperture even in the absence of any magnetic nonlinearities. This was observed in tracking for LEP and the Future Circular $e^+e^-$ Collider (FCC-ee). Here we describe a new mechanism of instability created by modulation of the particle energy at the double betatron frequency by synchrotron radiation in the quadrupoles. Energy modulation varies transverse focusing strength at the same frequency and creates a parametric resonance of the betatron oscillations with unusual properties. It occurs at arbitrary betatron frequency (the resonant detuning is always zero) and the magnitude of the parameter modulation of the betatron oscillation (strength of the resonance driving term) depends on the oscillation amplitude. Equilibrium between the radiation damping and the resonant excitation gives the boundary of the stable motion. Starting from 6d equations of motion we derive and solve the relevant differential equation describing the resonance, and show good agreement between analytical results and numerical simulation.
I. INTRODUCTION
Two future electron-positron colliders FCC-ee (CERN) [1] and CEPC (IHEP, China) [2] are now under development to carry experiments in the center-of-mass energy range from 90 GeV to 350 GeV. In these projects strong synchrotron radiation (power P ∝ E 4 ) is a source of effects negligible at low energy but essential at high energy, which influence beam dynamics and collider performance. One example is luminosity degradation caused by the particle radiation in the collective field of the opposite bunch (beamstrahlung [3]) either due to the particle loss [4] or because of the beam energy spread increase [5]. Another example is about reduction of the transverse dynamic aperture due to synchrotron radiation from quadrupole magnets. John Jowett is the first who pointed out this effect in LEP collider with maximum beam energy about 100 GeV [6]. Switching on the radiation from quadrupoles in the particle tracking decreased the stable betatron amplitude as compared to the radiation from bending magnets only. Jowett gave a description of this effect: "Here I shall briefly describe a new effect which I propose to call Radiative Beta-Synchrotron Coupling (RBSC). It is a non-resonant effect. A particle with large betatron amplitude makes an extra energy loss by radiation in quadrupoles. If you imagine that its betatron amplitude does not change much over a number of synchrotron oscillations (that is not essential to the effect), you can say that its effective stable phase angle will change to reflect the greater energy loss. The particle will * A.V. Bogomyagkov@inp.nsk.su tend to oscillate about a displaced fixed point in the synchrotron phase plane. This results in a growth of the oscillation amplitude which may eventually lead the particle outside the stable region in synchrotron phase space." Jowett illustrates above assertion with synchrotron phase trajectories for two stable particles (denoted by P and Q in Figure 1) and one unstable (denoted by R) [7]. The tracking incorporates only radiation damping (quantum noise is absent) from both bending and quadrupole magnets.
In [8] Jowett has mentioned that the RBSC rarely occurs in isolation: "Most often some other effect limits the dynamic aperture before the RBSC limit is reached. In the standard (LEP) lattice the horizontal dynamic aperture is limited by a rather strong shift of the vertical tune with the horizontal action variable, bringing Qy down onto the integer.
Our interests to the subject was inspired by the FCCee lattice study. With the help of SAD accelerator design code [9] K. Oide demonstrated FCC-ee transverse dynamic aperture reduction due to radiation from quadrupoles [10], "While the radiation loss in dipoles improves the aperture, especially at tt, due to the strong damping, the radiation loss in the quadrupoles for particles with large betatron amplitudes reduces the dynamic aperture. This is due to the induced synchrotron motion through the radiation loss.
We crosschecked the simulation made by Oide using MAD-X PTC [11] and the homemade software TracKing [12] including SR from quadrupoles and found good agreement between all three codes. Nevertheless, detailed consideration has shown different nature of the particle loss in horizontal and vertical planes. Radiation from quadrupoles at large horizontal amplitude in-FIG. 1. The vertical RBSC instability in LEP at 90 GeV projected into synchrotron phase space. Three lines show the motion of three particles P, Q and R with different initial conditions. P starts with zero betatron amplitude and large longitudinal deviation. It remains stable and damps to the equilibrium synchrotron phase. Q and R start with longitudinal coordinates corresponding to the closed orbit but with vertical amplitude 5.5 mm and 6 mm respectively. Q is stable while Rs amplitude grows in few turns until it is lost. A fourth particle has been tracked with quantum emission to give the cloud of points representing the core of the beam around the closed orbit.
deed greatly shifts the synchronous phase, induces large synchrotron oscillation, excites strong synchro-betatron resonances and, finally, moves the horizontal tune toward the integer resonance (due to the nonlinear chromatic and geometrical aberrations) according to the mechanism described by Jowett and Oide. However, in the vertical plane the picture of the particle loss was quite different. The energy loss from radiation in quadrupoles for the vertical plane is substantially smaller than for the horizontal plane and does not provide large displacement of the synchronous phase and synchrotron oscillation. Instead, we found that increase of the vertical betatron oscillation amplitude modifies the vertical damping until, at some threshold, the damping changes to rising and the particle gets lost.
This new effect is a parametric resonance in oscillations with friction; radiation from quadrupoles modulates the particle energy at the double betatron frequency; therefore, quadrupole focusing strength also varies at the doubled betatron frequency creating the resonant condition. However, due to friction, resonance develops only if oscillation amplitude is larger than a certain value. The remarkable property of this resonance is that it occurs at any betatron tune (not exactly at half-integer) and hence can be labeled as "self-inducing parametric resonance.
We will derive particle equations of motion in presence of the radiation from quadrupoles, consider particle loss for both transverse planes and compare results with computer simulation.
II. PARAMETERS VALUES AND OBSERVATIONS FROM TRACKING
For the FCC-ee lattice "FCCee z 202 nosol 13.seq" at 45 GeV Figure 2 shows dynamic aperture obtained by MADX PTC [11] tracking with synchrotron radiation from all magnetic elements and without, and obtained by homemade software (TracKing [12]) tracking with synchrotron radiation from dipoles only and with radiation from dipoles and quadrupoles. The observation point is interaction point (IP).
Inclusion of synchrotron radiation in quadrupoles into tracking software decreases dynamic aperture • in vertical direction from R y = 142σ y to R y = 57σ y , FCC-ee lattice has two IPs and Table I gives the parameters relevant to our study. Table II lists total synchrotron radiation energy loss from different type of magnets. For particles with vertical amplitude energy loss in final focus (FF) quadrupoles dominates the loss in the arc quadrupoles. For particles with horizontal amplitude energy losses in FF and in the arc quadrupoles are comparable and significantly larger than for vertical amplitudes. Averaged over betatron phases radiation from quadrupoles is where Γ = Cγ 2π denote averaging over circumference . . . = . . . ds/Π, and For understanding the reasons of particle loss, we studied particle trajectories, obtained from tracking, in vicinity of dynamic aperture border. Figure 3 shows phase and time trajectories of the first unstable (with accuracy to our step) particle with initial vertical coordinate y = 58σ y and remaining five coordinates are zero. In the longitudinal plane {P T, T } synchrotron oscillations excited by additional power loss from quadrupoless are damped to zero but suddenly something forces particle to walk away. Since, the longitudinal oscillations are damped they can not be the source of instability, the most probable suspect is vertical motion. In spite of initial horizontal coordinates being zero, horizontal motion is excited by nonlinear transverse coupling, however the amplitude of stable motion is not large (< 5σ x top left plot on Fig. 3).
Unexpected observations come from Figure 4 showing the change of envelope evolution for particles with initial vertical coordinate around the dynamic aperture boundary y = {50; 55; 57.5; 58} × σ y , horizontal coordinates are zero, longitudinal are chosen with respect to the new synchronous point. For the small initial amplitudes, vertical oscillations experience exponential damping, as expected, but with increase of the initial vertical amplitude and contribution of radiation power loss from quadrupoles, the envelope changes shape (left bottom plot on Fig. 4) until damping is replaced by excitation. Figures 5 and 6 show phase and time trajectories of the first unstable particle with initial horizontal coordinate x = 67.1σ x and remaining five zero. There is no damping and walking away in the longitudinal plane {P T, T } as in case of vertical initial conditions Figure 3. On Figure 6 notice the right plot showing phase advance per turn with respect to turn number; the particle action starts to grow after phase advance per turn reaches an integer. Before studing FCC-ee transverse dynamic aperture decreased by the radiation in the quadrupole magnets, we looked at the dynamic aperture caused by the lattice nonlinearities only. The transverse dynamic aperture is limited by the sextupoles for linear chromaticity correction, Maxwellian magnet fringe fields [13] and kinematic terms reflecting non-paraxiallity of particle motion in the first order. All chromatic sextupoles are combined in pairs with the I optical transformation in between [10]. Such arrangement cancels quadratic geometrical aberrations; therefore, the leading terms of nonlinear perturbation are cubic ones. The dynamic aperture is optimized by going through the sextupole pairs setting with a downhill simplex method scripted within SAD. It is assumed, that each sextupole pair in the arcs has individual feeding; therefore, the total optimization degrees of freedom are around 300. Figure 7 shows betatron tunes as functions of initial amplitude. Both tunes move toward the nearest integer FIG. 6. Action and phase evolution for two particles with initial conditions: stable particle (red) with x0 = 66σx and unstable particle (blue) with x0 = 67.1σx, the remaining five initial coordinates are zero. Square root of action (left). Action beating due to synchro-betatron coupling is clearly visible. Phase advance (right). Particle becomes unstable when the phase advance crosses integer value.
resonance ν x = 269, ν y = 267 with increase of initial amplitude. However, due to the symmetry of the potential,
We start from Hamiltonian
where c is the speed of light, p 0 and E 0 are the reference momentum and energy, e is the electron charge, Bρ = −e/p 0 c is the rigidity, K 0 = B y (0)/Bρ is the reference orbit curvature, K 1 = (dB y /dx)/Bρ is the normalized quadrupole gradient, K 2 = (d 2 B y /dx 2 )/Bρ is the normalized sextupole strength, p σ = ∆E/p 0 c is the longitudinal momentum, p x,y = P x,y /p 0 are the normalized transverse momenta, V 0 , λ RF are the RF cavity voltage amplitude and wave length, s is the azimuth along the orbit, σ = s − ct is the longitudinal coordinate conjugate to the longitudinal momentum p σ , s 0 is the position of point like RF cavity, φ s is the phase of RF field. Radiation power with assumption of negligible electron where B 2 = (B y + xdB y /dx) 2 + y 2 (dB y /dx) 2 and we dropped terms with p 2 σ and 4K 0 K 1 xp σ , 2K 2 1 p σ (x 2 + y 2 ). The next step is to expand Hamiltonian (2) up to third order in all variables, neglect the term K 0 x(p 2 x +p 2 y )/2 due to its smallness, and obtain equations of motion where radiation is included by hand with the term describing the change of momenta, where Γ = Cγ 2π E 4 0 p0c , and we expanded RF related cos(. . . ) to first order of σ. Note, that radiation from quadrupoles produces nonlinear terms ΓK 2 1 p x,y x 2 , ΓK 2 1 p x,y y 2 in (5) and (7) similar to the ones produced by quadrupole fringe [13]. However, their influence is small in our case and we drop them.
IV. SOLUTION OF LONGITUDINAL EQUATIONS OF MOTION
At first, we will solve longitudinal equations of motion (8) and (9) considering motion in the vertical plane and neglecting motion in the horizontal plane. Due to the fact that longitudinal motion is much slower than transverse (synchrotron oscillation frequency is lower than betatron), we consider vertical oscillation amplitude independent of time and solve decoupled equations. Splitting horizontal motion into betatron part and dispersion part x = x β +ηp σ , p x = p xβ +ξp σ , neglecting betatron motion x β = 0, p xβ = 0 yields equations Vertical motion through nonlinear coupling excites horizontal oscillations (top left on FIG. 3), however small (≈ 5σ x for y 0 = 58σ y ), and, according to Table III (second column, multiplying by (5/67) 2 ≈ 6 · 10 −3 ), excited by horizontal motion longitudinal oscillations are by order of magnitude smaller than the ones produced by vertical motion directly. Hence, we omit horizontal betatron oscillations in this section. This consideration and latter numerical oscillations will prove validity of our approximation in neglecting the nonlinear transverse coupling.
Averaging of the obtained equations over the revolution period (as usually done for synchrotron motion) introduces familiar quantities: momentum compaction the relative energy loss from dipoles per turn wave vector of synchrotron oscillations longitudinal damping decrement where Π = 2πR is he ring circumference, R is the average radius, angular brackets denote averaging over circumference . . . = . . . ds/Π, ν s is the synchrotron oscillations tune, the RF field phase is chosen according to (−eV 0 ) sin φ s = U 0 , I 4 and I 2 are the synchrotron integrals [14]. The factors ξ 2 and K 2 1 η 2 are small, and multiplication by p 2 σ makes them even smaller; therefore, we neglect them.
In order to deal with the terms y 2 and p 2 y , we use the principal solution of the vertical motion equation [15] y = A y f y + A * y f * where constant amplitude A y depends on initial conditions, f y is Floquet function with following properties where i is imaginary unit, β y is beta function, ψ y is betatron phase advance. Hence, where action relates to amplitudes as J y = 2A y A * y , Twiss parameter gamma is γ y = (1 + α 2 y )/β y , α y = −β y /2 and the subscript prime denotes d/ds.
In order to use Krylov-Bogolyubov averaging method we expand p 2 y and ΓK 2 1 y 2 into Fourier series: where k y = 2πν y /Π = ν y /R is a wave vector of vertical betatron oscillations with tune ν y , Applying averaging method and keeping constant and slowly oscillating terms (Jowett kept constant, but omitted oscillating terms in [16]) yields equations of motion where n = −[2ν y ] is the negative integer part of the double betatron tune and is the only slow oscillating harmonic.
A. Synchronous phase
Equating the right parts of the equations (28) and (29) to zero and eliminating the oscillating terms results in synchronous longitudinal point where the term with Γ corresponds to additional energy loss from radiation in quadrupoles, the other terms come from lengthening of particle trajectory. Jowett obtained similar equations in [6] and [17]. Particle with not adjusted initial conditions will develop synchrotron oscillations with respect to the new synchronous point. Using the longitudinal invariant yields maximum energy deviation
B. Solution without oscillating terms
Solution of equations (28) and (29) without oscillating terms is known and consists of the constant term describing the shift of synchronous energy, and two terms describing damping synchrotron oscillations (only for p σ ) (34)
C. Particular solution
Introducing ae y = (2ν y + n)/R and transforming the system of first order differential equations (28) and (29) into the the second order equation gives Particular solution of (35) is Since ae y k s α σ , we can rewrite solution as Apparently, solutions (36) and (39) should not depend on the initial betatron phase ϕ y , because in the averaging over the revolution period we lose all the information regarding particle initial transverse phase. Therefore, we replace complex betatron amplitude A y = |A y | exp(iϕ y ) with its absolute value |A y |. Putting it in the form comfortable for the future use we have where and χ 0 = arg(c n ).
V. SOLUTION OF VERTICAL EQUATIONS OF MOTION
With the same assumptions as in the previous paragraph equations (6) and (7) are where D = 2K 2 0 + 2K 0 K 1 η + K 3 0 η and for machines with separate functions magnets is negligible, we neglected the small term Γp y K 2 1 η 2 p 2 σ . We may apply Krylov-Bogolyubov averaging method directly to equations (42), (43), but it is more illustrative to apply it to y equation. During derivation of y equation we neglect the terms containing p σ , because it either oscillates with synchrotron tune or with double fractional part of betatron frequency, and after derivation will receive a small factor. The desired equation is (44) This is an equation of parametric oscillator with friction; the second term depends on p σ which contains terms oscillating at fractional double betatron frequency (40). It is also a Van der Pol oscillator (nonlinear friction, the the third term). Jowett obtained Van der Pol equation for nonlinear wiggler (combined quadrupole and sextupole) in [17]. We did not find large influence of nonlinear friction (Van der Pol oscillator) and, therefore, omitted it.
Substituting expression for p σ , we neglect the constant shift and damped synchrotron oscillations (34), and keep only particular solution (40) oscillating on fractional part of double betatron frequency, i.e. we consider only parametric resonance. Substituting principal solution for y (16), averaging and keeping only slowly oscillating terms yields equation for amplitude evolution The terms ΓK 2 1 β y α y and ΓK 2 1 β y are small and we neglect them, obtaining The real part of the obtained equation describes evolution of the |A y | (e.g. damping), the imaginary part describes the change of the betatron tune. In order to solve equation (46) we introduced coefficients where expression in angular brackets of B 2 is local chromaticity, which does not vanish when global chromaticity is compensated.
Distinguishing modulus and argument of amplitude A y = a y e iϕy , B 1 = |B 1 | e iϕ1 , B 2 = |B 2 | e iϕ2 and substituting in (46) results in two equations where |B 1 | sin(ϕ 1 ) = Im(B 1 ) = 1 2 ΓK 2 0 α y ≈ 0 is small and describes the change of vertical betatron tune because of damping; this is equivalent to ϕ 1 = 0. The second term in (50) describes tune dependence on amplitude. Equations (49) and (50) have complex topology in {a y , ϕ y } space (see Appendix), which has two stable points providing ϕ y = 0 where n is integer. At these points the modulus of amplitude is and using J y = 2A y A * y = 2a 2 y gives action .
The plus sign describes always damping amplitudes (stable), the minus sign, depending on initial action, describes either damping solutions (stable) or rising (unstable). This boundary action defines the border of dynamic aperture and is Existence of initial amplitudes with stable motion at parametric resonance is due to the friction (radiation damping).
VI. LONGITUDINAL AND HORIZONTAL MOTION
Equations of coupled horizontal and longitudinal motion (4), (5),(8), (9) with y = 0 and p y = 0 are similar to vertical and longitudinal (6) (7) with x β = 0 p xβ = 0. The unique for horizontal motion terms K 0 p σ in (5) responsible for dispersion and −K 0 x β in (8) will produce a synchro-betatron resonance at ν x ± ν s = integer. This resonance plays an important role, but out of scope of our work. Table III shows that the shift of synchronous point and amplitude of synchrotron oscillations are significantly larger for horizontal oscillations (second column) than for vertical (third column) at the boundary of dynamic aperture, if initial longitudinal coordinates are not adjusted to the new synchronous point. Observation of phase advance per turn (right) on Figure 6 suggests that particle is lost when phase advance reaches an integer (turn 65) and it happens when p σ = 7σ δ . Using the detuning coefficient and its chromaticity with given initial conditions we calculated the shift of the tune from each term Table IV. The sum of last three lines is exactly zero, which means that the tune is equal integer. Jx The border of dynamic aperture (54) is which needs to be compared with the tracking result R y = 57σ y . Scrutiny of tracking results showed that transverse nonlinear coupling decreases effective amplitude of vertical motion; therefore, the amplitude of longitudinal harmonic producing parametric resonance is about two times smaller than our predictions. Consideration of this correction increases dynamic aperture R y ≈ 37.2 × √ 2σ y = 52.6σ y , which corresponds well to tracking results.
Resemblance p σ = 2.8×10 −2 σ δ , which closely corresponds to the value p σ = 2.4 × 10 −2 σ δ on the right plot of Figure 11. Figure 12 and Figure 13 compare vertical action evolution from tracking and calculation with (53). The boundary of stable motion is 57.5σ y from tracking and 52.6σ y from calculations by (54). The harmonic |c n | for horizontal motion is about 30 times smaller than for vertical; therefore, modulation of the longitudinal motion happens at larger amplitudes, which are already unstable due to nonlinear dynamics. This is proven by spectra of horizontal and vertical motion for particle with initial condition x = 95.5σ x on Figure 14. The longitudinal harmonic at double betatron frequency is too small to be observed.
VIII. CONCLUSION
In horizontal plane, additional energy loss due to radiation in quadrupoles, shifts synchronous point and develops large synchrotron oscillations. Horizontal betatron tune dependence on amplitude and chromaticity of this detuning shift the tune toward the integer resonance resulting in particle loss. This is similar to Radiative Beta-Synchrotron Coupling (RBSC) proposed by Jowett [6].
Dynamic aperture reduction in the vertical plane with inclusion of synchrotron radiation in quadrupoles in FCC-ee is due to parametric resonance with modulation amplitude dependent on the square of oscillation amplitude. Radiation from quadrupoles modulates the particle energy at the double betatron frequency; therefore, quadrupole focusing strength also varies at the doubled betatron frequency creating the resonant condition. However, due to friction, resonance develops only if oscillation amplitude is larger than a certain value. The remarkable property of this resonance is that it occurs at any betatron tune (not exactly at half-integer) and, hence, can be labeled as "self-inducing parametric resonance. Our calculations give the border of dynamic aperture R y = 52.6σ y , which corresponds well to the tracking result R y = 57σ y . and A y = a y e iϕy , B 2 = |B 2 | e iϕ2 . Equations (A4) and (A5) have two stable points with ϕ y = 0 where n is integer. At these points the modulus of the amplitude is a y (s) = a y,0 e ±|B2|s .
As expected, all trajectories are diverging. Adding the damping term in the equation of the vertical motion yields y − (K 1 − (K 1 − K 2 η)p σ ) y + ΓK 2 0 y = 0 .
Because of damping we have different behavior depending the strength of the modulation amplitude: if modulation amplitude is small then all trajectories are stable (FIG. 16), if modulation amplitude is large then all trajectories are diverging (FIG. 17). 16. Evolution of the average particle trajectories, solution of equations (A4) and (A5) with the same initial amplitude and different initial phases. Initial amplitude corresponds to y0(ϕy = 0) = 50σy, with small modulation amplitude. 17. Evolution of the average particle trajectories, solution of equations (A4) and (A5) with the same initial amplitude and different initial phases. Initial amplitude corresponds to y0(ϕy = 0) = 50σy, with large modulation amplitude.
Appendix C: Parametric resonance with damping and amplitude dependent modulation In the realistic case of equation (44) with coefficients (47) and (48), the modulation amplitude depends on the square of the oscillation amplitude. Therefore, depending on initial amplitude either all trajectories are stable, or some are stable and others are unstable, or all unstable. Figures 18 (all trajectories are stable), 19 (some trajectories are unstable) and 20 (majority of trajectories are unstable) show numerical solution of equations (49) and (50) on the plane of the average particle trajectories y/σ y = 2 |A y | cos(ϕ y )/ √ ε y and p y /σ py = 2 |A y | sin(ϕ y )/ √ ε y , with three different initial amplitudes and uniformly distributed ϕ y between (0; 2π). All trajectories are stable for y0(ϕ y = 0) = 37σ y , and with larger initial amplitude number of unstable trajectories increases.
FIG. 18. Evolution of the average particle trajectories, solution of equations (49) and (50) with the same initial amplitude and different initial phases. Initial amplitude corresponds to y0(ϕy = 0) = 37σy FIG. 19. Evolution of the average particle trajectories, solution of equations (49) and (50) with the same initial amplitude and different initial phases. Initial amplitude corresponds to y0(ϕy = 0) = 40σy | 2018-11-12T04:50:50.000Z | 2018-11-12T00:00:00.000 | {
"year": 2019,
"sha1": "9d71c97ca1bd5f5fd96b2ca5544a28b5384b571f",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevAccelBeams.22.021001",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6ce0ec48aae9f147bb2ab1d5bcf0af2dd89adcea",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
54528605 | pes2o/s2orc | v3-fos-license | Experimental Research of Dynamic Response of Laser-Induced Film-Substrate System
Film-substrate’s interfacial bonding strength is closely related to film quality. An excellent interfacial bonding strength is the premise for the well use of film. The laser detecting technique of discrete scratches based on laser shockwave effect is a new method, which can measure interfacial bonding strength. With this technique, film-substrate system is of transient load of different laser energy, the relation between the dynamic response characteristics of such film-substrate system and film-substrate’s interfacial bonding strength is a core problem to be solved urgently. On this basis, this paper conducted research on the dynamic response characteristics of film-substrate system during laser loading process using detecting technique of PVDF patch sensor. Results show that under the irradiation of different laser energy, it can detect dynamic responses of theory models of different film-substrate system using PVDF patch sensor, wherein shockwave dynamic response and dynamic strain response are included. Laser energy and interfacial bonding strength are of a regular influence to the dynamic response of film-substrate system theory model.
Introduction
The laser detecting technique of discrete scratches is developed on the basis of laser spallation and laser scratch [1]- [4].It changes the film-substrate system by utilizing the pulsed laser shockwave effect.The induced highvoltage shockwave (stresswave) transmits to the internal film-substrate system and a dynamic respond between the interface of film-substrate system and internal materials occurs, while the detected dynamic signals can reflect the intrinsic bonding strength of dynamic interface.Delving into dynamic response characteristics of filmsubstrate system under irradiation of pulsed laser could monitor the propagation rule of shockwave and filmsubstrate interface effect at the same time, which was conductive to revealing the failure mechanism of film-substrate system.
The researches on detection of interfacial bonding performance and measurement of interfacial bonding strength by using the laser shockwave effect can date back to the 90s.American scientists Gupta et al. [5] measured the interface tension strength between two reinforced ceramic composites in 1992 and at the same time, ABAQUS software was also adopted for simulation of elastic wave propagation and for process of transient heat transfer.Interiorly, He Pengfei et al. [6] from Shanghai Jiaotong University probed into microscopic damages of composite microstructures under the irradiation of pulsed laser in 1994, explored the correlation between failure mechanism of interfacial bonding strength and interface cracking strength, proposed the laser detecting technique of discrete scratches based on above scientific achievements and made some progress.Shi Fen [7] and Shi Fen et al. [8] delved into stress-strain characteristics of film-substrate system to concretely analyze the filmsubstrate's interfacial bonding status and influence rule on dynamic system strain by laser energy.Besides, an experimental research on the distribution law for residual stress of film-substrate system was also carried out.The complexity of stress on the surface of laser-induced materials not only reflects on the effect of laser shockwave, but also on the comprehensive non-linear superimposed result of laser shockwave effect, laser thermal effect and thermal effect.
This paper detected the laser shockwave by bonding PVDF on the back of matrix based on the established theoretical film-substrate system and delved into both the laser energy and the influence on dynamic respond of film-substrate system by interfacial bonding strength.At this time, the detection results are free from the effect of laser thermal and thermal stress without any complex consideration, which has provided a reference to reveal the failure mechanism of film-substrate system.
Experimental Methods and Materials
This experiment adopted the widely used AZ31B magnesium alloy as matrix and 316 stainless steel thin-foil as coating, wherein the thickness of film is 0.02 mm.besides, matrix and film were bonded by different bonders, which can represent different interfacial bonding strength due to the difference in viscosity.Based on these, a theoretical model of film-substrate system was established to replace the actual model.The main performance indexes of AZ31B magnesium alloy are shown in Table 1 and chemical composites of 316 stainless-steel are shown in Table 2. Before this experiment, cutting lines of AZ31B magnesium alloy were disposed into 12 samples sized 40 mm × 40 mm × 1.5 mm, polished by 100#-800# abrasive paper for metallograph, cleaned by alcohol and dried by cold-blast air to get samples with thickness of 0.5mm.This experiment established three film-substrate interface models based on the difference in bonding strength of bonders.(1) Model A: using the water layer to bond 316 stainless-steel foil and AZ31B magnesium alloy; (2) Model B: using 502 glue to bond 316 stainless-steel foil and AZ31B magnesium alloy; (3) Model C: using a double-sided tape to bond 316 stainless-steel foil and AZ31B magnesium alloy.There are respectively 4 samples in each model with a successively increased interfacial bonding strength.Besides, this experiment should ensure the good interfacial bonding status in the process of model construction and avoid vacuoles on contact surface.A black tape with thickness of 200 μm was bonded to surfaces of 316 stainless-steel foil as absorbed layer before being shocked by laser.Such a black tape is sized 3 mm × 3 mm and directed to the center of laser spot.The thickness and size of PVDF patch sensor are respectively 30 μm and 5 mm × 5 mm.PVDF patch sensor was bonded on the back of AZ31B magnesium alloy and directed to laser shock areas, wherein two ports of sensor are parallel resists with the resistance value of 50 Ω.The concrete layout is shown in Figure 1.
This experiment adopted YAG SGR-series pulsed laser (LSP parameters are shown in Table 3) and DL9140 digital oscilloscope with the frequency bandwidth of 1GHZ and maximum sampling rate of 5GS/s to collect dynamic signals detected by the patch sensor.The photodiode triggered oscilloscope to record piezoelectric signals detected by PVDF patch sensor when laser signals were detected.The four samples of each film-substrate system model were shocked once by laser with energy of 400 mj, 600 mj, 800 mj and 1000 mj.The concrete pulsed LSP parameters are shown in Table 3.The following rules can be observed form the figures: (1) The larger laser energy is in each film-substrate system model, the stronger detected piezoelectric and shockwave signals and larger vibration amplitude would be.As shown in Figure 2(a), under the effect of 1000 mj laser energy, the highest amplitude value of shockwave piezoelectric signal can up to 2.9 V and the highest amplitude value of shockwave pressure can up to 9 Mpa.However, as shown in Figure 2(d), under the effect of 400 mj laser energy, the highest amplitude value of shockwave piezoelectric signal can only up to 1.3 V and the highest amplitude value of shockwave pressure can only up to 3.8 MPa.
(2) Larger laser energy contributed to a longer time of system equilibrium state and the whole process from the signal detection to the eventual system stability was finished in microseconds.
The analysis suggested that, if strong-pulsed laser-induced shockwave passes through the film at a certain velocity and transmits in depth direction, reflection and transmission phenomena may occur on shockwave at the Film-substrate interface [9].If the film-substrate interfacial bonding strengths are the same, then the energy proportion of reflection wave and transmission wave can be deemed as the same.Hence, the larger the laser energy is, the more transmission waves transmitted to the internal matrix via film, the more obvious extrusion effect between particles of matrix materials, the stronger the piezoelectric signals detected by PVDF and the larger amplitude of fluctuation would be.After these changes, the back and forth transmission of shockwave in the film-substrate system has led to the continuous change of detected dynamic response curve.The shockwave may have reflection and superposition phenomenon on the free of film and matrix in the transmission period, while reflection, refraction, transmission and superposition phenomenon occurs at film-substrate interface.The larger the energy is, the more intense dynamic response of these phenomena would be and the longer time for reaching up to the system equilibrium state would spend.Due to movement inertia of material particles and counter-acting force between particles, the eventual quiescent time of structural response would be far beyond the laser pulse bandwidth [10] and the dynamic period may remain for microseconds.b.Delving into the influence on dynamic strain response characteristics o by laser energy PVDF piezoelectric film can perceive crosswise and lengthwise piezoelectric effect.The crosswise piezoelectric effect refers to stretching vibration effect of film surface that is parallel to piezoelectric materials, which is represented by d 31 , d 32 .The lengthwise piezoelectric effect refers to the vibration effect of material particle that is perpendicular to the film surface, which is represented by d 33 .The charge output on upper and lower surfaces of PVDF piezoelectric film satisfies the following relation [11]: In this equation, j ε is the strain (j = 1 -3), d 3j is the strain constant (j = 1 -3, C/N), EPVDF is elasticity modulus of PVDF film (N/m²) and S is the area of PVDF film (m 2 ).PVDF patch sensor in this experiment was bonded on the back of sample which directs to the center of laser shock area, so the dynamic response of materials particles perpendicular to the film surface were detected during the laser shock process, wherein the crosswise vibration effect can be neglected.Hence, it can be simplified as the following equation [12]: For quantity of electric charge transmitted by the oscilloscope during the collecting dynamic process, the following equation is met between voltage signal V(t) and transmitted quantity of electric charge Q(t) at time t: This paper united Equations (2.2) and (2.3) and put in voltage signal V(t) detected by oscilloscope to acquire curve of −t. Figure 6 is the dynamic strain curve of film-substrate system under different laser energy.The following rule can be observed from Figure 6: PVDF detected the stretching strain at first, then the stretching strain gradually lessened and the compression strain occurred along with a continuous change; the larger the laser energy is, the larger peak value of stretching strain and stretching strain of eventually generated after the system quiesce would be.
Based on the analysis, the reason why PVDF detecting stretching strain at first is that laser-induced shockwave transmission effects upon the back of sample.The compression wave achieved the mutual extrusion of material particles, so protuberant distortions of a certain amount were even generated on the back of sample and the stretching strain was detected.The reflection springback of compression waves formed stretching waves [13] and stretching strain was then released to form into compression strain during the pressure uploading period, making the strain curve transform from stretching strain to compression strain.If the laser energy is enlarged, those high-energy stress waves may tighten the compression between material particles.Therefore, the dynamic behaviors would be more severe, the curve would be more fluctuant and the peak value of stretching strain would be larger.Thanks to the attenuation of energy, the reciprocal conversion between the stretching strain and compression strain eventually reached to a status of equilibrium and the accumulative stretching strain effect would also enlarge the amount of stretching strain generated by the system.(2) The piezoelectric wave curve detected in model A is much smoother with little small-amplitude fluctuation embossment, while model C had the least smooth piezoelectric wave curve.Judging from the pressure curve, model A needed the most time to reach the equilibrium state and dynamic process, while model C needed the least time and model B fell in between the two.
Influence on Dynamic Response Characteristics of
The following conclusions are made based on the analysis of this paper.The differential properties of film-substrate materials may contribute to the reflection effect at interfacial bonding areas when laser shockwave transmitted to the film-substrate interface.A part of compression waves may be reflected as stretching waves to effect on the interfacial bonding together for action by overcoming the interfacial bonding strength.Film and matrix were bonded by water film in model A with the weakest interfacial bonding strength, so compression waves and stretching waves would generate the most compression waves energy directing to the depth of matrix after overcoming the interfacial bonding strength.Hence, it obtained the largest peak values of piezoelectric waves and pressure waves detected by PVDF.In a similar way, the bonding strength of film and matrix bonded by double-sided tape in model C was the strongest and they needed to overcome the most bonding strength for acting, so it obtained the least peak values of piezoelectric waves and pressure waves detected by PVDF.Model B fell in between the two.The piezoelectric wave curve is a reflection after the comprehensive coupling of shockwaves transmitted, reflected by the interface and rebounded by the control layer.Model A had the strongest shockwave after overcoming the interfacial bonding strength and because other waves had little impact on its loading and unloading, so the piezoelectric wave curve was much smoother.At the same time, model A had the largest pressure after the pressure coupling, the most intense dynamic response, the slowest attenuation and longest time to reach the state of equilibrium.And for model C, it had the weakest shockwave pressure after overcoming the interfacial bonding strength and it vulnerable to the impacts from the coupling of other waves during the transmission process, so it had the more abrupt piezoelectric wave, faster curve attenuation and shortest time to reach the state of equilibrium.Model B fell in between the two.
b. Delving into the influence on dynamic strain response characteristics by interfacial bonding strength Figure 12 is the dynamic strain curve under different interfacial bonding strength and the same laser energy.It can be seen from the figure that under the same laser energy effect, model A had the largest variable quantity of stretching strain generated after the system quiesce, C had the smallest and B fell in between the two.The analysis suggested that it may have something to do with energy carried by the stress wave.The interfacial bonding strength was the weakest when the water film was used to bond the film and matrix.The stress wave needed little energy to overcome the interfacial bonding strength, so more stress wave energy tightened material particles more tense.Hence, model A with the lowest energy consumption had the largest strain variable quantity, while model C with the highest energy consumption had the smallest strain variable quantity.
Conclusions
(1) The larger the laser energy is, the stronger the detected piezoelectric and shockwave signals, the larger the vibration amplitude and the longer for the system to reach the equilibrium state would be.The process from signal detection to the eventual system stability can be finished in microseconds.
(2) PVDF detected the stretching strain at first after the effect of laser, then the stretching strain gradually lessened and the compression strain occurred with a continuous change; the larger the laser energy is, the larger peak value of stretching strain and stretching strain of eventually generated after the system quiesce would be.
(3) Under the same laser energy effect, the model using a water film as the bonder has the largest peak values of both piezoelectric and pressure waves, smoother piezoelectric wave curve with little small-amplitude fluctuation embossment and longest time for system to reach the equilibrium state.The model using a double-sized tape as the bonder has the smallest peak values of piezoelectric and pressure waves, the least smooth piezoelectric wave curve with little small-amplitude fluctuation embossment and shortest time for dynamic process.And the model using the 502 glue as the bonder falls between the two.The model using a water film as the bonder has the largest variable quantity of stretching strain and the model using a double-sized tape as the bonder has the smallest after the system quiesce.
Figure 6 .
Figure 6.Dynamic strain curve of three kinds of models under different laser energy: (a) A model; (b) B model; (c) C model.
Figure 7 .
Figure 7. Voltage signal of three models under 1000 mj laser energy: (a) A model; (b) B model; (c) C model.
Figure 8 .Figure 9 .
Figure 8. Voltage signal of three models under 800 mj laser energy: (a) A model; (b) B model; (c) C model.
Figure 10 .Figure 11 .
Figure 10.Voltage signal of three models under 400 mj laser energy: (a) A model; (b) B model; (c) C model.
dynamic pressure signals detected by PVDF occurred in model A, B was followed by A and C had the minimum peak values of piezoelectric and dynamic pressure signals.
Table 1 .
Main performance indexes of AZ31B magnesium alloy.
Experimental Results and Analysis 3.1. Influence on Dynamic Response Characteristics of Film-Substrate System by Laser Energy
a. Delving into the influence on dynamic response characteristics of shockwave by laser energy Figures 2-4 are voltage signals of model A, B and C under different laser energy.Figure 5 is pressure curve of shockwave after transformation. | 2018-12-02T19:31:23.219Z | 2016-03-09T00:00:00.000 | {
"year": 2016,
"sha1": "57d7367dad96533b459a360112d3652cb37846df",
"oa_license": "CCBY",
"oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=65049",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "57d7367dad96533b459a360112d3652cb37846df",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
119072708 | pes2o/s2orc | v3-fos-license | Superconductor-insulator transition in fcc-GeSb2Te4 at elevated pressures
We show that polycrystalline GeSb2Te4 in the fcc phase (f-GST), which is an insulator at low temperature at ambient pressure, becomes a superconductor at elevated pressures. Our study of the superconductor to insulator transition versus pressure at low temperatures reveals a second order quantum phase transition with linear scaling (critical exponent close to unity) of the transition temperature with the pressure above the critical zero-temperature pressure. In addition, we demonstrate that at higher pressures the f-GST goes through a structural phase transition via amorphization to bcc GST (b-GST), which also become superconducting. We also find that the pressure regime where an inhomogeneous mixture of amorphous and b-GST exists, there is an anomalous peak in magnetoresistance, and suggest an explanation for this anomaly.
Introduction
GeSb2Te4 (GST) is a phase-change-material, whose unusual physical properties [1,2,3,4,5,6] promise many potential applications in the electronics industry [7,8,9,10]. One of the newly discovered properties of the GST is the emergence of the superconductivity under elevated pressures [11]. In our previous highpressure study of GST [11], superconductivity was observed in amorphous GST (a-GST), orthorhombic GST (o-GST) and in bcc GST (b-GST). In addition we have demonstrated [11] that hexagonal GST remained in the normal state for the entire range of available temperatures and pressures. However, the transport properties of fcc GST (f-GST) at elevated pressure and at low temperatures remained unexplored. This paper is devoted to the study of the properties of GST material in the fcc phase at high pressure and low temperatures. We demonstrate that f-GST undergoes a superconductor to insulator transition (SIT) at low temperatures when the pressure is applied as an external control parameter. We find that the superconducting transition temperature vanishes linearly with pressure, while the GST remains in the f-GST phase, strongly suggesting a second-order quantum phase transition (QPT) with a critical exponent close to unity. The observed appearance of superconductivity is preceded by a significant change in the normal state resistance of the samples by a few orders of magnitude. Furthermore, we demonstrate that superconductivity with somewhat higher Tc appears at higher pressures, when b-GST starts to form. In the region where these two phases coexist, an anomalous behavior of the magnetoresistance is observed, whereby a sharp resistance peak appears in the vicinity of the upper critical field. We suggest an explanation for this behavior.
Experimental
In our transport and XRD experiments, we have used the following procedure for the preparation of f-GST samples. Initially, the few micron thick GST films were sputtered from a commercial target of h-GST (hexagonal GeSb2Te4). As we reported earlier [11], the films sputtered onto a room temperature substrate are amorphous (a-GST). An atomic composition and morphology of the as-prepared a-GST film was checked by scanning electron microscopy (SEM), energy dispersive X-ray spectroscopy (EDS) and X-ray photoelectron spectroscopy (XPS) analysis [11]. The annealing of the sputtered films at 146 ℃ causes the transformation of the a-GST into an fcc polycrystalline phase. An X-ray diffraction (XRD) analysis confirming the formation of the fcc phase of the annealed films is shown in Fig. 1(a) at 0 GPa. Finally, a powder of f-GST was prepared by the mechanical removal of the f-GST film from the substrate. Pressure was exerted using miniature diamond anvil cells (DACs) [12] with diamond anvil culets of 250 µm. A pre-indented stainless-steel or rhenium gasket was drilled and then filled and covered with a powder layer of 75% Al2O3 and 25% NaCl for electrical insulation. The powder of f-GST was placed onto the culets. A Pt foil with a thickness of 5-7 m was cut into triangular probes connecting between the sample and copper leads allowing the electrical transport measurements at elevated pressures. In each DAC 6-8 probes were placed. Fig. 2(a) depicts a setup of 6 Pt foils (bright areas) between the diamond and the sample (dark areas) in a four-probe configuration. Ruby was used as a pressure gauge.
Electrical transport measurements were performed using a 4 He cryostat. The sample was compressed up to 44 GPa in increments of 2 GPa on average, and cooled down from ambient temperature down to 1.4 K. After each pressure increment a temperature cycle was performed.
Synchrotron XRD measurements of f-GST powder were performed at room temperature up to 47 GPa at the beamlines 13ID-D and 13-BM-C of APS (Argonne, IL, USA), with wavelengths of = 0.3738 and 0.434 Å, respectively, in angledispersive mode with patterns collected using a MAR CCD detector. The image data were integrated using DIOPTAS [13] and the resulting diffraction patterns were analyzed with the GSAS+EXPGUI [14,15] program. XRD data at ambient temperature and pressure have been collected in symmetric Bragg-Brentano geometry with CuKα radiation ( = 1.5406 Å) on Bruker D8 Discover Θ:Θ X-ray diffractometer equipped with one-dimensional LynxEye XE detector.
Experimental results of XRD and transport studies
We start the description of our experimental results with XRD and transport studies at room temperature. As depicted in Fig. 1 Fig. 1(a). The observed amorphization as well as the formation of the bcc phase are consistent with previously reported results [16]. We would like to note that the appearance of the intermediate orthorhombic phase reported in [16] was not detected in our data. The change in density versus pressure which is depicted in Fig. 2(c) is fitted to the second order Birch-Murnaghan (BM2) equation of state (EOS) [17] with the extracted parameters indicated in the figure labels. As depicted in Fig. 2(b), the room temperature resistance of f-GST drops very sharply (by more than 2 orders of magnitude) as a result of the application of just a few GPa. This sharp decrease is followed by a more moderate drop of one order of magnitude as a result of compression of the sample to about 8 GPa. For pressures above 8 GPa, the resistance remains roughly constant. We would like to emphasize that the resistance drop is not accompanied by any crystallographic change as already mentioned above (Fig. 1). The observed slight increase in resistance (by a factor of 2) corresponds to the pressure range where the amorphization is observed, namely coinciding with the region between f-GST and b-GST. For pressures above 25 GPa the value of the resistance remains roughly constant. We now present the results of our transport measurements at low temperatures. The resistance versus temperature at pressures between 1-6 GPa reveals a clear SIT type behavior in f-GST, as depicted in Fig. 3. One can clearly see the large resistance change during the transition from an insulating state at 1.0 GPa to the full superconducting state at 6.0 GPa. It is also evident that the onset of superconductivity appears at 4.0 GPa. Fig. 2(b) in purple region). Throughout the paper, the definition for critical temperature is that of the temperature at which the resistance equals half of the normal state resistance immediately above the transition. The superconducting critical temperature Tc increases roughly linearly from 1.8 K at 4.0 GPa to 5.8 K at 10.4 GPa. The linear dependence is emphasized by the straight dashed trend line in Fig. 4(b), which extrapolates to zero temperature at Pc,0=3.1 GPa. As will be discussed below, this linear dependence is expected from a Ginzburg-Landau type mean field theory. Upon further increase of the pressure, in the range between 12 GPa and 27 GPa, the superconducting transition temperature shows saturation, as can be inferred from Fig. 5(a). In this pressure range, the pressure induced amorphous phase forms (Fig. 2) and one may expect the coexistence of f-GST and the amorphous phase due to some pressure inhomogeneity inside the pressure cell. The fact that there is only a single transition in the curves for this pressure range, along with the moderate increase in Tc up to 27.0 GPa, suggest that the pressure induced amorphous phase is similar to f-GST, at least in its superconductivity properties. This assumption is in good agreement with recent theoretical simulations [18], which show a formation of an amorphous structure of cubic framework for GST at pressures above 18 GPa (hereafter referred to as a'-GST), characterized by the collapse of long range order, formation of homopolar bonds and slight increase of coordination numbers. Furthermore, according to [19], at ~27 GPa strong distortions in the crystal structure are observed resulting in formation of bcc phase. These results correspond well with our experimental observations. For pressures exceeding 27.0 GPa, two distinct transitions appear in the resistance vs. T curves, as shown in Fig. 5(b). This signifies the appearance of the b-GST phase, in accordance with the XRD data ( Fig. 2(b) in blue region). These double transitions can be interpreted as coexistence of a'-GST with b-GST, both being superconductors at different temperatures. The observed coexistence of both phases throughout a wide range of pressures is most probably due to inhomogeneous pressure distribution inside the cell (Al2O3+NaCl is considered a poor pressure medium relative to Ne which is used for XRD measurements). We associate the higher Tc value with b-GST, since it is apparent that the critical temperature of a'-GST has already been saturated at about 6.6 K and the higher value for b-GST is consistent with our previously reported results for this phase [11].
FIG 5. Superconductivity transitions of (a) a'-GST and (b) a'-GST and b-GST mixture. The double transitions observed in (b) are interpreted as a mixture of a'-GST and b-GST.
A summary of the critical temperature dependence on pressure results in the T-P phase diagram shown in Fig. 6. The two distinct critical temperatures are deduced from the analysis of Fig. 5(b), where the Tc for each phase is defined by the mid-value of the corresponding resistance drop. The appearance of the double transitions is accompanied by the observation of anomalous magnetoresistance at T= 4.2 K at different pressures, as shown in Fig. 7. Fig. 7(a) reveals that for pressures below 29.0 GPa the magnetoresistance behaves as expecteda distinct normal transition from the superconducting state to the normal state for all pressures for which Tc>4.2 K, with a well-defined upper critical field Hc2. However, at higher pressures where a considerable fraction of the sample transforms into b-GST, we observe an anomalous behavior, where the resistance sharply increases above the normal state resistance, followed by a drop to its normal value ( Fig. 7(b)). This peak starts appearing at 35.0 GPa, becomes most-pronounced at 36.0 GPa (where the peak reaches 1.5 times the value of the normal state resistance), and then gradually decreases, practically disappearing at 43.3 GPa, where the entire sample is probably in a single b-GST phase.
Discussion and Analysis of the results
We now turn to the analysis of our experimental findings. Let us start with the linear increase of for pressures immediately above the zerotemperature critical pressure, ,0 = 3.1 GPa (Fig. 4(b)). In Ginzburg-Landau theory [20], in the absence of a magnetic field, the superconducting part of the free energy density can be expanded near the transition to 4th order in the order parameter , = ( , )| | 2 + ( , )| | 4 , where the coefficients and are now not only functions of the temperature , but also of the pressure . As usual, > 0 to ensure the finiteness of | | at the minimum, hence its exact and dependence is irrelevant near the transition. As for , it is positive in the normal phase, and negative in the superconducting phase. Since it vanishes at the transition, in its vicinity it can be expanded to linear order in temperature and pressure, ( , ) ∼ + + = ( − ( )), where > 0 (as usual), and furthermore, ( ) = −( / )( − ,0 ) with ,0 = − / (and hence < 0, > 0). Thus, in Ginzburg-Landau Theory should indeed be a linear function of close to the zero-temperature critical pressure ,0 . While this is a mean-field prediction, Ginzburg-Landau theory is known to give a good quantitative description of the superconducting transition in 3D, due to the typical extreme smallness of the Ginzburg number. Moreover, in the vicinity of the quantum critical point at = 0, = ,0 , the system is effectively 4-dimensional (counting also the time axis), and mean-field theory becomes an even better approximation [21].
Let us now turn to the anomalous peak in magnetoresistance ( Fig. 7(b)). It occurs at the pressure range where b-GST starts to form. In this range, as we already mentioned, we observe a double transition as a function of temperature at zero field, indicating the coexistence of the a'-GST and b-GST phases. It is reasonable to associate the appearance of the anomalous magnetoresistance with the formation of an inhomogeneous mixture of these two phases. One possible scenario for such an anomalous peak in the magnetoresistance has been discussed in theoretical papers [22,23,24] trying to explain the huge magnetoresistance peak observed in superconductor thin films of InO [25,26] and TiN [27,28]. In these models, the experimental system is viewed as a 2D array of Josephson-coupled superconducting islands at zero magnetic field. It is argued that such a system possesses a highly resistive state when the magnetic field is large enough to suppress the coherence between the islands, while not being large enough to destroy the superconductivity in each island. Although it is possible that the anomalous MR observed in our 3D system has a similar origin, there is an alternative explanation which might be more relevant to our system. In a system where two structural phases coexist, there should exist a range of magnetic fields where one phase is superconducting while the other is normal. The finite superconducting gap suppresses the transmission of quasi-particles between the superconducting and normal regions at low temperature. On the other hand, Andreev reflections are still allowed. In this process Cooper pairs are transmitted into the superconductor while the electrons are reflected as holes into the normal phase. However, when the transparency of the interface between the phases is low, the tunneling probability of pairs is strongly suppressed [29]. This implies that the resistance of a percolating phase in the normal with non-percolating islands of a different phase might be larger when these islands are superconducting (intermediate magnetic fields) than when the islands are normal (high magnetic fields). Now, in our samples we observe two superconducting transitions as a function of temperature at zero field and in the pressure range of 29-40 GPa ( Fig. 5(b)), it is reasonable to assume that the b-GST, which has a higher transition temperature, and thus presumably also a higher critical field, is not percolating between the contacts, since otherwise there would be only one transition, when -GST becomes superconducting. Therefore, the anomalous MR observed in our samples for some pressure values in the abovementioned range can be explained as follows. For low magnetic fields, both percolating a'-GST regions of the sample and isolated islands of b-GST are in the superconducting state, resulting in the zero resistance of the sample. When the upper critical field of a'-GST is approached, the resistance starts to rise and reaches the values above the normal state resistance, since the b-GST remains in the superconducting state. The sample resistance starts to decrease towards the normal state resistance only after the superconductivity is destroyed in the b-GST islands, namely when the upper critical field of b-GST is reached. 8
Conclusions
To summarize, we demonstrated that the polycrystalline GeSb2Te4 in fcc phase becomes a superconductor at elevated pressure. The linear variation of the superconductor transition temperature versus pressure indicates a secondorder quantum phase transition. The linear extrapolation to zero temperature gives the value of the quantum critical pointthe critical pressure of ,0 = 3.1 GPa. In addition, we demonstrate that at higher pressures the f-GST goes through structural phase transition via amorphization to b-GST, with all phases exhibiting superconductivity. We also provided a possible explanation for the peak in magnetoresistance observed in the pressure range where inhomogeneous mixture of a'-GST and b-GST is present. | 2019-04-13T19:07:57.372Z | 2017-09-06T00:00:00.000 | {
"year": 2017,
"sha1": "74420a89a3082982103db31905597b07670f7c9b",
"oa_license": "publisher-specific, author manuscript",
"oa_url": "https://link.aps.org/accepted/10.1103/PhysRevB.97.024513",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "74420a89a3082982103db31905597b07670f7c9b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
6157130 | pes2o/s2orc | v3-fos-license | EPIREVIEW MENINGOCOCCAL DISEASE IN NEW SOUTH WALES , 1991 – 2002
Meningococcal disease is caused by invasive infection with the bacteria Neisseria meningitidis. Humans are the only natural reservoir for N. meningitidis, 5–10 per cent of whom have naso-pharangeal colonisation of the bacteria at any given time. The bacteria are transmitted between people by secretions from the naso-pharynx. Disease occurs in rare instances when a virulent strain of the bacteria invades through the naso-pharynx. Disease can present in a variety of syndromes, usually meningitis and/or septicaemia, and more uncommonly pneumonia, otitis media, septic arthritis, urethritis, and purulent pericarditis.
In 1999, the United Kingdom was the first country to introduce a large-scale national immunisation program for serogroup C meningococcal disease.At the time, concerns were raised regarding the potential effects of decreasing the incidence of serogroup C, and the potential for 'serogroup switching' by the bacteria, thereby causing an increase in serogroup B infections.However, subsequent studies have not shown a significant increase in 'serogroup switching', in the United Kingdom or elsewhere. 8,9tablishing the endemic incidence of meningococcal disease in NSW, prior to the introduction of the national vaccination program, will allow future analysis of changes in meningococcal epidemiology, and to detect potential trends in the incidence of various serogroups.The epidemiology of meningococcal disease notifications in NSW between 1991 and 1999 has previously been reviewed. 10This article presents previously unpublished findings for that period, as well as a comparison with meningococcal disease notifications for the years 2000-2002.
METHODS
In NSW, a case of meningococcal disease is defined according to national guidelines. 2 Case definitions changed in the late 1990s, with the acceptance of nucleic acid test methods and serology as evidence of infection.We analysed data for cases of meningococcal disease from the statewide database for the years 1991 to 2002. 11The characteristics of cases notified for the years 2000, 2001, and 2002 were compared with cases notified for the period between 1991-1999.
Cases were analysed by year of onset, place of residence, gender, age group, indigenous status, disease syndrome (meningitis-septicaemia), serogroup, disease outcome, and diagnostic method.The analysis for the age of cases reflected the anticipated distribution of the disease in the population.Consequently, cases aged less than five years were analysed by year of age, cases aged between 5-24 years in 5-year age bands, cases aged between 25-64 in 20-year age bands, and the remainder of the population 65 years and over were included in one age group.Place of residence was categorised by the 'Greater Sydney' area health services and the 'Rural NSW' area health services.The Greater Sydney category covered all the major urban areas in NSW and included the Sydney and Central Coast Area Health Services, and the Hunter and Illawarra Area Health Services.Factors associated with the death of cases were examined, but this examination was restricted to notifications between 1997 and 2002 because the data was not complete for preceding years.
Descriptive analysis was performed using the statistical programs SAS and Microsoft Excel 2000 Version 9.The relative risk of death was calculated for the period 1997 to 2002 using the epidemiological software Epi Info version 6.04d.We used the Health Outcomes Information Statistical Toolkit (HOIST), maintained by the Centre for Epidemiology and Research of the NSW Department of Health, to calculate crude incidence rates using Australian Bureau of Statistics year-specific mid-year population data for NSW, 11 and rates for Aboriginal and Torres Strait Islander people using Australian Bureau of Statistics population estimates for 2001. 12For cases aged less than five years, crude incidence rates for 2001 and 2002 were calculated using mid-year population estimates for the year 2000. 11
Incidence
From 2000 to 2002, 693 cases of meningococcal disease were notified, which represents an average of 231 cases per year and a crude incidence rate of 3.5 per 100,000 people.This incidence is considerably higher than for the previous study period (1991 to 1999), when an average of 160 cases (2.6 per 100,000) were notified each year (Tables 1 and 2).Annual peaks of notifications occurred consistently during winter and spring (Figure 1).
Serogroup
From 2000 to 2002 serogroup B notifications were almost twice as common as serogroup C; the incidence of serogroup B was 1.5 cases per 100,000 population (n=288) and for serogroup C incidence was 0.8 per 100,000 (n=155).These rates are higher than for the previous study period 1991-1999 (Table 1).The proportion of meningococcal disease notifications due to an unknown serogroup was substantially lower in 2000-2002 (34 per cent of cases) than for 1991-1999 (63 per cent of cases).
Age and Serogroup
From 2000 to 2002, the highest notification rates occurred in children aged less than one year (34.4 per 100,000).In the same period, children aged 1-4 years had an annual average rate of 11.0 per 100,000 people and adolescents 15-19 years had an annual average rate of 9.0 per 100,000.
From 1991 to 1999 the age distribution of cases was similar. Between
Sex
In 2000-2002, 54 per cent of notifications were male, this was similar to the previous study period (1991-1999). 10
Place of residence
In rural NSW, for 2000-2002, the rate of meningococcal disease notifications (3.4 per 100,000 people) was similar to that for Greater Sydney (3.5 per 100,000 people).In rural NSW between 1991-1999, the notification rate was slightly higher (2.9 per 100,000) than in Greater Sydney (2.3 per 100,000).
Aboriginal and Torres Strait Islanders
The annual average rate of meningococcal disease notifications among Aboriginal and Torres Strait Islander people was 8.9 per 100,000 in 2000-2002, compared with 7.0 per 100,000 in 1991-1999.Almost half of the notifications in 2000-2002 were serogroup B (n=15), 25 per cent were serogroup C (n=8), and the remaining cases were due to an unspecified serotype.
Diagnostic method
Laboratory confirmed cases comprised 84 per cent (n=585) of all notifications between 2000-2002, and 61 per cent (n=880) in 1991-1999.Bacterial culture remains the most common laboratory method used to diagnose the disease, with the use of serological and nucleic acid (for example, polymerase chain reaction or PCR) techniques steadily increasing in recent years (Table 1).
Syndrome
In 2000-2002, 38 per cent of cases (n=262) were reported to have meningitis, 40 per cent septicaemia (n=280) and for 22 per cent (n=151) the nature of their presentation was not specified.Overall, septicaemia was the most common presentation for cases less than 15 years of age (46 per cent), above 65 years of age (55 per cent), and in males (43 per cent).Meningitis was the most common syndrome for cases between 15 and 64 years of age (42 per cent).The incidence of meningitis and septicaemia in serogroup B disease was similar.Septicaemia was the most common presentation in serogroup C disease, occurring in 48 per cent (n=75) of reported cases, compared to 32 per cent (n=49) with meningitis.
Incidence
Between 2000-2002, 40 deaths due to meningococcal disease were reported, which represents 5.8 per cent of all cases for this period.There were no deaths reported of indigenous cases.The proportion of cases that died was generally higher in: males; older adults; those from rural NSW; cases with serogroup C infections; and cases with septicaemia (Table 1).
Between 1997-2002, there were no significant associations between the death of cases, and their sex or place of residence.J u l -9 1 J a n -9 2 J u l -9 2 J a n -9 3 J u l -9 3 J a n -9 4 J u l -9 4 J a n -9 5 J u l -9 5 J a n -9 6 J u l -9 6 J a n -9 7 J u l -9 7 J a n -9 8 J u l -9 8 J a n -9 9 J u l -9 9 J a n -0 0 J u l -0 0 J a n -0 1 J u l -0 1 J a n -0 2 J u l -0 2 Notification data for meningococcal disease are limited in their scope.Information describing the various risk factors associated with developing disease is not collected.A close correlation between notification and hospitalisation data suggests that notifications are a good estimate of incidence since the degree of underreporting of cases is very low. 10 The epidemiology of meningococcal disease in Australia has been described previously, 13 and the national surveillance program reports annually. 14Perhaps the most notable difference between NSW and several other Australian states is that meningococcal B disease is the most common presentation in NSW. 14 Exposure to tobacco smoke has been identified as a risk factor for developing meningococcal disease, and may play a role in a third of cases. 4While reducing exposure to tobacco smoke is an effective public health strategy to control the incidence of meningococcal disease, vaccination is likely to have a more immediate affect.
The ongoing identification and reporting of the serogroups responsible for meningococcal disease cases by the surveillance system is of particular importance following the introduction of meningococcal C immunisations.Ongoing monitoring of the epidemiology of the disease is essential to measure the effectiveness of the vaccination program, and to detect any trends in capsular switching that may be promoted, which may increase the incidence of serogroup B notifications.
NSW has begun an enhanced surveillance program for meningococcal disease.This program seeks to improve the completeness and quality of the case data, and collect data on a wider range of risk factors and outcomes than previously gathered.
Early diagnosis and treatment is thought to reduce the risk of death.However, the data available for this analysis are limited, and likely to represent the experience of the severe end of the disease spectrum.More detailed investigations would assist the interpretation of these findings, such as enhanced surveillance to determine the influence of known risk factors and the spectrum of disease severity.Describing the long-term sequelae of meningococcal disease for patients may increase our understanding of the affect of this disease on the NSW population.
CONCLUSION
This study used surveillance data to describe the epidemiology of meningococcal disease in NSW, and to identify groups at increased risk of infection and mortality.Surveillance data can be used to compare the epidemiology of meningococcal disease before and after the introduction of the meningococcal C vaccination program.
1 )
Notes: * For purposes of this analysis Greater Sydney includes the Sydney and Central Coast Area Health Services and the Illawarra and Hunter Area Health Services.Source: Communicable Diseases Branch, NSW Notifiable Diseases Database (HOIST).Centre for Epidemiology and Research, NSW Department of Health.
FIGURE 1 NUMBER
FIGURE 1 NUMBER OF NOTIFICATIONS OF MENINGOCOCCAL DISEASE BY MONTH OF ONSET, NSW, 1991-2002
TABLE 2 NOTIFICATIONS OF MENINGOCOCCAL DISEASE, NSW, BY YEAR 1991-2002, ANNUAL AVERAGE RATE, AND CASE FATALITY RATE Year Cases % total Average Deaths Case annual rate fatality per 100,000 rate
Communicable Diseases Branch, NSW Notifiable Diseases Database (HOIST).Centre for Epidemiology and Research, NSW Department of Health.Notes: For purposes of this analysis Greater Sydney includes the Sydney and Central Coast Area Health Services and the Illawarra and Hunter Areas Health Services. Source: | 2017-09-06T10:41:26.729Z | 2004-03-01T00:00:00.000 | {
"year": 2004,
"sha1": "e7fe70b304b85617cc9ca0cc0497343fde531eda",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.phrp.com.au/wp-content/uploads/2014/10/NB04011.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e7fe70b304b85617cc9ca0cc0497343fde531eda",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247067599 | pes2o/s2orc | v3-fos-license | A quintessence dynamical dark energy model from ratio gravity
Based on the work of ratio gravity developed in 2018, which postulates the deformation of the cross ratio to associate with the physical model of gravity, we develop a mechanism to generate dynamical dark energy - a quintessence field coupled with gravity. Such model causes the dark energy behaving differently in early and late time universe. In the radiation-dominated-era and matter-dominated-era, the related analytical solutions of the quintessence field have an interesting property - starting as a constant field, then oscillating as the universe expands. By Markov Chain Monte Carlo search of the parameter space with the local measurement (Type Ia supernovae) in the Bayesian framework, the probed range of {H_0} (within 1{\sigma}) overlaps the {H_0} value inferred from Planck CMB dataset by {\Lambda}CDM model.
I. INTRODUCTION
The accelerated expansion of the universe was discovered in 1998 by the observation of supernova [1]. The puzzle of dark energy is one of the greatest problems in cosmology. Theorists propose different explanations; for example, a simple one is the cosmological constant, Λ, of the ΛCDM model, which is a widely accepted model because it is part of the theoretical framework of general relativity, GR, and consistent with the observations in great detail [2,3]. Alternative theories such as scalar field model of dark energy [4] are also compelling cosmological models.
The measurements of the early universe and the local measurements (i.e. late-time universe) [3,5] probe the Hubble constant for different values with increasing accuracy so both results seem contradict, which is called the Hubble tension. It leads to many active research because of the potential implication for new Physics for our understanding of the gravity and cosmology [6].
There are several possible explanations to the tension such as statistical fluke, or the emerging spatial curvature effect from the cosmological model of the relativistic and nonlinear effect [7]. The study of the Planck data [8] shows that, one of possible solutions to the Hubble tension, is that the equation of state of parameter of the dark energy, w, is not equal to -1. Another proposed resolution is the early dark energy model, EDE. EDE models drive the expansion of the early universe (usually in radiation-dominated-era and/or matter-dominated-era), so that the expansions of the universe in the early era and late time era behave differently. In the work of Refs. [9,10], one of the proposed axion models initially freezes at constant field value, then evolves to oscillate after the critical redshift so the EDE effect drives the early and late time of the expansion periods differently. Poulin et al. [10] analyse the models against the Planck CMB dataset * chjliu@connect.ust.hk and Type Ia supernovae dataset to show that the Hubble constant, H 0 , probed by the model is consistent with the H 0 inferred by Planck CMB dataset. Furthermore, another proposed dark energy model is the acoustic dark energy model [11]. It is related to the dark fluid that a scalar field converts its potential energy to kinetic energy during the matter-radiation equality.
The theory of ratio gravity, RG theory, is a newly developed theory [12] that postulates the deformation of the cross ratio to associate with the physical model of gravity in the framework of Newman-Penrose formalism [13]. In the present work, we deploy a different approach from Ref. [12] that we derive the physical models of fermion and scalar fields from the core equations of the RG theory in sections 2 and 3. We develop a generic framework to obtain the scalar field, which has the property of the symmetry breaking for vacuum expected value that is similar to the ordinary φ 4 theory (in section 3).
In this paper, we explore the interesting property of a quintessence dynamical dark energy model originated by the work of ratio gravity. We study the prediction of the model in the the second part of the paper accordingly.
In section 4, by considering the scalar field as the quintessence dark energy model with CDM, we show the correspondence to the ΛCDM model in radiationdominated and matter-dominated eras. In such eras, we found the related analytical solutions of the field. The solutions have an interesting property -starting as a constant field, then oscillating as the universe expands. The models of axion dark energy [9,10] suggest similar scenario: the axion starts with "frozen" phase then transits to the oscillating phase.
In the last section, we perform the Markov Chain Monte Carlo search for the parameters of the qCDM model -quintessence and CDM model -with the Pantheon dataset -1048 Type Ia supernovae (SNe Ia) [14]. The probed Hubble constant is approximately equals to 67±4 km/s/Mpc. Although we probe the model parameters by the dataset of late time universe, the probed range of H 0 (within 1σ) surprisingly overlaps the H 0 deduced from ΛCDM by Planck CMB dataset [3]. Due to the limited data analysis in this work, we make no conclusion to the possibility for resolving Hubble tension by this model. Additional data analysis with more dataset such as BAO is recommended in the future work.
II. INTRODUCTION TO THE FRAMEWORK
In this section, we introduce the principle of ratio gravity from the previous work [12] and two core equations used throughout this paper. We develop a new framework that relies on the basic principle of Ref. [12], while modify the interpretation of the connection to gravity. We explain the difference at the end of this section.
The definition of the cross ratio over Riemann sphere is: where z 1 , z 2 , z 3 are complex numbers of the poles, and z is the reference point over the Riemann sphere. A cross ratio consists of many equivalent representations while represents the same value. The arbitrariness of the same cross ratio allows four degrees of freedom because only four free parameters for three movable poles. By Ref. [15], one of the representations of the cross ratio is the hypergeometric differential equation of three regular singular poles. Since the hypergeometric differential equation can be written as a second order linear differential equation in two-by-two matrix form and one can express it as the integrable system [16]: We can further introduce the gauge transformation to Y and B matrices with re-definitions of B matrices to yield: where and Λ µ are trace-less twoby-two Hermitian matrices. Eq.
The D operator obeys Leibniz rule for derivation. One can further transform Y equation by tensoring a Hermitian map to yield the form: Eq. (2) is the original form of Y equation in Ref. [12] with spinor index ab. Note that the Hermitian map can be associated with the metric components according to Newman Penrose formalism (NP formalism [13]). The D operator can be defined in more general way as the form D ν = f νµ ∂ µ 1 as long as it is one-to-one corresponding to the D operator of Y equation (1) by associated automorphism. In this paper, we use capital index, e.g. A, to denote the index abstractly to reserve the generalization for D operators. The gauge transformation of Eq. (1) allows generating different representations of the Y equation trivially so it is an automorphism -the transformed Y equation and the original Y equation are in the same space (i.e. the same mathematical structure). Such automorphism allows we describe the same cross ratio with different equivalent representations.
Galois transformation is introduced to provide another form of transformation that respects the automorphism [12]. The Galois transformation is defined by a Galois operator,ρ, that obeys: Eq. (3) is called the Galois equation. The definition of the Galois transformation and Galois equation originate from the Galois differential theory 2 [16] -the theory studies the Galois groups of the differential equations. In the context of ratio gravity, we focus on how Galois equation provides the transformation of Y equation and the related automorphism, so it requires no intensive knowledge of Galois differential theory.
In section 3, we derive the equation of motion from Y equation and introduce a generic Galois operator that solves Galois equation and leads to the related scalar field equation.
Unlike the previous work [12], we use original framework of General Relativity instead of NP formalism. In previous work, the NP formalism is connected to set of Galois equations via the introduction of Bianchi constraints.
In this new framework (section 3), we first find the equation of motion of Y equation (1), and the related scalar field equation(s) from Galois equation (3) to define the associated Lagrangian of matter -L m . Then, as the ordinary treatment of general relativity, we consider L m as the source of gravity to define gravitational energy momentum tensor [17], i.e. using Einstein equation as the constraint equation to fix the degree of freedom of the metric.
In the context of RG, we use both Galois transformation and the continuous transformation of the metric elements by general relativity (GR) to find the space of transformed Y equations, i.e. associated cross ratio representations. Note that the automorphism in the context of RG is not the same as the one in GR context -the general covariant transformations defined as the automorphisms of fibre bundles; RG requires the automorphisms applying to the space of Y equations, i.e. the cross ratio representations.
III. Y -FERMIONS AND THE VACUUM
In order to find the equation of motion of Y equation (1) and the associated Galois equation (3) in this section, we apply the gauge transformation to Eq. (1), make use of Dirac equation, and find the related Galois equations. In the middle and last parts of this section, we explain the interpretation of the equation of motion of Y equation and related Galois equation in the context of quantum field theory. The purpose of this section is to define the physical models in the RG context under the framework of Lagrangian.
By applying a gauge transformation to Eq. (1), one of the four components of Y matrix can be gauged out because there are 3 degrees of freedom of the gauge in SU2. Therefore, there are four possible cases to choose the zeros of the components of Y matrix. We classify them as four categories: CAT 1→4 of Y matrix as follow * 0 * * , In order to define the equation of motion by the eigen solutions of Eq. (1), we introduce the parameterization to B and Λ matrices: where we use matrix structure of sl 2 algebras (ê,f ,ĥ), p aµ are dimensionless parameters, and a Φ are complex functions. We define y for the column matrix of three dimensions to represent the non-zero components of Y matrix. We can re-write the Y equation for CAT 1 → CAT 4 as the following form: where P µ are the three-by-three matrices. There are constraints of { a Φ} needed to be satisfied to obtain Eq. (6); for instance, the explicit form of P µ matrices for CAT 1 is where m y is the mass coupling, L = L(p)l, R = R(p)r and l * are the doublet of first and second eigen-vectors, and r is the third eigen-vector of y respectively 4 , and we denote φ as the doublet form of {φ 1 , φ 2 }. We naturally define the equation of motion of y as the equation of motion for Y fermion. The equation of motion cannot be solved because the value of φ are not constrained, so we rely on Galois equation Eq. (3) to fix it next.
In the context of quantum field, we interpret that the excitation of multiple Y fermions by the Dirac Lagrangian associated with Eq. (8) is corresponding to the set of multiple representations of the related Y equation. It is merely the interpretation to relate the context of quantum field from the RG theory's perspective.
Cassidy [16] defines the Galois map (automorphism π) to transform as π : x → y, where x and y are elements of the space constructed by Y matrix and the derivatives of Y matrix, and π∂ = ∂π as Eq. (3). We define a generic Galois operator,ρ, similarly:ρ := X A D A , wherê ρ satisfies Galois equation, i.e.
and X A are two-by-two-matrix functions because the operator D acts on two-by-two matrices. Certainly, one can define a more complicated Galois operator (e.g. higher derivative operator) with the potential cost of less solvability of Galois equation. To ensureρ being associated with automorphism, we use the exponential map: exp( ρ), such that Y equation (1) transforms invariantly and infinitesimally by if Eq. (9) is satisfied. Because of the parameterization for Y equation, the Galois equation (9) can be generally expressed as (for CAT 1 and 4): Φ is zero, and X(x) A = ω(x) A 1 2 + f (x) Af . For CAT 2 and 3, equations (10) are the same form by the transformation: f (x) A → e(x) A and f A → e A . We notice the equation above can be realized in a symbolic form as: where (Φ) and (ΦΦ) denote the terms with the coefficients for the powers of ( a Φ) and ( a Φ b Φ) respectively, the equation is likely in the form for scalar field(s) with nonzero vacuum expected value, vev. The rest of this section is to prove this observation and construct the associated symmetry-breaking Lagrangian.
By applying the rest frame condition onto Eq. (10) for the dimensionless momentum, p = (p O , 0), of y fermion, we obtain: where O denotes the time-axis-index, O = C, f · ∇ denotes directional derivate f A ∂ A , and µ := p O is the dimensionless constant of the theory. Eq. (12) is only a specific solution of Eq. (10) when we consider the case of singlet φ 1 solution. (The doublet equation is not covered in this paper.) In the rest of this paper, we denote φ as the singlet field. In order to apply to a specific coordinate system for cosmology, we consider the D O operator by D O = a(t)∂ t for FRW cosmology. The Laplacian of the timedependent-only φ is 5 where θ denotes f O , and the associated Lagrangian is where φ is expressed in real and imaginary parts: φ = χ 1 − i χ 2 , B = −2iA, C = µθ 2aθ and A =θ 2 2θ 2 −θ 4θ . The Lagrangian above is not yet the physical model we look for. The problematic complexness does not respect the Hermiticity of Lagrangian. So, we add the Hermitian conjudge terms. Because the Hermitian map of D operator (8), the associated Y fermion respects the symmetry of positive and negative energies. Therefore, we model the Lagrangian terms associated with the Galois equation in the current theory as L φ = L +φ + L −φ such that it respects positive-negative-vev-symmetry, just like the ordinary φ 4 theory respects Z 2 symmetry. Obviously, this artificial symmetry breaks down if the Y -fermionic sector does not obey such symmetry. We consider this possibility to be the future development. Finally, we obtain the effective potential V χ : where m χ (θ) 2 := A.
In this section, we show how to obtain the fermionic model of theory, i.e. Y fermion, associated with Y equation (1) for CAT 1→4, and the symmetry-breaking scalar field potential (15) by the generic Galois operator and Galois equation.
IV. THE QUINTESSENCE FIELD AND COSMOLOGICAL MODEL
In this section, we apply the framework of previous section to the application of cosmology -simplify the χ potential (15) to construct the quintessence field model that coupled to gravity. We show that such qCDM model corresponds to well-accepted ΛCDM with a derivation mechanism. Brief comparison to several established models [9,10] is covered.
The vacuum expected value of χ potential (15) is µ/a . It is not fixed because of the degree of freedom by m χ (θ), and it is dynamical as the scale factor varies. By requiring the vev of χ fixed, the m χ (θ) term should be proportional to 1/a so the vev becomes mχ µ , and the χ potential becomes where m χ is a constant parameter. In the rest of the paper, we consider the simple case to define the quintessence field, that χ is a real scalar field which recovers as φ 4 potential at a = 1, and it has the minimum degree of freedom needed to solve Friedmann equations. The quintessence field potential is re-written as We consider the Lagrangian of the quintessence 6 , L q := Z χ L χ , as the source of gravity to define gravitational energy momentum tensor for the quintessence field, and apply the usual variation on L q to get the related density and pressure of the quintessence field 7 [17]: where we expand the χ field around vev, χ = mχ µ + q, so we can deploy weak field limit next. We notice that V χ (a 2 ) contributes because of the variation, and absorb Z χ factor for terms, Z χ µ 2 → µ 2 and Z χ m 2 χ → m 2 χ . With the re-definition of the constants, the Z χ factor is merely the rescaling factor for the time/energy scale between the quintessence and χ fields. We further assume the validity of weak-field-limit, i.e. quadratic-terms-dominated, to yield: In order to study the dark energy behavior, we define the dark energy density parameter, ξ := Ω q , so where µ 2 := for the ease of parameter probing next. By Friedmann equations (without curvature k and cosmological constant terms), we havė where F := (2α 2 µ 2 q + a 2 λZ χq ), w is the equation of state of the matter component, and r = 3, 4 for matter and radiation-dominated eras respectively. We found if we define the equation of motion for q then, when F = 0, the weak field limit is valid up to a long period of cosmological time span. The Friedmann equations together with Eq. (20) can be expressed as 7 Prime denotes the derivative with respect to a 2 .
Eq. (21) are the equations of the qCDM model of the quintessence theory. The ΛCDM model is effective and supported by many observations. We need to verify the validity of qCDM model analytically against ΛCDM model. It is clear iḟ ξ is zero, i.e. α 2 = 4λq 2 , then we recover the case of the cosmological constant and the first equation of (21) is simply the Friedmann equation with the cosmological constant so the ΛCDM correspondence is satisfied. Therefore, q must be approximately constant and equals to α 2 √ λ in order to justify the validity of ΛCDM correspondence. It can be achieved by the quintessence field staying approximately constant for long period of time or oscillating very slowly.
In both radiation-dominated-era and matterdominated-era, i.e. Log(a) ∝ Log(t), we can solve q analytically. In matter-dominated-era, constants, and t 0 is the present time, a(t 0 ) = 1; in radiation-dominated-era, where J 1 is a Bessel function of the first kind, Y 1 is a Bessel function of the second kind, ω = 2 T = t/t 0 , and c 3 , c 4 are integration constants. Both analytical expressions lead to the constant mode as t → 0, so in the early universe, the quintessence field is asymptotically constant and later evolves to oscillate. The Eq. (23) is similar to the scalar field model in Ref. [18] that showed ultra-light scalar fields affect growth of structure in the Universe as well as the expansion rate. In the limit that, a(t) ∝ t p , i.e. in both radiation-dominated and matter-dominated eras, the analytical form of the axion-like particles is similar to (but not the same as) Eq. (23).
Given that the quintessence is nearly constant, q c , ξ is solved where ξ 0 is an effective cosmological constant term and C = as a constant parameter associated with the term scaling as the spatial curvature (i.e. a −2 ). We have shown by the analytical form of the quintessence field that the near-constant-approximation is applicable in the early universe (in radiation-dominated-era and matter-dominated-era), and then the field oscillates shown in FIG. 1; the dark energy density parameter ξ has an effective cosmological constant term in Eq. (24). Interestingly, the models of dark energy [9,10] suggest similar scenario (but not exactly the same): the axion starts with "frozen" phase as the cosmological constant then transits to the oscillating phase, FIG.1 of Ref. [9].
V. DATA ANALYSIS
In our data analysis section, we first identify a smaller set of parameter spaces of Eq. (21) that reduces the qCDM model to an effective and simplified qCDM version; following the same procedure as in [10,19], we use the Pantheon dataset of 1048 SNe Ia [14] to probe the model parameters with Ω m dominated by matter component only.
The original parameters of qCDM are {H 0 , Ω m , Z χ , λ, α, q 0 , q 1 }, where q 0 and q 1 denote the q(t 0 ) andq(t 0 ) of the present day. We fix Z χ = µ 2 because it allows that λ is in scale within unity and α is in scale of H 0 . We also assume the effectiveness of ΛCDM in the present day such thatξ(t 0 ) and q 1 are effectively zero so 4q 2 0 = α 2 λ. The parameters of our simplified qCDM model are {H 0 , Ω m , λ, α}. 8 The sampling by the numerical solving for Eqs. (21) consumes the computation resources seriously; therefore, we impose the prior-assumption. By applying Markov Chain Monte Carlo, MCMC, parameters searching 9 and assuming the flat priors 10 on {H 0 , Ω m , λ, α}, we perform the initial MCMC exploration on binned data (40 datapoints), then we identify the preferred prior region for α ≤ 0.25, whereas the small-valued α is theoretically suggested because ξ becomes effectively the cosmological constant as α → 0 by Eq. (21). The full MCMC run on the Pantheon dataset (1048 SNe Ia) [14] shows the region of convergence from the 1D and 2D posterior distributions in FIG. 2 with Gelman-Rubin criterion R − 1 < 0.024. 8 The base unit of α and q is set as 73.9 km/s/Mpc. 9 An open sourced MCMC Mathematica package with modification for the our model (https://github.com/joshburkart/mathematica-mcmc). 10 We use the preferred prior from the known cosmological parameter: 0.24 < Ωm < 0.36, 57 < H 0 < 85 km/s/Mpc.
FIG. 2.
1D and 2D posterior distributions of H0, α, Ωm probed by the late time measurement (Pantheon dataset of 1048 SNe Ia). The mean H0 (±1σ) of qCDM model is approximately equals to 67±4 km/s/Mpc; the referenced H0 of Planck [3] and the referenced H0 of the late time measurements [5] are shown in red and blue respectively.
Finally, we use the Eq. (24) and the mean-best-fitparameters to check briefly if the C term of qCDM model is consistent with the CMB power spectrum. Without a complete probe of qCDM parameters, we only change the Λ term, cosmological constant of ΛCDM, by the Eq. (24). The value of q c is 0.071, and the order of magnitude of C is −4.9. We obtain the values of χ 2 of CMB power spectrum against the dataset of PlanckTT and WMAPTT 12 . The χ 2 of PlanckTT and WMAPTT are only shifted by 0.02 and 0.07 respectively (χ 2 for PlanckTT and WMAPTT are 493.37, 42.54 respectively). However, the complete parameter probe of qCDM against CMB power spectrum is not covered in this work.
VI. DISCUSSION
In this work, we introduce the framework of ratio gravity that postulates the transformation of cross ratio is related to different representations of the associated fermion and scalar models. The theory provides the mechanism to generate the symmetry-breaking scalar fields naturally, that leads to the quintessence field to drive the dark energy behaving dynamically. The presented qCDM model can reproduce the ΛCDM model with a derivation mechanism. The data analysis of the model with the supernovae dataset suggests H 0 = 67±4 km/s/Mpc, which is aligned with the latest Planck observation [3]. Yet, a further analysis with complete set of qCDM parameters against CMB and BAO dataset is suggested.
The theory can be extended to the domain of complex singlet and doublet models of the scalar field. As the mass scale of the quintessence field is in H 0 suggested in the section 5, the possibility of light-massive boson because of the complex phase of the singlet model is worth to be studied. | 2022-02-24T16:18:45.295Z | 2022-02-01T00:00:00.000 | {
"year": 2022,
"sha1": "e1aa64b4e84bfa24320c75ae674852acee3b06a3",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1140/epjc/s10052-022-10134-1.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "3830fb3d2c30d30bc55ac2edc5ce5e89d28105b3",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
210949130 | pes2o/s2orc | v3-fos-license | RNA Editing as a Therapeutic Approach for Retinal Gene Therapy Requiring Long Coding Sequences
RNA editing aims to treat genetic disease through altering gene expression at the transcript level. Pairing site-directed RNA-targeting mechanisms with engineered deaminase enzymes allows for the programmable correction of G>A and T>C mutations in RNA. This offers a promising therapeutic approach for a range of genetic diseases. For inherited retinal degenerations caused by point mutations in large genes not amenable to single-adeno-associated viral (AAV) gene therapy such as USH2A and ABCA4, correcting RNA offers an alternative to gene replacement. Genome editing of RNA rather than DNA may offer an improved safety profile, due to the transient and potentially reversible nature of edits made to RNA. This review considers the current site-directing RNA editing systems, and the potential to translate these to the clinic for the treatment of inherited retinal degeneration.
Introduction
Programmable editing of nucleic acids offers significant therapeutic potential for a wide range of genetic diseases. The development of the Clustered Regularly Interspaced Short Palindromic Repeat-CRISPR-associated genes (CRISPR-Cas) system for facile gene editing in mammalian cells has led to a wide range of approaches for gene manipulation including gene silencing, repair and editing [1]. An area of promise for genetic therapy is for the treatment of inherited retinal degenerations. Inherited retinal diseases are an important cause of blindness that result from dysfunction and death of cells in the outer retina ( Figure 1) due to mutations in a heterogenous array of genes [2]. Treatment of the underlying genetic cause of inherited retinal degenerations could halt or reverse vision loss in these patients.
Due to the retina having relative immune privilege, easy accessibility for vector delivery and non-invasive functional and structural endpoints for assessing treatment efficacy, retinal disorders are excellent candidates for genetic therapies [3]. Gene replacement therapy to introduce the normal copy of a gene to cells lacking gene expression in X-linked and autosomal recessive conditions has shown excellent outcomes, with an FDA-approved therapy to replace the RPE65 gene for Leber's congenital amaurosis [4] and many others being evaluated in clinical trials [3]. Currently, adeno-associated viral (AAV) vectors are the most commonly used delivery mechanism to express genes in retinal cells [3]. AAV vectors offer high tropism for photoreceptors and RPE, the commonly targeted cells in retinal degeneration [5]. Furthermore, their low immunogenicity and the safety profile established across multiple clinical trials make them attractive delivery vehicles. For the many patients who have a disease-causing mutation in genes too large for the~4.7 kb packaging capacity of AAV, however, a different treatment strategy is required. The cDNA for genes such as ABCA4 and USH2A do not fit in a single AAV, and in the USA mutations in these two genes account for 25% of families with inherited retinal degeneration [6]. While alternative approaches using other delivery strategies such as using dual AAV vectors, alternative viral vectors or non-viral vectors are being investigated [7,8], a substantial number of inherited retinal degenerations still cannot be treated with AAV gene therapy. Furthermore, titrating transgene expression in the retina can be challenging, and overexpression of transgenes from strong promoters may lead to cellular toxicity, making it desirable to edit the endogenous gene [9][10][11][12].
Gene editing aims to correct the endogenous genetic sequence rather than delivering a replacement gene. The DNA editing capabilities of enzymes of the class II CRISPR systems such as Cas9 and Cas12 have been widely used for programmable RNA-guided DNA targeting [1]. In the eye, this has been used to knockdown dominant genes in animal models [13][14][15][16], reprogram photoreceptors [17], target angiogenic factors [18] and excise a deep intronic splicing mutation in the CEP290 gene, a strategy moving forward to clinical trial this year [19]. Correction of coding mutations in large genes using DNA editing requires a different approach. Strategies that require the introduction of a correct donor template, such as homology-directed repair of double-stranded breaks have low in vivo efficiency [20][21][22]. Approaches such as DNA base editing [23] or prime editing [24] can relatively efficiently introduce specific corrections without double-stranded breaks but use large constructs that are not currently deliverable with AAV vectors. While efforts to improve in vivo editing efficiencies and reduce off-target mutations continue, the threat of introducing permanent off-target mutations in DNA elsewhere throughout the genome is a significant challenge for all DNA targeting strategies [25].
Unlike DNA, messenger RNA (mRNA) molecules exist transiently within the cell and transmit the genetic information that encodes for the production of proteins. A strategy to edit RNA rather than DNA is, therefore, highly appealing, as this allows for the editing of pathogenic mutations at a transcript level, without the risk of creating permanent off-target mutations in the genome. Furthermore, the transient nature of RNA means that RNA editing is potentially reversible and controlled over time.
Recent advances in RNA editing technologies have enabled the development of engineered enzymes capable of either adenosine-to-inosine (A-I) or cytosine-to-uracil (C-U) edits. Site-directed RNA editing uses molecular tools to recruit RNA editing enzymes to target sites of interest and enables the targeted correction of G>A and T>C mutations in coding sequences of RNA. Given that G>A and T>C mutations comprise up to 61% of all pathogenic point mutations in humans [26], these tools are an exciting therapeutic prospect for inherited retinal degeneration.
Here, we review the potential for using RNA editing enzymes for site-directed RNA editing, and how this might be applied to correct mutations in large genes implicated in inherited retinal degenerations not amenable to AAV gene therapy.
ADAR Expression, Structure and Function
The majority of strategies to edit RNA utilize the activity of the naturally occurring human adenosine deaminase acting on RNA (ADAR) enzymes. ADARs act to convert adenosine residues to inosines (A>I editing) in double-stranded RNA. Inosine is structurally similar to guanosine and is read as a guanosine by most cellular machinery including during translation, splicing and reverse transcription, effectively creating an A>G edit in RNA [27] (Figure 2). Two ADAR enzymes have been identified in humans to carry out A>I editing activity, ADAR1 and ADAR2. ADAR1 is expressed as two isoforms, a shorter 110kDa isoform (ADAR1 p110) and a larger 150 kDa isoform (ADAR1 p150). ADAR1 p110 is constitutively expressed and localizes to the nucleus, while ADAR1 p150 is inducible by activation of the innate immune sensing system and localizes to the cytoplasm [28]. ADAR1 is expressed ubiquitously including in the retina [27,29]. ADAR2 is expressed predominantly as a single isoform and nuclear localization signals mediate the localization of ADAR2 primarily to the nucleus and nucleolus [30]. ADAR2 is most strongly expressed in the lung and brain [27,29]. Evidence suggests that ADAR2 RNA is expressed in the retina and immunohistochemistry demonstrates ADAR2 localised to retinal ganglion cells. However, protein expression in other cells types has not been demonstrated [29,31]. Given that editing of pre-mRNA is observed and can be dependent on double-stranded RNA structures formed by introns, it is likely that RNA editing of mRNA occurs in the nucleus, before or concurrently with splicing [27,32].
A>I editing one of the most common post-transcriptional modifications in humans [27]. Most A>I editing occurs in non-coding sequences, including in the 5 and 3 untranslated regions (UTRs) and within intronic retrotransposon units such as Alu and long-interspersed elements (LINE). In mammals, many of these regions are repetitive regions and editing of repetitive regions is principally mediated by ADAR1 [33]. A>I editing in coding sequences occurs more rarely, but can lead to codon changes and amino acid alterations [27]. Editing in these non-repetitive regions is principally mediated by ADAR2 [33]. This site-specific editing of coding sequences is strikingly demonstrated in the glutamate receptor Gria2 (also known as GluR2) in mice [34]. Loss of ADAR2 recoding of the Q/R site is lethal post-natally, and rescue results in viable animals [34]. Altered expression of ADAR or changes to tissue A>I editing activity has been associated with a range of diseases [35]. However, the physiological role of the majority of ADAR editing beyond Gria2 remains to be elucidated [36].
All ADARs have two common structural motifs. A double-stranded RNA-binding domain (dsRBD) makes direct contact with double stranded RNA (dsRNA), while a C-terminal deaminase domain (ADAR DD ) catalyses hydrolytic deamination where an amino group is replaced by a hydroxyl group, converting adenosine to inosine ( Figure 2). This allows inosine to act as guanosine and pair with cytidine by Watson-Crick base-pairing. A base-flipping mechanism rotates the target adenine base from the RNA helix into the enzyme catalytic pocket to allow the enzyme to attack the target C6 carbon [37].
In natural perfectly matched duplex RNA, ADARs deaminate approximately 50% of adenosines. However, factors such as the local sequence context and complex secondary structures can affect editing rates [38]. Wildtype ADARs have a preference for adenosines with a nearest neighbour 5 -A or -U and a 3 -G as interactions of these near-neighbour nucleotides with the deaminase domain appear to promote stabilization of the flipped base [37].
A-I Editors
To improve A-I editing efficiency for genome engineering, rational and random mutagenesis has produced a number of ADAR mutants. The E488 residue of ADAR2 stabilizes the base-flipped structure by taking the space of flipped adenosine and hydrogen bonding to the opposite base [37]. The E488Q hyperactive mutant displays increased editing activity, likely because glutamic acid (Q) is relatively more protonated than glutamine (E) and thus makes it a better hydrogen bond donor to bind an opposing cytidine base [37]. Common to most strategies, it was identified that ADAR editing preferentially occurs of adenosines that are mismatched with a cytidine as the opposite base in the guide RNA target (rather than a thymidine), and this A-C mismatch improves editing efficiency at the target site [39]. Conversely, an A-G mismatch reduces editing efficiency [40]. In the ADAR1 DD , the E1008Q mutation has been identified as a hyperactive mutation with similar activity to the E488Q mutation in ADAR2 DD [41,42].
Further engineering has produced variants with improved specificity. The T375 residue helps stabilize the edited strand in the cleft containing the active site through hydrogen bonding to the 2'-hydroxyl of the flipped base and the phosphate between the flipped base and the +1 base [37]. An engineered E488Q/T375G ADAR2 DD mutant displays much greater specificity and relatively preserved efficiency for editing of the target adenine [43].
C-U Editors
To develop the scope of RNA editing to other bases, Abudayyeh et al. developed a deaminase capable of cytosine (C) to uracil (U) RNA editing by undertaking mutagenesis of the ADAR2 DD [44]. Using insights from the structural homology of human ADAR2 and the naturally occurring cytosine deaminase from Escherichia Coli, they undertook rational mutagenesis of the hyperactive human ADAR2 DD , and then subsequently evolved this using random mutagenesis to yield a C to U editing ADAR DD (C-U), which retains A-I editing capabilities.
Further mutagenesis was used create a more specific ADAR DD (C-U) mutant with a S375A mutation to yield a higher specificity variant with fewer A-I off-targets. This S375A substitution is at the same amino acid position as the higher specificity ADAR2 DD mutant (T375G), supporting the role of this position in ADAR DD A-I editing specificity [44]. Unlike A-I ADAR editing used in a number of systems, these C-U editors have only been utilized so far in the RESCUE CRISPR-Cas13 based system described later.
RNA Editing with ADARs
First proposed by Woolf and colleagues almost 25 years ago [45], the A-I editing activity of ADARs can be used for site-directed RNA editing. Recently, many new approaches have been developed to harness these enzymes. In order to develop a treatment for an inherited retinal degeneration caused by mutations in a long gene, it is necessary to understand and evaluating the many site-directed RNA editing strategies and mechanisms across a broad range of disease applications tested to date.
To enable site-directed ADAR editing, it is necessary to create a double-stranded RNA structure at the target region and draw the ADAR to this site. Strategies, therefore, broadly comprise two parts: (1) a guide RNA (gRNA) sequence antisense that can hybridize to the target RNA, and (2) an effector mechanism that recognizes the gRNA and recruits the ADAR enzyme. These strategies may recruit the whole native ADAR enzyme including its dsRNA binding domains or express only the deaminase domain (ADAR DD ). Strategies using the whole ADAR enzyme may utilize the naturally expressed endogenous ADAR or introduce this with exogenous expression. Strategies using the ADAR DD typically fuse this to another protein effector that is capable of programmable RNA binding to enable site-directed RNA editing. Figure 3 graphically illustrates each approach, while Table 1 present a summary of the results obtained with each tool discussed.
BoxB-λN-ADAR
One of the first demonstrations of engineering ADAR for site-directed RNA editing used the interaction between the BoxB RNA hairpin and the Lambda N protein (λN). First described in bacteriophage as an anti-terminator of transcription [46], the λN protein naturally binds short stem-loop RNA structures called BoxBs. For site-directed RNA editing, the 22 amino acid binding site of the λN protein was fused to the ADAR2 DD ( Figure 3A). A guide RNA incorporates BoxB hairpins in the middle of an antisense guide sequence with a mismatched C at the target adenosine. Binding of the λN peptides to the BoxB-gRNAs recruits the ADAR DD to the editing sites. Montiel Gonzales et al. reported that in Xenopus oocytes injected with the cDNA of the cystic fibrosis gene CFTR containing a premature stop codon, the λN-BoxB-ADAR2 DD system had a 20% correction efficiency with partial functional restoration of the CFTR protein [47]. Subsequent optimization of this system has improved editing rates, including by using four copies of the λN binding peptide and using two rather than one BoxB hairpin in the gRNA structure to achieve 20-70% efficiency editing reporter constructs in HEK293T cells [48].
Sinnamon et al. have also employed a similar BoxB-λN system delivered via AAV to edit the Mecp2 gene involved in Rett syndrome in primary murine neurons [49]. Using an AAV1/2 vector containing a single λN binding peptide fused to an ADAR2 DD (E488Q) together with a BoxB-gRNA expression cassette containing six copies of the guide, they demonstrated a 72% on-target correction of a missense Mecp2 mutation and partial restoration of MECP2 protein [49].
A significant drawback of λN-BoxB systems has been substantial off-target editing events, especially with the 4λN system with ADAR2 DD (E488Q) [50]. Use of a nuclear localization signal (NLS) to target primary RNA transcripts in the nucleus and lower amounts of guide RNA may reduce these [50]. . Schematic figures demonstrating the RNA editing approaches of (A) BoxB-λN-ADAR with a dual BoxB design to recruit the λN peptide (B) SNAP-tag-ADAR fusion with a O6-benzyl-guanine (BG) conjugated adRNA (C) Glu-adRNA approach where the GluR2 R/G hairpin recruits exogenous or endogenous full length ADAR (D) MS2-MCP-ADAR stem-loop approach with dual MS2 stem-loop hairpins recognized by the MS2 bacteriophage coat binding protein (MCP) (E) An example of endogenous ADAR recruitment using long LEAPER-arRNAs and G-A mismatches within the guide region to reduce off-target editing (F) The REPAIR system with dPspCas13b-ADAR fusions recruited by a guide RNA with a direct repeat. Note the A-C mismatch at the target adenosine in each strategy to promote editing at the target site. gRNA = guide RNA; adRNA = ADAR guiding RNAs; λN = Lambda N protein; LEAPER = Leveraging Endogenous ADAR for Programmable Editing of RNA; REPAIR = RNA Editing for Programmable A to I Replacement; dPspCas13b = deactivated Cas13b orthologue derived from Prevotella sp. P5-125.
SNAP-ADAR
The SNAP-ADAR system uses a SNAP-tag fused to an ADAR DD [51]. SNAP-tag enzymes are engineered to recognize O6-benzyl-guanine (BG) as a substrate and form a covalent linkage. Stafforst and colleagues generated modified nuclease-resistant guide RNAs conjugated to BG [52]. The BG-guide RNA is covalently bound by the SNAP-ADAR DD fusion via the SNAP-BG reaction, and this guides the deaminase to the RNA target ( Figure 3B). A recent study has demonstrated that with hyperactive mutants of the ADAR1 and two deaminase domains, impressive editing efficiencies of up to 90% of endogenous transcripts have been observed in a HEK293 cell line expressing SNAP-ADAR under a doxycycline-inducible promoter [53]. Editing rates were significantly influenced in this study by near-neighbour preferences, with target bases with 5 G demonstrating much lower rates of editing. A significant drawback of this system, however, is that due to the chemical modifications required for the guide RNA, it is not genetically encodable for stable viral delivery, for example within AAV. Clinical translation would likely require repeated administration of the guide RNA as used in antisense oligonucleotide therapeutics. If penetration of the gRNA to the relevant cell (e.g., photoreceptors) that express a SNAP-ADAR construct is feasible, this would theoretically allow for adjustable dosing much like a traditional drug.
GluR2-ADAR
The GluR2-ADAR system uses a naturally occurring GluR2 sequence to recruit the full-length ADAR2 protein. The GluR2 mRNA hairpin (termed an R/G motif) is a naturally occurring substrate strongly recognized by the dsRBD of ADAR2. This was harnessed to engineer an adRNA (ADAR-associated RNA, functionally equivalent to a gRNA) composed of the R/G motif fused to an antisense guide sequence. The dsRBD of a full-length ADAR protein recognizes the R/G motif and is recruited to the target site ( Figure 3C). First demonstrated by the Stafforst group with plasmid delivery of wildtype ADAR2, editing rates of up to 10% in the Parkinson's associated gene PINK1 were reported in HeLA cells [54]. Recent further work by Katrekar et al. used ADAR2(E488Q) overexpression together with a targeting adRNA, enabled on-target editing rates of endogenous transcripts of 10-40% in HEK293T cells [42].
As the full length ADAR2 protein is only 2.1 kb, it can be packaged in AAV together with the GluR2-adRNA sequences. Katrekar et al. attempted targeted correction of G>A mutations in two mouse models using AAV8-ADAR2-GluR2adRNA constructs [42]. The correction of a stop codon in the mdx mouse model of Duchenne's muscular dystrophy required the editing of two adenosines to correct a termination codon (TAA) to tryptophan (TGG). Intramuscular AAV injection of GluR2-adRNA + ADAR2 (both with and without the E488Q mutation) resulted in TAA>TGG correction in~0.8% of RNA transcripts and dystrophin partial protein restoration. In the second model, the authors targeted a G>A splice variant in the spf ash mouse model of ornithine transcarbamylase (OTC) deficiency. Systemic injection of GluR2-adRNA + ADAR2 resulted in 4.6-8.2% correction of the pre-mRNA and a reduction in the incorrectly spliced fraction of mRNA, confirmed by a 2.5-5% restoration of ornithine transcarbamylase protein. Correction rates were noticeably higher in those systemically injected with the E488Q mutant. However, the authors observed significant toxicity in these mice not observed in those injected with the wildtype ADAR. While the mechanism for this toxicity has not been established, it raises concerns for the systemic use of the E488Q mutant.
MS2-MCP-ADAR
Recruitment of the ADAR deaminase domain has also been achieved using the bacteriophagederived MS2-MCP tagging system. The MS2 bacteriophage coat protein (MCP) naturally binds the MS2 stem loop from its genome. This MS2 stem loop can be attached to a short guide sequence to create an MS2-adRNA, and this adRNA recruits an MCP-ADAR DD fusion protein to the editing site of interest ( Figure 3D). Katrekar et al. evaluated a MCP-MS2 system using an adRNA composed of a 20 nucleotide guide sequence with a mismatched C in the 6th position flanked by an MS2 stem-loop structure each side of the guide [42]. Systematic evaluation of the on-target efficiency using Sanger sequencing and RNA-seq analysis of different MCP-ADAR DD constructs on eight endogenous transcripts in HEK293T cells found that both MCP-ADAR1 DD and MCP-ADAR2 DD constructs could achieve editing rates that ranged from 10% to 80% in both coding and untranslated regions. Constructs with an ADAR1 DD , a nuclear export signal (NES), and deaminases with hyperactive mutations each demonstrated higher editing efficiencies. However, each of these factors also contributed to higher off-target editing rates. In an mdx muscular dystrophy mouse model, an intramuscularly delivered AAV8-MCP-ADAR1 DD (E1008Q)-NLS construct delivered with an MS2 gRNA demonstrated 2% on-target efficiency and partial restoration of dystrophin expression [42].
Another group found that an MCP-ADAR1 DD system that used a 21 nucleotide gRNA with a single MS2 stem loop was found to have a 7% editing efficiency for a premature termination codon in an EGFP reporter system in HEK293 cells [55]. This paper did not investigate the use of features known to optimize RNA editing efficiency such as the use of a mismatched C in the gRNA at the target nucleotide, the use of nuclear localization or export signals, or linker sequences for the MCP-ADAR DD fusion.
Endogenous ADAR Approaches
Although editing rates are noticeably improved by the overexpression of exogenous ADARs, evidence from a number of recent papers suggests that recruitment of endogenous ADAR to edit double-stranded RNA is a potentially viable strategy.
Initial work by Fukuda and colleagues demonstrated that A to I editing of RNA molecules could be achieved using ADAR guiding RNAs (AD-RNA) and recombinant ADAR in an in vitro reaction [56]. The custom AD-RNA was constructed with a short stem-loop structure and further developed with insights from the design of the GluR2 hairpin. They demonstrated that this system could be used in HEK293 cells to edit GFP expression constructs and repair a premature stop codon [56]. The LEAPER editing system (Leveraging Endogenous ADAR for Programmable Editing of RNA) developed by Qu et al. utilizes their observation that expression of long RNA (71-191 nucleotide) guides creates regions of dsRNA long enough to enable endogenous ADAR1 binding and editing [40]. LEAPER guides were designed without any domains for ADAR recruitment, and use a single A-C mismatch at the target site to direct editing ( Figure 3E). Guanosine mismatches placed at non-target adenosine sites reduce off-target editing within the long guide binding region. Plasmid expression of a 151 nucleotide guide from a U6 promoter was used to target a range of endogenous targets in HEK293 cells, with editing rates of up to 50% in the 5 UTR and reduced editing rates in coding regions of up to 20%. In primary cells lines editing rates of 30-80% were achieved in the 5 UTR of PPIB, a chosen candidate endogenously expressed gene that encodes a cyclophilin enzyme. Long 111 nucleotide guides, chemically stabilized with 2 -O-methyl and phosphorothioate linkage modifications, were also electroporated into fibroblasts derived from patients with Hurler syndrome (a severe form of mucopolysaccharidosis type 1) with a Trp402Ter mutation. This restored previously deficient IDUA enzyme activity and was found to have higher A>I editing activity and enzyme activity restoration when targeting pre-mRNA (30%) compared to mRNA (10%). Lentiviral delivery of LEAPER guides was also attempted but only achieved 6% editing rates in the 5 UTR of PPIB in HEK293 cells [40].
In contrast to the long guides of the LEAPER system, the RESTORE editing system (Recruiting Endogenous ADAR to Specific Transcripts for Oligonucleotide-mediated RNA Editing) recently published by Merkle and co-workers uses shorter 20-40 nucleotide chemically modified anti-sense oligonucleotides (ASO) with an R/G ADAR recruiting domain to edit transcripts with endogenous ADAR1 [57]. Using 20 nucleotide ASOs as guides, editing rates of the 3 UTR of GAPDH of 4-34% across a range of immortalized cell lines and 10-63% in primary cell lines were observed, but editing in the open reading frame was not possible. Further engineering and lengthening (40 nucleotide) of the ASO enabled a 10-20% correction of an α-1-antrypsin deficiency mutation in the open reading frame of the SERPINA1 gene in HeLa cells, and a 7-21% editing rate of tyrosine 701 in the STAT1 gene in primary fibroblasts and RPE. They noted all editing rates could be improved by induction of expression of the ADAR1p150 isoform by administration of interferon alpha [57].
From these studies, it appears that the recruitment of endogenous ADAR is possible either with long gRNAs or with recruitment domains. Minimal editing activity is seen when using antisense guides without recruitment domains that are less than 60 nucleotide [42] or 70 nucleotide [40]. As with other RNA based strategies, delivery of a stable RNA that can be genetically encoded may be challenging. Further work to optimize these parameters and evaluate them with an appropriate delivery strategy in vivo is required. In vivo recruitment of endogenous ADAR has been observed, although the correction rates were almost undetectable (~0.6%) in OTC mice injected with a short 20 nucleotide AAV-GluR2-adRNA alone without ADAR overexpression [42].
Using the capacity of dCas13 to create targeted double-stranded RNA substrates, Cox and colleagues used a Cas13b orthologue derived from Prevotella sp. P5-125 (PspCas13b) fused to ADAR2 DD to create a programmable RNA editor termed REPAIR (RNA Editing for Programmable A to I Replacement) [43]. A guide RNA comprised of a spacer sequence and a 3 direct repeat region is used to direct dCas13b binding to the sequence of interest ( Figure 3F). Guides with homology regions of between 30 and 84 nucleotides and with an A-C mismatch to the target adenosine placed across a range of positions within the homology region were found to be functional. However, 50 nucleotide guides with a mismatch placed 32-36 nucleotides from the 5 end was found to be generally most efficacious.
A key advantage of Cas13 is that, unlike Cas9 and Cas12 nucleases, Cas13 does not require a protospacer adjacent motif (PAM) sequence local to the gRNA binding site [59,63]. Although evolved Cas9 variants have now created a wide range of PAM specificities [69], this remains an important restriction in the targeting range of Cas9, particularly for base editing. In bacterial screens, Cas13a and Cas13b enzymes exhibit a variety of preferences for nucleotide motifs 5 and 3 to the homology region, termed protospacer flanking sequences (PFS) [70]. For Cas13b RNA knockdown and editing in human cell lines, however, a preferred PFS sequence could not be identified from experiments using a library of PFS sequences [43]. The lack of requirement for a PFS greatly broadens the targeting scope of Cas13b mediated base editing.
REPAIRv1 uses the hyperactive ADAR2 DD (E488Q) (dPspCas13b-ADAR2 DD (E488Q)) and displayed editing rates of 10-40% across a range of overexpressed gene fragments and up to 29% of two endogenously expressed RNA transcripts [43]. The REPAIRv1 construct also appears to have a broad codon scope, with relaxed near-neighbour flanking nucleotide preferences compared to those observed with wildtype ADAR. Whilst the nucleotides 5 and 3 to the target A do influence the editing rate, editing rates between 10% and 30% were observed for all combinations of flanking sequences.
As previously observed, the hyperactive ADAR2 DD (E488Q) can produce numerous off-target editing events, so further rational mutagenesis was undertaken to generate a more specific ADAR2 DD (E488Q/T375G) deaminase fused with dPspCas13b (termed REPAIRv2). This modification produced a 900-fold decrease in off-target events, however, with an approximate 2-fold loss of on-target efficiency [43]. This double mutant specific ADAR DD has not been evaluated in many other systems but was reported to have low efficiency in the SNAP-tag system [53].
The direct repeat hairpin of the gRNA is proposed to recruit the Cas13-ADAR fusion to the target site. Vogel and colleagues found that they could achieve low editing rates with a 50 nucleotide guide without the 35 nucleotide direct repeat (DR) region using the REPAIRv1 construct [53]. RNA editing rates are markedly improved by inclusion of the DR, however [53], and in further work, Abudayyeh et al. did not observe A-I editing using the RESCUE system [44] (discussed below) in 30 or 50 nucleotide guides without the direct repeat. In another study 70 nucleotide Cas13-gRNAs were sufficient to allow endogenous ADAR editing in the dsRNA region formed by the guide [42]. The mechanism to explain these conflicting findings remains unresolved and may occur due to the hyperactivity of the ADAR DD -E488Q mutant editing at dsRNA sites. It is likely, however, that a majority of the RNA editing rates observed using Cas13-ADAR editing are due to recruitment by the guide RNA.
RNA editing with REPAIR constructs has been directly compared to other genetically encodable RNA editing systems relevant for clinical use. REPAIRv2 (dCas13b-ADAR DD -E488Q/T375G) was compared to the λN-BoxB-ADAR DD (E488Q) and similar on-target efficiencies were observed but as expected when using the E488Q ADAR, λN-BoxB had substantially more off-targets [43]. Direct comparison of on-target activity with similar GluR2 and Cas13b constructs with hyperactive ADAR2 against the same endogenous targets revealed similar editing efficiencies [42].
3.6.2. C to U Editing RNA Editing for Specific C to U Exchange (RESCUE) is a cytosine (C) to uracil (U) RNA editing platform based on the work of Abudayyeh et al., created through serial mutagenesis of the ADAR DD to create the ADAR DD (C-U) described earlier. Fused with a different Cas13 orthologue, deactivated Cas13 from Riemerella anatipestifer (dRanCas13b), the resultant dRanCas13b-ADAR2 DD (C-U) was named "RESCUE". They reported C-U editing efficiencies of up to 30% of endogenous transcripts and up to 42% of synthetic reporter constructs in HEK293FT cells. RESCUE editing was also used to activate STAT and Wnt/β-catenin pathways and alter cell proliferation phenotypes through C-U editing of key phosphorylation sites in transcripts in HEK293FT and human umbilical vein endothelial cells (HUVECs). The RESCUE editor still maintains A-I editing activity and is capable of multiplexed editing of the same transcript with different guides to achieve both A-I and C-U editing. As seen with REPAIR, RESCUE also demonstrates A-I off-targeting, as well as C-U off-targeting although to a lesser extent.
Further mutagenesis was used to create a S375A mutation in the ADAR DD (C-U), and this system was termed RESCUE-S. This yielded an approximately 12-fold reduction in A-I off-targets and~1.8-fold reduction in C-U off-targets while maintaining specificity.
Synthetic CRISPR-Like RNA Editors
Recent work has also been undertaken to engineer a synthetic RNA targeting system made from human-derived proteins that mimic the gRNA binding, ssRNA recognition and effector functions of Cas13 proteins [71]. The CRISPR-Cas-inspired RNA targeting system (CIRTS) fuses effector proteins including ADAR2, to a gRNA hairpin binding protein and a protein designed for ssRNA binding. The CIRTS8 system using ADAR2 DD (E488Q) was delivered via plasmid transfection to HEK293T cells and demonstrated 47% efficiency repairing a premature termination codon in a luciferase reporter. While more work is required to validate this in other cells lines, on endogenous targets and in vivo, due to its small size (<3kb), the CIRTS system could easily be delivered with AAV [71].
Gene Targets for RNA Editing in Inherited Retinal Degeneration
In theory, base editing might be used to correct any gene implicated in inherited retinal degeneration with G>A or T>C single-base mutations. Some diseases may be more amenable to RNA editing than others, however. Dominant diseases require knockdown or correction of mutant alleles. It is likely that for dominant diseases, in vivo editing rates higher than those that are currently observed will be required for a therapeutic effect, while small recessive genes are probably more effectively treated with AAV gene replacement. The most immediate application, therefore, is for recessive genes larger thañ 4.2 kb, which are unlikely to be treatable with AAV-mediated gene replacement. The most common large genes implicated in recessive inherited retinal degeneration and their relative frequencies in the cohort of 1000 consecutive patients diagnosed with inherited retinal degeneration by Stone et al. are shown in Table 2 [6]. Patients with mutations in genes with sizes larger than the coding capacity of AAV make up 28% of the patients within the cohort, with ABCA4 and USH2A mutations accounting for the vast majority.
Distribution of Targetable Mutations with RNA Editing
Of all mutations in these genes, a proportion are amenable to single-nucleotide base editing with the currently available tools. G>A mutations are among the most common single-nucleotide mutations, with over 20,000 in the genome [43]. An evolutionary bias towards transition mutations, where bases remain as purines (G,A) or pyrimidines (C,T), over transversions that change bases from purines and pyrimidines may explain why these mutations are more prevalent [72]. To investigate the landscape of mutations across large retinal genes, we analysed all unique variants classed as "Pathogenic" or "Likely Pathogenic" in the ClinVar database ( Figure 4) [73]. The proportion of targetable mutations (either G>A or T>C) range from 9% in CEP290 to 32% in ABCA4. Many of the mutations within ABCA4 are missense mutations (18%), while donor/acceptor splice mutations (8%) and the creation of premature stop codons (6%) are more common unique pathogenic mutations in USH2A. Overall, there are a greater proportion of G>A than T>C mutations. In both ABCA4 and USH2A, there are at least 75 unique pathogenic G>A mutations. Of all targetable mutations, G>A mutations in tryptophan codons that produce premature stop codons such as TGG>TGA/TAG/TAA (p.Trp>Ter) are the most obvious initial targets as they are more easily defined as pathogenic, and restoration of the full-length mRNA should restore protein expression. In theory, the spliceosome will recognize inosine at the same strength and manner as guanosine if donor/acceptor splice sites are repaired, but this remains to be further investigated. Missense mutations will need to be carefully targeted on a case by case basis.
A Case Study of RNA Editing for Inherited Retinal Disease
A 50-year-old female presented for further investigation of her deteriorating vision. Diagnosed with congenital bilateral moderate deafness aged four, she subsequently developed increasing nyctalopia and was diagnosed with syndromic retinitis pigmentosa at age 15. Her parents and children are unaffected, but her brother experienced deafness and vision loss from an early age. On assessment, she was bilaterally pseudophakic, and her best corrected visual acuity was reduced to 6/24 bilaterally. Fundus examination revealed moderate pigmentary retinopathy and narrowing of the retinal vasculature ( Figure 5) and optical coherence tomography showed outer retinal thinning. She had grossly constricted visual fields and a non-recordable ERG. Sequencing revealed heterozygous mutations in USH2A: the common c.2299delG mutation and the c.11864G>A which result in a codon change from tryptophan (TGG) to a premature termination codon (TAG). Given that this was a targetable mutation with RNA editing, we cloned a synthetic sequence of exon 60 of USH2A cDNA encoding the c.11864G>A mutation into a plasmid driven by a ubiquitous promoter. A 50 bp guide RNA with a 32 bp distance between the direct repeat and a mismatched cytosine was designed to target the mutation. HEK293T cells were transfected with the USH2A plasmid, guide RNA and the REPAIRv2 expression construct dPspCas13b-hDAR2 DD (E488Q/T375G). At 48 h post transfection, cells were harvested, and isolated RNA was sent for Sanger sequencing. Analysis of the Sanger sequencing peak heights with trace decomposition revealed 43% correction of the target adenosine, repairing the termination codon to tryptophan. No off-target editing of adjacent adenosines was observed 100 bp either side of the mutation.
As RNA editing rates are incomplete, ranging from 1% to 80% editing across different targets, cell lines and expression systems, diseases where partial restoration of a functional protein are required are ideal targets. While the level of correction required for the adequate expression of Usherin protein expression remains unknown, given the large size and stability of the Usherin protein in the photoreceptor cilium, it is possible that only a small amount of protein is required to improve or slow down degeneration.
Clinical Considerations of RNA Editing
The potential for RNA editing has now been demonstrated in vitro and in vivo for pathogenic mutations in genes related to cystic fibrosis, Duchenne's muscular dystrophy, Hurler's syndrome, and Ornithine transcarbamylase (OTC) deficiency, among others [42,47,49,54,74].
RNA editing may be advantageous compared to genome editing when moving towards clinical translation. Due to the transient nature of RNA transcripts, RNA editing is potentially reversible or titratable depending on the formulation or delivery of the editing system. For example, inducible RNA editors, systems with a built in "off" switch, or systems that require continual dosing such as using ASOs, have inbuilt safety mechanisms that allow a finer degree of control than DNA editing. This is balanced, however, against the attractiveness of a single-treatment DNA editing approach, and different approaches may suit different clinical scenarios.
As ADAR proteins are of human origin, there are less concerns for their immunogenicity, although this may still be a factor with construct containing elements such as the viral origin MS2-MCP and bacterial origin CRISPR-Cas systems. Immune reactions to editing proteins may neutralize their editing efficiency or cause potential harmful immune responses.
For therapeutic efficacy and safety, sufficient and persistent on-target editing will need to be balanced against rates of off-target editing. Although off-target effect will occur in transitory transcripts, further work is required to understand where and how often these off-targets are likely to occur and what phenotypic effects these may have. While off-targeting editing often occurs in non-coding regions or causes a silent variant that results in a synonymous amino acid, editing of off-target sites may cause undesired changes in the transcriptome.
Endogenous and Exogenous ADAR Strategies
Using endogenous ADAR strategies are attractive as this limits concerns for immunogenicity and appear to have lower off-target editing events compared to strategies using exogenous ADAR [40,57]. It remains unknown how efficacious an endogenous strategy may be in vivo, however. For clinical applications in the retina, further work to understand the expression and activity of endogenous ADAR across cell types is required. While ADAR editing has been observed in the inner retina [31,75], little is known about photoreceptors or RPE. Further, formation of dsRNA regions is similar to RNA interference mechanisms, where a small dsRNA is used to induced targeted RNA degradation [76]. While in the LEAPER system RNA interference was not observed [40], this was observed by Katrekar et al. using long 100 bp adRNAs [42].
For the correction of pathogenic mutations, editing of the coding sequence is required. As natural ADAR editing predominantly occurs in the UTR, it is not surprising that strategies using endogenous ADAR editing demonstrate higher efficiency in the UTRs and editing is limited in the coding sequence. Editing in the coding sequence may be reduced by translation, as inhibition of translation with puromycin has been noted to increase A-I editing [53]. Furthermore, editing scope will likely be limited by the codon preferences of endogenous ADARs, and endogenous ADAR strategies cannot mediate edits other than A-I.
Constructs expressing and recruiting exogenous ADAR or its deaminase domain with shorter gRNAs offer a number of advantages compared to endogenous strategies. Exogenously delivered ADAR may overcome reduced targeting scope of endogenous ADAR due to the codon preferences and potentially are better able to edit the open reading frame. Furthermore, exogenous strategies may benefit from future enzyme engineering work to expand the base targeting possibilities from solely A-I editing, as demonstrated by the recently developed C-U editors. The robust on-target editing rates observed with all of these exogenous systems are counterbalanced by relatively high off-target rates both within the guide region and throughout the transcriptome. Ongoing optimization will likely reduce this, as seen with the development of the E488Q/T375G variant. Protein localization may also play an important role in the balance between on-target and off-target editing. Natural ADAR editing likely occurs in the nucleus or nucleolus. In experiments investigating active Cas13 knockdown of RNA without a fused ADAR construct, higher RNA cleavage rates are observed when Cas13b is fused to a NES rather than a NLS [43], and a similar effect is seen with other ADAR-based RNA editing tools [42,50]. An advantage of nuclear localized ADAR editors is that these may reduce the frequency of off-target editing of flanking adenosines [42,50]. Evidence from other Cas13 constructs suggests that Cas13 may localize poorly to the nucleus using common NLS signals such as a single SV40 NLS and the nucleoplasmin NLS, and other more effective NLS strategies may need to be explored [77].
Delivery Challenges
Delivery of both DNA and RNA base-editing constructs continues to be a challenge for therapeutic use. AAV-mediated delivery is the most readily clinically translatable delivery route, and low rates of editing using AAV-MCP-MS2 and GluR2 systems have been achieved in muscle [42]. The MCP and GluR2 systems are small, and a key advantage is that they are easily packaged in a single AAV capsid. An AAV-Cas13b-ADAR DD construct packaged with appropriate tissue specific promoters targeting natively expressed transcripts has yet to be demonstrated. Cas13b orthologues identified to date for RNA targeting in mammalian cells are already relatively small (~3.2-3.5 kb) and appear to tolerate C-terminal truncations with little loss in RNA editing activity. Fused with the ADAR DD (1.15 kb), CRISPR-RNA editors delivered via a single AAV are a viable option, unlike the large DNA base editor constructs. Although the full-length REPAIR fusion (dPspCas13b-ADAR2 DD ) is 4.47 kb and unable to fit in the AAV coding capacity together with the required regulatory elements, the dCas13b∆984-1090-ADAR2 DD variant (4.15 kb) has relatively preserved editing efficiency compared to the full length construct and is theoretically small enough to package in AAV with short regulatory sequences. The C-terminal truncated mutant tested with the RESCUE system, dRanCas13b-del892-1095 is the smallest identified Cas13b-ADAR used for RNA editing to date (3.88 kb) and maintained similar editing efficiency to the full-length protein. Efforts to identify smaller Cas13 orthologues are ongoing. For methods requiring chemical stabilization of the RNA guide [40,53], AAV delivery of the guide is not possible, but there is potential for this to be delivered separately to the ADAR construct.
Conclusions
RNA editing is an exciting therapeutic prospect for the treatment of point mutations in genes implicated in inherited retinal degeneration that are too large for gene replacement therapy with AAV. The growing armament of tools for RNA editing will allow the interrogation of a range of approaches for safety and efficacy in vivo. As this requires a mutation-specific approach, it will be important to elucidate design principles for guide RNAs in order to expand this to a wide range of individual mutations in their individual sequence context. | 2020-01-30T09:13:55.827Z | 2020-01-25T00:00:00.000 | {
"year": 2020,
"sha1": "e1c4e702eb1b0a7dd4cb6d54471f7e7b5299fabe",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/3/777/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1e0352e376404aa31c51988ebd1fd0c0b2217cd",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
209959000 | pes2o/s2orc | v3-fos-license | The cost’s indicators of the homogeneous reactions in the cascade of perfect mixing reactors prediction by mathematic modelling
One of the typical chemical reactors with a set of nonlinear dynamic characteristics is the perfect mixing reactor of continuous action (PMR-C). In this regard, it is interesting to explore the known process of homogeneous reactions in PMR-C by mathematical means.
Introduction
Modeling (in the broadest sense) is one of the basic methods of research in many areas of knowledge as with its help one can provide a reliable assessment of characteristics of quite complex systems. One of the typical chemical reactors is the perfect mixing reactor of continuous action (PMR-C). Of special scientific and practical interest is examining it in real time by the means of mathematical modeling. The development of mathematical model, however, is often gives knowledge about reducing the economic costs of conducting the process. World production of acetic acid is currently over 4.0 million t per year. Acetic acid is one of the basic products of industrial organic synthesis and its derivatives are widely used in food, chemical and other industries. That is why predicting the cost's indicators of the hydrolysis process of acetic anhydride in PMR-C with the application of modern mathematical models is required.
The Results of the Development Mathematical model of the dynamics of homogeneous reaction in PMR-C is built based on the thermal and material balance, taking into account volume magnification factor [1] and reaction kinetics [2]. It is represented in the form of equations of a change in the molar share of substance over time and a change in the inner energy of ideal flow of substance.
In this case, the following assumptions are taken into account: 1) physical magnitudes of the substance are constant; 2) total reaction volume is constant; 3) the level of fluid in the reactors is the same; 4) homogeneous reaction is the reaction of first order; 5) flow rate for each of the reactors is the same; 6) thermal consumption for the reactor insulation was neglected.
is the volumetric substance consumption at the inlet and outlet of reactor, respectively, m 3 /h; is the concentration of acetic anhydride at the outlet of reactor, mol/m 3 ; is the concentration of acetic anhydride at the inlet to reactor, mol/m 3 , -mean weighted value of reaction volume magnification factor, -; k is the Arrhenius constant, s -1 ; V is the reaction volume, m 3 ; u0,i is the molar inner energy of substance at temperature T0, J; k is the Arrhenius constant, s -1 [3]. Calculation by the model was conducted by the Runge-Kutta method of third order at the following initial data: the number of rectors: one, three, five, ten of total volume 1000 ml, in the temperature range 293 -410 K. As illustrated in Tables 1-4, the cost of the process of acetic anhydride hydrolysis reduces with changing the temperature from 293 K to 338 K, and then it gradually increases. The optimum is achieved at a temperature 338 K. It is obvious that with the increased volume of the mixture, the reaction rate grows. At the same time, the speed of achieving the necessary degree of conversion decreases. In other words, at the increased reaction volume, the depth of the course of reaction decreases.
Conclusion For the cascade of reactors, the speed of achieving maximum degree of conversion compared with one perfect mixing reactor of the same volume is much higher. In comparison one with three the cost of conducting the process of acetic anhydride hydrolysis decreases considerably, then more moderately, which confirms the recommendation, for considerable reaction volumes in industry, applying the cascade of reactors or the ideal-displacement reactor. | 2019-10-31T09:06:27.910Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "8636ad98b6329ab950e6fbd983556726652a6500",
"oa_license": "CCBY",
"oa_url": "https://openreviewhub.org/sites/default/files/paper/2018/lea-2018/734/prymyskathecostsindicatorsofthehomogeneousreactionsinthecascadeofperfectmixingreactorspredictionbyma.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "13c735aaf2e6515260512b93ce5b1eab9f238eb4",
"s2fieldsofstudy": [
"Chemistry",
"Engineering",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
260876839 | pes2o/s2orc | v3-fos-license | Use of a Hindfoot Nail Without Separate Subtalar and Tibiotalar Joint Preparation to Treat Geriatric Ankle and Distal Tibia Fractures: A Case Series
Introduction Ankle fractures in geriatric patients can be devastating injuries, as they limit an individual’s mobility, autonomy, and quality of life. This study examines the functional outcomes and complications related to hindfoot nails (HFN) in geriatric patients who have suffered an ankle malleolar or distal tibia fracture. Materials and Methods This is a single-surgeon case-series of patients who underwent HFN for acute fixation or delayed reconstruction after an ankle or distal tibia fracture. Demographic information, comorbidities, baseline functional status, AO/OTA classification, surgical indications, need for external fixation, total operative time, length of stay (LOS), ambulation at discharge, and discharge disposition were recorded. Primary outcomes included 30-day complications, ambulation at follow-up, and time to fracture union and fusion. Results There were 22 patients, with average age 80.8 years. Mean LOS was 7.0 days, and 68.2% were discharged to subacute rehabilitation. Within 30 days, 1 patient developed a deep vein thrombosis and bilateral pulmonary emboli, and 2 experienced wound dehiscence requiring antibiotics. At 6-weeks, 1 patient sustained a fall with periprosthetic fracture requiring HFN revision, and another developed cellulitis necessitating hardware removal. Fracture healing was seen in 72.7% at 19.4 weeks, while radiographic fusion occurred in 18.2% at 43.0 weeks. 72.7% were ambulating with an assistive device at discharge, and 100.0% at 12-weeks post-operatively or last follow-up. Upon final examination, all patients were ambulating without pain. Discussion HFNs provide a reliable alternative to traditional open reduction internal fixation and have the ability to improve quality of life for geriatric patients through a faster return to weight-bearing. Additionally, radiographic fusion rates show that patients have favorable functional outcomes even without formal arthrodesis. Conclusion HFN is beneficial for elderly patients with low functional demand and complex medical comorbidities, as it allows for early mobility after sustaining an ankle or distal tibia fracture.
Introduction
Ankle fractures are the third most common musculoskeletal injury in the growing elderly population. 1 Depending on the characteristics of the fracture itself and the patient's goals for recovery, there are multiple approaches for addressing ankle fractures in the geriatric population. In patients with an unstable, displaced fracture, open reduction internal fixation (ORIF) remains the standard of care. 2 Though it does delay return to weight bearing by up to 12 weeks, ORIF allows many patients to achieve near-baseline levels of activity, while limiting the development of posttraumatic osteoarthritis. [3][4][5] Still, this injury can have a devastating impact on senior patients, as it limits their ability to independently perform activities of daily living and has been associated with 1-year mortality rates as high as 12%. 1 Some of the morbidity and mortality may be due to prescribed, prolonged non-weight-bearing to the affected limb after ORIF. 6,7 Studies have shown that patients older than 65 years are only compliant with non-weight-bearing restrictions 22% of the time. 8 Noncompliance can lead to prolonged bed rest and complications from immobility or failure of fixation and fracture displacement. 8 Given the reduction in quality of life and potential for serious morbidity, effectively treating ankle fragility fractures is crucial.
Recently, there has been momentum in the orthopaedic community to develop techniques and protocols that allow for immediate weight-bearing as tolerated. 6,7 Augmented ORIF and hindfoot nailing (HFN) have been proposed as alternatives that may permit earlier weight-bearing in the geriatric ankle fracture population. 2,7 HFN has been shown to provide more stability in those with poor bone quality and allow for immediate weight-bearing, but at the expense of ankle and subtalar motion. 9,10 It has also been associated with shorter hospital stays. 6 For these reasons, HFN has been explored as an alternative for acute and delayed reconstruction of ankle and distal tibia fractures. 6 Al-Nammari et al 11 reported that in their cohort of 48 elderly patients treated with HFN, 90% returned to their pre-injury level of function. Similarly, based on their randomized controlled trial, Georgiannos et al 6 concluded that there was no difference in rate of return to baseline functionality between those treated with HFN vs ORIF. Additional studies have demonstrated complication rates of HFN that are comparable to other ankle fracture fixation methods. 2 However, there is still a paucity of studies addressing the utility and safety of HFN in elderly patients, and consequently, the optimal management of fragility ankle fractures remains controversial.
Given the lack of consensus within the orthopaedic surgery community on when and in which patients to use HFN for definitive fixation of ankle and distal tibia fractures, each study contributes much-needed evidence to the question at hand. 3 Through this case series, we aim to determine whether hindfoot nails without open subtalar and tibiotalar joint preparation reliably achieve favorable outcomes, with minimal complications, in geriatric patients who have suffered an ankle malleolar or distal tibia fracture.
Materials and Methods
This study is a case-series of patients who underwent HFN as definitive treatment for fractures of the ankle or distal tibia. All procedures were performed as inpatient surgeries by a single trauma-trained orthopaedic surgeon between April 2020 and December 2021. Each patient received a Stryker T2™ Ankle Arthrodesis Nail (Kalamazoo, MI, USA), implanted with the assistance of intraoperative fluoroscopy and without a separate procedure to prepare the tibiotalar and subtalar joints. The procedure was the same for fixation of acute fractures and delayed reconstructions, as shown in Figure 1 and Figure 2. HFN was performed with the patient lying supine on a regular operating room (OR) table. The patient was positioned with the legs off the bottom of the table, from mid-calf. An intraoperative positioner (Bone Foam® Leg Ramp, Bone Foam, Corcoran, MN, USA) was used to ensure proper leg position and adequate access during placement of the nail. When removal of prior hardware was not needed, the surgery was performed in a minimally invasive, percutaneous fashion. Directly following surgery, patients were made weight-bearing as tolerated in a controlled ankle motion (CAM) boot and instructed to follow up in clinic 2 weeks post-operatively. Once the incisions healed (14-21 days), the patients were allowed to wean themselves from the boot to regular sneakers. Physical therapy was provided for patients who could not be weaned from the boot by 12 weeks post-operatively or by patient request.
In January 2022, we used the database of a single surgeon, practicing at a private health system in New York, and identified 22 patients who had undergone the above procedure. The electronic medical record was used to perform a retrospective chart review of relevant data. This included demographic and comorbidity information such as age, sex, Charlson Comorbidity Index (CCI), American Society of Anesthesiologist (ASA) class, and baseline level of mobility. 12,13 Data regarding the fracture itself included the AO/OTA classification, indications for surgery, and the need for external fixation prior to definitive treatment. 14 Outcomes to characterize the patient's hospital stay included length of stay (LOS), length of operation, ambulation at discharge, and discharge disposition. Complications were collected for 30 days postoperatively and for the duration of the patient's follow up. These included deep vein thrombosis (DVT), pulmonary embolism (PE), pneumonia, myocardial infarction, infection, hardware failure, and death. Our long-term outcomes included ambulation status and use of mobility aids at each followup visit, time to fracture union, and time to fusion. Office radiographs were used to assess union and fusion. Microsoft Excel was used for both data organization and statistical analysis.
Our cohort contained 22 patients, comprised of 15 women and 7 men, with a mean age of 80.8 years, mean CCI of 5.3 (correlating with an estimated 10-year survival of 13.2%), and mean ASA of 2.9 (Table 1). There were 2 patients with diabetes, with peri-operative HbA1c of 7.0 and 7.6. Seven patients were former cigarette smokers, 1 was a current smoker, and the average number of pack-years, reported for 62.5% of the patients with a smoking history, was 43.2 (range 17 to 120 pack-years.) The pre-operative details of each fracture are displayed in Table 2. Of the 4 patients who underwent HFN due to nonunion, 2 had an infected ORIF requiring removal of hardware and external fixation prior to definitive fixation with a hindfoot nail. Of the 18 acute ankle fractures that underwent HFN, 3 required external fixation prior to definitive fixation due to soft tissue trauma.
Results
Total operative time for our cohort was available for 21 patients and averaged 83.1 min (Table 3). There were 6 cases with OR times longer than 100 min, including 2 infected distal tibial shaft nonunions, 2 external fixation removals, 1 concurrent contralateral tibial intramedullary nailing, and 1 removal of hardware from a remote prior fracture. Excluding cases with these extenuating circumstances, the mean operative time was 67.4 minutes.
The average LOS was 7.0 days (range 2-12 days), excluding 1 patient with a 44-day LOS (Table 3). This patient sustained an open, peri-implant distal tibia fracture around a bimalleolar ankle fracture ORIF, complicated by infected nonunion. The patient presented in septic shock and underwent removal of hardware and HFN. The postoperative course was complicated by COVID-19 infection with bilateral PE's, eventually necessitating placement of an inferior vena cava filter. Upon discharge, 31.8% of patients were sent home, while 68.2% were discharged to a subacute rehabilitation facility (Table 3).
Within 30 days after surgery, 1 patient was diagnosed with a DVT and bilateral PE's, and 2 patients experienced wound dehiscence requiring antibiotics without need for operative debridement (Table 4). Both patients who developed wound dehiscence originally presented with open fractures, nonunion from failed ORIF, and required external fixation prior to HFN. One of those patients also had an infected nonunion prior to HFN. Late complications occurred at an average of 6-weeks post-surgery. One patient sustained a fall with peri-implant tibial shaft fracture that required revision HFN with a longer intramedullary component. A second patient developed cellulitis with backing out of a screw, which necessitated screw removal in the office followed by late removal of hardware with sequestrectomy for osteomyelitis. The average length of follow-up was 27.6 weeks (range 6.4 to 80.1 weeks). One patient never presented for outpatient follow-up. During Table 4). The 5 fractures that did not achieve radiographic union during the study period presented for fewer than 10 weeks of follow-up. Four of these 5 patients had transsyndesmotic bimalleolar fracture equivalents. All patients who presented for follow-up were ambulating without pain upon final examination (Table 4). Radiographic evidence of both tibiotalar and subtalar fusion was seen at a mean of 43.0 weeks in 4 patients ( Table 4). Zero patients demonstrated fusion of the tibiotalar or subtalar joints in isolation.
Prior to surgery, 40.9% of patients ambulated without assistive devices (Table 1). At the time of hospital discharge, 16 (72.7%) patients were able to ambulate with use of a walker. At 2-week follow-up, 18 of 21 patients were ambulatory with either a cane or walker. All patients who presented for follow-up were ambulatory by 6 weeks postoperatively (Table 4). At the time of latest follow up, all patients remained ambulatory, though all required some form of assistive device.
Discussion
As patients are living longer with more medical comorbidities, it is important for the management of fractures to evolve in a way that maximizes quality of life. This case series demonstrates that hindfoot nails are an effective tool to treat ankle and distal tibia fractures in geriatric patients who have multiple medical comorbidities and are consequently at increased risk for perioperative complications. 3,6 Of the 22 patients treated with HFN, the majority were indicated for acute closed bi-and trimalleolar fractures of the ankle. This finding is consistent with other literature demonstrating that in geriatric patients, unstable ankle fractures are the typical indication for surgical reduction and fixation with a hindfoot nail. 6,15 Functional outcomes in the inpatient and outpatient setting demonstrated both benefits and challenges resulting from HFN. In our cohort, 68.2% of patients were discharged to a subacute rehabilitation facility. This is comparable to a recent review of Medicare data which found that 59.2% of geriatric ankle fractures were discharged to a nursing facility. 16 After surgery, patients were immediately able to bear weight as tolerated in a CAM boot, and at discharge, 72.7% were able to walk with a rolling walker. By their 2-week follow-up, 81.8% of patients were ambulatory. The ability to immediately resume weight-bearing after HFN is critical for geriatric patients, as early mobilization has been associated with improvement in quality of life and functionality following ankle fracture. 17 In already frail patients, this also reduces the risk of muscle atrophy, which can develop with just a few weeks of disuse. 17 Of the patients who did not demonstrate radiographic fracture healing, all were ambulating without pain. It is likely that fibrous union, along with the stability afforded by the load-sharing hindfoot nail, is sufficient to produce satisfactory results in this low-demand population.
While there was no formal open or arthroscopic cartilage resection to facilitate fusion, 4 patients demonstrated fusion of their subtalar and tibiotalar joints. One of the criticisms of HFN for geriatric fractures is that, typically, no formal arthrodesis is performed. However, studies have shown that for low-demand elderly patients, fusion is not required for good outcomes. 18,19 Only one patient required removal of hardware for screw loosening, indicating that even without arthrodesis, these patients were able to achieve a stable construct sufficient for their activities of daily living.
Lastly, in concordance with other similar studies, 13.6% of our patients experienced complications within 30 days post-surgery. 3 Elmajee et al. 3 reported that based on a systematic review comprised of 7 studies, with 194 patients undergoing HFN, the overall complication rate was 16.5%, of which the most common adverse events were nail or screw breakage, return to the OR, and infection. Consistent with these results, infection requiring antibiotics, without operative intervention, was the most common complication in our series. While there is further work to be done in mitigating these negative outcomes, the complication rates seen with HFN may be lower than that of ORIF, which are reportedly as high as 36%. 6,20 These findings should be interpreted in light of the limitations inherent to a retrospective case series, namely selection and recall bias. Data extraction is limited by the information documented in the medical record at the time of patient care. Follow-up times varied greatly-ranging from 2.7 to 80.1 weeks-which limits our ability to comment on long-term complications for many patients. All surgeries in this case series were performed by a single surgeon to highlight a specific surgical technique; however, this can inhibit generalizability of findings to a broader population. HFN is also a procedure with rather narrow indications, leading to a small cohort of patients available for analysis and a limited capacity to detect complications which occur at lower rates.
Conclusion
For geriatric patients with low functional demand and complex medical comorbidities, the hindfoot nail provides a reliable means of fixation for ankle and distal tibia fractures that otherwise would have been treated with an extended period of non-weight-bearing. Patients have good functional outcomes even without formal arthrodesis. Larger prospective studies with a control group are needed to determine the most relevant factors in predicting which geriatric ankle fractures would benefit from a hindfoot nail, rather than traditional ORIF.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article. | 2023-08-14T15:02:42.869Z | 2023-08-01T00:00:00.000 | {
"year": 2023,
"sha1": "2a81899ba2488c50e277896db942362ba5d9a8d5",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "1c4b7d0f0f6c2bd741e3cb40b8a2483ef8f0df76",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
231693201 | pes2o/s2orc | v3-fos-license | Stochastic Modeling of an Infectious Disease Part III-A: Analysis of Time-Nonhomogeneous Models
We extend our BDI (birth-death-immigration) process based stochastic model of an infectious disease to time-nonhomogeneous cases. First, we discuss the deterministic model, and derive the expected value of the infection process. Then as an application we consider that a government issues a decree to its citizens to curtail their activities that may incur further infections and show how the public's tardy response may further increase infections and prolong the epidemic much longer than one might think. We seek to solve a partial differential equation for the probability generating function. We find, however, that an exact solution is obtainable only for the BD process, i.e., no arrivals of the infected from outside. The coefficient of variation for the nonhomogeneous BD process is found to be well over unity. This result implies that the variations among different sample paths will be as large as in the negative binomial distribution with r<1, which was found in Part I for the homogeneous BDI model. In the final section, we illustrate, using our running example, how much information we can derive from the time dependent PMF (probability mass function) P_k(t)=Pr[I(t)=k]. We present graphical plots of the PMF at various t's, and cross-sections of this function at various k's. A mesh plot of the function over the (k, t) plane summarizes the above numerous plots. The results of this paper reinforce our earlier claim (see Abstract of Part II) that it would be a futile effort to attempt to identify all possible reasons why environments of similar situations differ so much in their epidemic patterns. Mere"luck"plays a more significant role than most of us may believe. We should be prepared for a worse possible scenario, which only a stochastic model can provide with probabilistic qualification. An empirical validation of the above results will be given in Part III-B.
t 0 (λ(u) − µ(u)) du plays a central role, with or without the external arrivals. In Section 2, we illustrate a numerical example, by assuming a hypothetical scenario, in which a government issues a decree to its citizens to curtail their activities that may incur further infections. We show how the public's tardy response may further increase the number of infections and prolong the epidemic much longer than one might think without a quantitative analysis.
In Section 3, we seek to obtain the probability generating function for the nonhomogeneous BDI process, by solving a partial differential equation, similar to what we conducted in Part I for the homogeneous BDI process model. We find, however, that an exact solution is obtainable only for the BD process, i.e., no arrivals of the infected from the outside. We find that the coefficient of variation for the nonhomegenous BD process to be well over unity, practically for all t. This result implies that the variations among different sample paths will be as large as in the negative binomial distribution with ν/λ < 1, which we found in Part I for the homogeneous BDI model.
In the final section, we illustrate, using our running numerical example, how much information we can derive from the time-dependent PMF (probability mass function) P k (t) = P[I(t) = k]. We present graphical plots of of the PMF at various t's, and cross-sections of this function at various k's, along the axis parallel to the t axis. A mesh plot of the three dimensional array Z(k, t) P k (t) over the (k, t) plane is shown to summarize the above numerous plots.
will keep the value of the function µ(t) intact, while their insufficiency will lead to a decline or drop in the µ(t) value. The so-called "lock-down" policy is equivalent to an attempt to let both λ(t) and ν(t) reduce towards zero promptly.
We start with generalizing the model discussed in Parts I & II and obtain the expected value of the stochastic process I(t).
Derivation of I(t):
The expected value of the Infection Process I(t) Recall Eqn. (22) of Part I, Section 3.2, i.e., Let the model parameters be generalized to arbitrary functions of time t, i.e., where a(t) = λ(t) − µ(t).
The function a(t) largely determines the deterministic model, i.e., the expected value of the stochastic process I(t). This function can be alternatively written as or where τ (t) is the inverse of µ(t), representing the expected period that an infected person at time t remains infectious until his/her recovery, removal or death: and R(t) is the effective reproduction number : The value at t = 0, R(0), is referred to as the basic reproduction number (cf. [4], Eqn. (6)). A standard technique for solving the above differential equation is to obtain its homogeneous differential equation 2 first: 2 A homogeneous differential equation for y involves only y and terms involving derivatives of y, such as in d 2 y dx 2 + a(x) dy dx + b(x)y = 0. If we have c(x) instead of 0 in the RHS, it is called a non-homogeneous differential equation.
which readily leads to where Then we multiply (2) by e −s(t) , obtaining which can be written as Thus, we find where the integration constant C can be determined by setting t = 0 in the above, yielding C = I(0) I 0 : where If the arrival rate function is homogeneous, i.e., Then where In the homogeneous BDI process model studied in Parts I & II, a(t) = a = λ − µ and ν(t) = ν for all t. Then s(t) = at, and Σ(t) = 1 − e −at a .
Copyright ©Hisashi Kobayashi 2021 Thus, (14) becomes which we obtained in Part I. The above expression can be somewhat simplified, depending on the initial value I 0 of infected person at time t = 0, and whether the security control at the boundaries is perfect or not.
Let us further assume that ν(t) = ν 0 and a(t) If a(t) = λ(t) − µ(t) becomes zero at some point t * such that λ(t * ) = µ(t * ), and if a(t) < 0 for all t > t * , then eventually s(t) becomes negative at some point in time, and e −s(u) of the integrand of (15) grows exponentially. However, the multiplier e s(t) of (22) decreases negative exponentially. Consequently, e s(t) e −s(u) 1 for t > t * , resulting in In the homogeneous case, the above formula reduces to which, interestingly enough, makes (24) an exact formula (c.f. Eqn. (23) of Part I).
Derivation of A(t), B(t) and R(t)
In Part I, Section 3.2, we defined the stochastic process A(t) as the cumulative count of external arrivals of infected individuals from the outside. Let us assume that they arrive according to a Poisson process with rate ν(t). Then, it readily follows that For the time-homogeneous case, ν(t) = ν for all t, the above equation reduces to The stochastic process B(t) is defined as the cumulative account of internally infected individuals, and each infected (and infectious, as well) person will reproduce a new infection at the rate λ(t) at time t. Then B(t) = E[B(t)] should satisfy the following differential equation hence, We defined the stochastic process R(t) as the cumulative count of recovered/removed/dead (hence no longer infectious) persons. Each infected person will join this group at the rate of µ(t). Thus, we have, similarly to (29), hence, We now consider the three different situations corresponding to those discussed in the previous section: 1. When ν(t), and I 0 ≥ 1: In this case we have from which we find 2. When ν(t) > 0, and I 0 = 0: In this case, from which we obtain 3. When ν(t) > 0, and I 0 ≥ 1: For this general case, we have from which we find Since the LHS is I(t) as given in (14), we have which could have been directly obtained from the identity (18) of Part I, Section 3.2.
Daily Counts of the Infected, Recovered and Dead
Recall that the process of our interest I(t) is the number of currently infected individuals, excluding those who have recovered, removed (to e.g., hospitals) or have died. Since it represents the current total infectious individuals, it contains the most important information concerning the current and future infections. In practice, however, the statistics that are most frequently reported in mass media are: (i) Daily counts of newly infected persons; (ii) Cumulative count of infected persons up to the present; (iii) Daily counts of newly died persons; (iv) Cumulative count of deaths up to the present; (v) Daily counts of persons who are seriously ill and treated in hospitals, etc.
Copyright ©Hisashi Kobayashi 2021 In our companion paper [3], which report on our simulation study, we will show some of these statistics in terms of bar charts. Thus, it will be instructive to derive the expected values of some of these statistics of interest.
Definition 1 (New Infections). I new [t]
is defined as the expected number of newly infected persons on day t, t = 0, 1, 2, · · · . Definition 2 (New Recoveries). R new [t] is defined as the expected number of newly recovered persons on day t, t = 0, 1, 2, · · · .
With these definitions we state the following simple formulas as a proposition: Similarly, I new [t] is given by (41) Proof. The above formulas are readily found from the computation of A(t), B(t) and R(t) given in (27), (30) and (32) of Section 1.2
Application of the Time-Nonhomogeneous Deterministic Model
In this example, we present an illustrative example how the above general analytic results can be utilized by considering a situation in which a government declares a state of emergency, and requests its citizens to substantially curtail their activities so as to significantly reduce the infectious rate λ(t). The so-called lock-down corresponds to making λ(t) ≈ 0 by banning any contacts with individuals outside the household.
Computation of I(t)
Consider the infection rate function λ(t) depicted in Figure 1, in which λ(t) takes on a constant value λ 0 during the initial period 0 ≤ t ≤ t 1 (= 50). We assume that at time t 1 the government issues a decree to its citizens to significantly reduce its social contacts. Not all public may respond to the decree promptly, so we assume that it takes d days to implement the order. Between t 1 and t 1d = t 1 + d, λ(t) decreases monotonically, then takes on another constant value λ 1 for t ≥ t 1d .
By adopting a "raised-cosine" curve between t 1 and t 1d , we have smooth connections at both ends: t 1 and t 1d , but the shape of this transitional curve is not as important as the delay d value.
Copyright ©Hisashi Kobayashi 2021 As we see in the rest of this section even a small delay in implementing the government's new guideline may significantly impact the effectiveness of the decree. We consider three cases: d = 0, 5, and 10 [days], and the consequences of the different delays are shown in three different colors; cyan, red and blue, respectively. The corresponding s(t) of (10) is obtained by integrating a(t) = λ(t) − µ(t). In this example, µ(t) = 0.1 for all t: where and In Figure 3 we plot the above s(t). The fact that s(t) does not become negative until around t = 300 implies that the epidemic does not begin to die down completely until that period. We discuss the behavior I(t) for the three different cases defined in the previous sections.
By comparing the three curves in Figure 4, we note 4 The BD process is often referred to as Feller-Arley (FA) process. Figure 3.
• If the public immediately respond to the government request by reducing the infection rate λ(t) below µ(t) immediately (i.e., d = 0), the I(t) immediately begins to decrease, as shown by the curve in cyan.
Noting that the function Σ(t) reaches its plateau level 1 a 0 by t 0 (≈ 25), and remains flat until t further increases to t + ≈ 200 as seen in Figure 5. This is because in the interval (t 0 , t + ), we find that s(t) > 5, which makes e −s(t) < 0.0067. Thus, the contribution of the integrand e −s(u) , u ∈ (t 0 , t + ) to Σ(t) is negligibly small. Thus, we find that the approximation (23) is valid in [0, t + ) for any λ(t), µ(t), and ν(t), so long as these functions remain constants during the initial period [0, t 0 ).
For u > t + , the function e −s(u) grows exponentially, as s(u) continues decreasing, and eventually negative for u > 300, as seen in Figure 3. In this region, however, the function e s(t) rapidly decays towards zero. Consequently, we obtained the approximation (24) In Figure 7 we plot I(t) of (24). Because of the behavior of N (t) discussed earlier, the shape of this I(t) is indistinguishable from that of Figure 4; the functional form exp(s(t)) essentially determines the shape of the I(t)'s in both figures. Their magnitudes happen to be the same, because I 0 and ν 1 /a 1 are both equal to unity in this particular example. Needless to say, if we set ν 0 = 0.1, for instance the I(t) of Figure 7 will be scaled down to one half. 3. When ν(t) > 0 and I 0 ≥ 1:
L(t)= 0 t (u)e -s(u) du and M(t)= 0 t (u)e -s(u) du
For the numerical value of the running example, with I 0 = 1 and k 0 where The function s(t) is given by (43) with numerical values of (44) and (45). Figure 8 is a plot of I(t) of (46), and is clear that it is a sum of I(t) plotted in Figures 4 and 7. In this example s(t) ≤ a 0 t for all t ≥ 0, thus e s(t)−aot ≤ 1 for t ≥ 0. 5 Thus, I(t) of (46) is, for all practical purposes, just (1 + ko I 0 ) = 2 times of I(t) given by (21), as shown in Figures 4 and 8.
Computation of A(t), B(t) and R(t)
For ν(t) = ν 0 , A(t) is simply given by Figures 9 & 10 show B(t) and R(t) given by (30) and (32), respectively. Given A(t), B(t), R(t) obtained above and the initial condition I 0 = 0, we compute I c (t) to check the consistency among the the three stochastic processes. The I c (t) computed above should agree to I(t) originally computed by (24), which indeed can be numerically verified.
Computation of Daily Counts of the Infected, Recovered and Dead
From the formulas (40) and (41), we can readily obtain the new infections and new recoveries, as shown in Figures 11 and 12. Note that the shape of R new [t] is proportional to I(t), since µ(t) is constant in this running example, whereas λ(t) changes value from λ 0 to λ 1 during the interval t ∈ [50, 50 + d1), where d = 0 (cyan), d = 5 (red) and d = 10 (blue). The partial differential equation (PDE) that we defined in Part I, Section 3.1, Eqn.(15) can be generalized to the nonhomogeneous case as with the boundary condition Lagrange's method to solve (50) leads to the following auxiliary differential equations (see Part I, Appendix A, Eqn.(A.7)): Unfortunately, the solution form given in (A.8) of Part I does not extend to the nonhomogeneous case. From the left and middle terms of (52), we obtain If we change the variable z to x, as Kendall [5] suggests: we find which transforms (53) into which is an ordinary differential equation, similar to (2). Thus, we have, similarly to (13) where s(t) is defined in (9), and L(t) is defined, similarly to N (t) of (15), by Then we have, Let us define the function M (t) similar to the L(t) above and N (t) of (15): Then we can derive the following identity equation: In order to find a second independent solution of (52), we need to solve the differential equation or alternatively Unfortunately, neither of these equations seem unsolvable, unless ν(t) = 0. Thus, we have to be content with the nonhomogeneous BD (birth-and-death) process, discussed by Kendall [5], and Bailey [6].
Nonhomogeneous Birth-and-Death Process
Thus, we continue our analysis by assuming no external arrivals. By setting the RHS of (62) (hence (63) as well) equal to zero, we find that the second solution to the PDE (52) is simply given by We write the functional relation between C 1 and C 2 as which, together with (59), implies The boundary condition (51) gives By setting we find the functional form f (·) , using (51), as By combining (66) and (69), we obtain By substituting the following relation, which is from the identity equation (61) we find the that PGF of (70) can be written as which we now express as where with the latter being obtained from the binomial theorem for negative integer exponents 6 By summing all coefficients of the terms z k such that k = i + j in (73), we find the expression for the probability mass functions P k (t): By defining , and β(t) we find which is a nonhomogeneous counterpart of the similar expression (see e.g., Takagi (see [7], p. 92, also [1,4]) The expression (76) can be written as When I 0 = 1, which is of our interest, the above formula can be simplified to Using this, we can find an alternative way to compute the I(t), viz.
As a special case, let us consider the time-homogeneous case, for which we have s(t) = at = (λ − µ)t, and Thus, we have Then, we find that in the limit t → ∞, By substituting (83) into (78), we obtain which is equivalent to what we found in Part I, Appendix A, Eqn. (A.19), with r = 0.
The Coefficient of Variation of the Nonhomogeneous BD Process
In this section we calculate, from the PGF obtained above, the first and second moments of the infection process I(t) for the nonhomogenous case. As we did for the homogeneous case in Part I, let us take the natural logarithm of the PGF (72), and differentiate it w.r.t. z, obtaining By setting z = 1, we find which agrees with (21) of the Feller-Arley process. In order to find the variance of I(t), we differentiate (86) once again, obtaining By setting z = 1 in the above equation, we find the variance: Thus, the coefficient of variation of I(t), c I (t) is given by (90)
Numerical Analysis of Time-Nonhomogeneous Stochastic Model
Let us continue pursuing the numerical example discussed in Section 2, where λ(t) begins to decrease from λ 0 to λ 1 at t 1 = 50, as depicted in Figure 1
c I (t)= I (t)/E[I(t)]
d=10 d=5 d=0 Figure 15: The coefficient of variation of I(t), where 0 < t < 300 In the present subsection, we present three most important graphical curves which characterize important behavior of the stochastic process I(t).
The first is the behavior of the function Σ −1 (t) 1/Σ(t), where Σ(t) is an integration of e −s(t) as defined in (18), and plotted in Figure 5. The shape of Σ(t) is determined by the behavior of the function s(t) at around s(t) ≈ 0 as discussed in Section 1.1. The function Σ(t), in turn, largely determines the shapes of the functions N (t), L(t) and M (t), defined by (15), (58) and (60), respectively. In Figure 13, we plot the function Σ −1 (t)(= 1/Σ(t)).
In Figure 14, we show the functions α(t) and β(t) defined by (77). The function α(t) rapidly rises to µ 0 λ 0 = 1/3, as seen from (84), stays at this level for a long time, and then begins at t ≈ 200 to rise towards "1", which is the limit of α(t) in the regime a < 0 . From (80) we see that when I 0 = 1, the function α(t) is equal to P 0 (t).
As the second equation in (84) shows, the function β(t) quickly reaches the unit level in the regime a > 0, and stays there until t ≈ 200, and then makes a transition to the limit λ 1 µ 1 = 0.6 in the regime a < 0. Note that the transitions of α(t) and β(t) from their limit values in the first regime a > 0 to the limit values in the regime a < 0 require some time, taking as long as 200 days (i.e., from t ≈ 200 to t ≈ 400) in our running example. The horizontal axis of this Figure 14 is extended up to t = 500, whereas in most other figures of this example we plot only up to t = 300.
This transitional behavior of the functions α(t), together with the large coefficient of variation, is perhaps the most important result of the present article, that is, (i) Although the expected infection function I(t) begins to decrease once λ(t) becomes smaller than µ(t) (hence a(t) < 0), I(t) does not become sufficiently small until the function s(t) = t 0 a(u) du decreases near zero (see Figure 3). In this running example, it is not until t ≈ t 1 + 250 = 300. This can be easily computed from Figure 3, from the value of s(t 1 )(≈ 10) and the slope a(t)(≈ −0.04),i.e., s(t 1 )/|a(t)| = 250. An important observation to make is that there is a considerable delay from the moment t 1 (= 50) when a state of emergency was issued by the government until the time the infected population decreases towards zero. In our running example, it takes as many as 150 (=200-50) days until I(t) becomes very small, on average, and additional 200 (=400-200) days until the infection process is expected to come to a full end.
(ii) The transitional behavior of α(t) can be better understood by rewriting α(t) as As we noted earlier, P 0 (t), the probability that the number of infected persons at time t is zero, is equal to α(t) I 0 (c.f. (79)). Thus, the function α(t) should provide the policy makers with such crucial a guidance as how long its state of emergency declaration should be maintained.
(iii) Shown in Figure 15 is the coefficient of variation (CV) c I (t) of the process I(t), as given in (90). Note that the CV remains nearly constant (≈ 1.44 in this example) until t ≈ 200, when the CV begins to rise. The value of the CV as large as 1.44 is due to the fact the PMF P k (t) at any given t is nearly flat except its value at k = 0 (see Figures 16 through 27 in the next section). Such a highly skewed distribution with an extremely long tail implies that the I(t) obtained from any deterministic model can provide very limited information of the stochastic process I(t). In the same token, any parameter estimations obtained from the observed data of an instance of I(t) will be very unreliable, to the say the least: an estimated value of the model parameter may be significantly deviated, more often than not, from its true (unknown) value. We will defer a full discussion on model parameter estimation until Part IV [8].
It may look counter-intuitive that the CV c I (t) begins to increase exponentially after t goes beyond ≈ 200, considering that I(t) becomes practically down towards zero in this time frame. This somewhat surprising behavior can be explained by observing (89): the variance σ I (t) is nearly proportional to Σ(t).
Numerical Plots of the Time-Dependent PMF P k (t)
We obtained in Section 3.2 a closed-form expression for P k (t) = P[I(t) = k], which is given by (79).
From the assumption P[I(0) = I 0 ] = 1, it should follow that P k (0) = δ k,I 0 , where δ i,j is Kronecker's delta 7 . To verify this, consider As t → 0, α(t) → 0 and β(t) → 0. Then, all the terms in the above expression approach zero, except for the term that corresponds to i = I 0 , i.e., lim t→0 P I 0 (t) = I 0 I 0 In the limit t → ∞, we have which implies that the infection I(t) converges to zero with probability one as t → ∞. This is not surprising in view of the fact that we assume no external infected individuals (i.e., no immigration) and that we assume the effective reproduction number R(t) λ(t)/µ(t) < 1 for t > t 1 + d. In the population study, the phenomenon (94) is called the extinction of species under study. Presented below are various cross-sections of the three-dimensional array for the running example discussed in this article. The first group of the plots are the PMFs of P k (t) at various points in time t.
The second group is a set of plots of P k (t) vs. t for a given k = 0, 1, 2, . . .. Note that this is not any sort of probability distribution function.
We then finally present the surface plot of the function Z(t, k) of (95) to provide an overall picture of the time-dependent PMF P k (t).
The whole purpose of presenting these graphical plots is to show that the range of values that infinitely many possible realizations (i.e., sample paths) of the BD process I(t) may take on is so broad. Thus, the expected value I(t) or a particular instance or sample path of the stochastic process I(t) cannot be claimed as a typical instance. This implies that we must be extremely careful in drawing any sort of conclusion on the property of I(t) from a small number of realization of the process. This point will be made further clearer in the companion paper [3], where we will present extensive results on non-homogeneous BD and BDI processes. Figure 35 is a bird eye view of the time-dependent function P k (t) over the (k, t) coordinate. This plot summarizes the various plots presented above.
Discussion and Future Plans
In the present article we presented a theoretical analysis of our stochastic model of an infectious disease based on the time-nonhomogeneous BD and BDI processes.
1. First, we discussed a time-nonhomegeneous deterministic model, from which the expectation of stochastic processes of our interest (i.e., I(t), A(t), B(t), R(t)) were obtained. Among the three model parameter functions λ(t), µ(t) and ν(t), the difference of the first two, i.e., a(t) = λ(t) − µ(t), is of primary importance, and its integrated function s(t) = t 0 a(u) du, and its exponentiation play central roles in the analysis.
2. We presented a hypothetical scenario, in which a government declares a state of emergency, requesting its public to significantly reduce their activities that may incur infections. We showed how even a small delay in implementing the new order will result in a further increase in infections for a while, and prolong the period until I(t) diminishes to practically zero.
3. The analysis of the stochastic version showed that the BD process without immigration (i.e., no external arrival of the infected from the outside) is solvable exactly. An exact analysis of time-nonhomogeneour BDI process model remains an open question.
The function Σ(t) =
t 0 e −s(u) du, is the most important determines the timing and duration of the transition in the function α(t) defined by (77). The function α(t) is directly related to P 0 (t), the probability that the infection comes to a halt by time t.
5. From the graphical plots of the time-dependent function P k (t) presented in the last section reinforce our earlier claim (see [2] Abstract) that it would be a futile effort to attempt to identify all possible reasons why environments of similar situations differ so much from in terms of epidemic patterns and the number of casualties. Mere "luck" or "chances" play a more significant role than most off us are led to believe. Thus, we should be prepared for a worst possible scenarios, which only a stochastic model can provide with probabilistic qualification, such as e.g., a 95% confidence level.
Our future research plan include (i) Conduct simulation experiments for time-nonhomogeneous BD and BDI process models to empirically validate the analytic results presented in this article. The simulation will be done in a fashion similar to what was reported in Part II [2] on time-homogeneous cases.
(ii) Develop a method to estimate the model parameters from observable data in simulations.
After validating its effectiveness, we should apply the method to real data. We expect that the estimated model parameters could be used to predict the near-term behavior of the infection process I(t).
(iii) Thus far, we have primarily dealt with the process I(t), which is a Markov process. But the processes B(t) and R(t) are not Markov processes. This makes it difficult to obtain their PDFs in an exact form. An approximate analysis based on saddle-point integration will be investigated.
(iv) The main advantage of our stochastic modelling approach is that it not only allows us to better understand the stochastic behavior of an infectious disease, but also permits us to generalize the results obtained thus far to more realistic situations, because our model is intrinsically linear. One important extension will be to deal with different types of infectious diseases, and different types of infectious and infected individuals. These situations can be adequately represented by introducing different "classes" of susceptible population, and multiple "types" of infectious diseases. | 2021-01-25T02:16:01.335Z | 2021-01-22T00:00:00.000 | {
"year": 2021,
"sha1": "bd5a5d1c3cbe7b38668c333ace6502cce58d0187",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "bd5a5d1c3cbe7b38668c333ace6502cce58d0187",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
225253577 | pes2o/s2orc | v3-fos-license | Using an eye tracker to optimise career websites as a communication channel with Generation Y
Abstract This paper presents a research study detailing the procedure and results of experimental eye-tracking research to evaluate employers’ career websites. The objective of this research was to gain an insight into Generation Y’s perspective on the career websites of prospective employers. The objective was developed into several research questions and hypotheses. The eye-tracking research method was used to test the websites. The eye-tracking tests were supplemented by an in-depth interview and a standardised questionnaire with the aim of acquiring the respondents’ subjective views and preferences. The research study contributes to an understanding of how prospective employees from Generation Y view the career websites of employers and the importance of the elements presented on them; it allowed factors that affect the perceived attractiveness of career websites to be identified and provided information as to what Millennials liked the most/least about the organisations’ career websites and what would be advisable to change in order for the career websites to better serve their purpose. Based on the research findings, recommendations were made for creating attractive career websites for job seekers from Generation Y.
Introduction
A number of studies (e.g., Ehrhart et al., 2012;Kissel & B€ uttgen, 2015;Williamson et al., 2010) have shown that career websites have a significant effect on drawing the attention of prospective employees, increase the employer's appeal and thus support the hiring process.Another distinctive feature of career websites is their ability to build a strong brand for the employer and convey a large amount of information about the organisation to prospective employees (Allen et al., 2007;Baum & Kabst, 2014).Despite the number of studies on Internet recruiting, there has been little debate regarding career websites as a communication channel with Generation Y, also called Millennials.
As a result, to address this gap, this study investigates career websites as a communication channel with Generation Y and uses an eye tracker to optimise them for communication with the target group of potential employees.
The process of optimising career websites is closely connected with improving their ability to mediate and present important information and display it in such a way that will result in the active involvement of their users on the career websites (Loyola et al., 2015;Vel asquez & Rebolledo, 2010).This means that the accessibility and visibility of information play a crucial role in the process of career website optimisation (West & Leskovec, 2012).Career websites must be rich in information to attract the attention of prospective employees and increase the attractiveness of the employer.For a career website to serve its purpose, it is crucial that visitors can find all the relevant information they are looking for.It can be assumed that if the information on the employer or vacant job is not relevant for prospective employees, or if the information presented can be misinterpreted by them, this will result in a less favourable perception of the employer (Birgelen et al., 2008).Apart from the quality and sense of the information content for prospective employees, this information needs to be presented in such a way that visitors to the website can easily find it.Ease of use is therefore an important element of a career website (Williamson et al., 2003).
The objective of this study was to explore the importance of factors that influence career websites' attractiveness and to discover the main factors for Millennials' decision-making concerning their future employers in order to optimise career websites for the target group of Millennials.
To test and optimise websites, the eye-tracking method, a method of usability testing in laboratory conditions, was used (Barnum, 2011).The eye-tracking study is based on a scenario, i.e., a series of real-life situations that a typical visitor to the website can encounter (Tan et al., 2009).To ensure the test results were comprehensive, the eye-tracking test was supplemented by an in-depth interview using a standardised questionnaire, the aim of which was to ascertain the respondents' subjective positions, views and preferences (Nielsen & Pernice, 2009;Sauro, 2015).
This paper has the following structure.In the second section of this paper, literature on career website optimisation using an eye-tracking method is presented.Section 3 presents the research methodology, including the central research question (CRQ), subquestions (SQ) and hypotheses.Section 4 presents the research results.Section 5 contains a discussion.Finally, Section 6 sets out the conclusions and limitations.
Optimising career websites using eye-tracking
Eye-tracking is a modern research method used for both academic and commercial purposes (Duchowski, 2002;Nielsen & Pernice, 2009).Using an eye-tracking device, a researcher can analyse the respondent's eye movements and know exactly which area the respondent is focusing his or her visual attention on at any given moment.The results of the eye movement analysis, gained based on the user's interaction with a computer, can help to understand the respondent's cognitive processeswhat stimuli captured the user's attention, in what order, how much time was devoted to them and whether or not the user returned to them (Nisiforou & Laghos, 2013;Vel asquez, 2013).Using an eye-tracking device can reveal even the subtle phases of the cognitive process, which are difficult to track in other types of research studies (Vila & Gomez, 2016).
Eye-tracking is considered to be an objective data source for evaluating user interfaces, and it can provide information to improve their design (Nielsen & Pernice, 2009;Poole & Ball, 2005).Objectivity is regarded as being the biggest advantage of using an eye-tracking device (Djamasbi et al., 2010;Nielsen & Pernice, 2009;Wang, Yang, Liu, et al., 2014).Data obtained using an eye-tracker are rid of the respondents' subjectivity, as eye movement can hardly be manipulated.Another advantage of eyetracking is the ability to gain accurate data on what and for how long the respondents devoted their attention to.This allows the researcher to ascertain what the respondent finds interesting (Vel asquez, 2013).
The disadvantage of gaining information using this unobtrusive and sophisticated method is the inability to conduct research with a large number of respondents.This phenomenon concerns not only eye-tracking, but all neuroscience experiments, which are characterised by their complexity and time-consuming data collection (Goldberg & Helfman, 2011;Vila & Gomez, 2016).
The typical tasks for which the eye-tracking method is used can be divided into the following (Duchowski, 2002): This study uses the eye tracker for its diagnostic function.Within the diagnostic methods, we distinguish between two research goalsto analyse the attractiveness of the stimulus for the respondent or to analyse the respondent's performance (Bojko, 2009).The data used for the aforementioned analyses are the respondents' fixations and pupil diameter in response to the stimuli during set tasks (Allen et al., 2013;Jacob & Karn, 2003;Loyola et al., 2015).
This paper contains the results of the experimental use of an eye-tracking device in research of the optimisation of employers' career websites from the perspective of Generation Y, also known as Millennials.This study is a follow-up to research by Mi c ık and Mi cudov a (2018), who studied how selected employers utilise career websites to attract prospective employees from Generation Y. Using heuristic analysis, they identified employers that are most attractive based on criteria according to research conducted around the world.When choosing an employer, potential employees base their decision on information on its career website; however, they must be able to find relevant information there (Niekerk et al., 2019;Tomprou & Nikolaou, 2011).To gain a more comprehensive understanding of employers' career websites, it is necessary to see the issue also from the other side, i.e., the Millennials themselvesand this perspective was the subject matter of this research using an eye-tracking device.
Research methodology
Three employers were selected for the purpose of this research, and their career websites were subjected to analysis by the eye tracker (Djamasbi et al., 2010).The selected employers were those that got the highest score in the evaluation of their career websites in the research carried out by Mi c ık and Mi cudov a (2018), see Table 1.These organisations could be considered the most attractive employers from the point of view of Millennials.
Based on the ranking determined by Table 1, the companies Skoda Auto, ABB and O2 were selected for the purpose of career website analysis using the eye tracker.Before the research involving respondents commenced, the career websites of the three selected organisations were checked to verify whether the eye tracker's interaction with the career website presented any obstacles that would either limit or prevent its use.A limiting factor was identified on the career website of Skoda Auto, where the eye tracker did not allow a link on the career website to open in a new window, which would result in the respondents not being able to complete their task.
For this reason, Skoda Auto was removed from the research and replaced with the Lidl Company, which received the highest score in the 'career website quality' category among the companies with the same overall score.Verification of the compatibility of the Lidl Company's career website did not identify any obstacles preventing the use of an eye tracker.
This research utilised the method of eye tracking in respondents from Generation Y using an eye tracker, and to achieve comprehensive results, the research was complemented by a standardised questionnaire and an in-depth interview.Determining the gaze points and what the respondents devoted the most attention to are the main reasons for the popularity of using an eye tracker, and these are also the reasons why an eye tracker was used in our research.The growing popularity of using an eye tracker in the area of research is evidenced by the large number of research studies published in recent years (e.g., Allen et al., 2013;Djamasbi et al., 2010;Navarro et al., 2015;Wang, Yang, Liu, et al., 2014;Wang, Yang, Wang, et al., 2014).
Research objective
The objective of our research was to explore the importance of factors that influence career websites' attractiveness and to discover the main factors for Millennials' decision-making concerning their future employers in order to optimise career websites for the target group of Millennials.In other words, the CRQ was to determine how Millennials themselves view the career websites of prospective employers.This CRQ is further developed into several research questions and hypotheses (Creswell, 2014).The three employers that received the highest score for their career websites in an earlier study dedicated to employer brand building were selected for the purpose of this research.Therefore, it can be assumed that prospective employees (in this case, Millennials) can find all the relevant information on the career websites of the selected organisations that will help them decide on their next employer.
Based on research of specialised literature, it was established that the success rate of recruiting an employee through a career website is influenced mainly by three factors (Allen et al., 2013;Dineen et al., 2007;Goldberg & Allen, 2008;Lyons & Marler, 2011;Williamson et al., 2003): the objective characteristics of the jobs themselves and the organisation, subjective characteristics, and the manner of contact and communication.
Based on these three factors, the following CRQ and specific SQ were formulated: CRQ: How do Millennials themselves view the career websites of prospective employers?
SQ1: Can Millennials find all the relevant information about their prospective employers on the career websites?
SQ2: What level of usability do the tested career websites offer to Millennials?SQ3: Which career website offers Millennials the best user experience?SQ4: Which career website success factor has the biggest influence on the evaluation of the attractiveness of career websites?
SQ5: Which factors of the career websites are most important for Millennials?SQ6: How can the tested websites be improved to increase their attractiveness for Millennials?
For the purpose of a more in-depth examination of the aforementioned success factors, the following seven hypotheses were defined: H1: There is a correlation between the ability to find relevant information and the attractiveness of a career website.
H2: There is a correlation between the subjective perception of a website and its attractiveness.
H3: There is a correlation between the ease of use of a website and its attractiveness.
H4: The ability to find all the relevant information positively affects a career website's usability.
H5: The time required to find the relevant information plays an important role in the attractiveness of a career website.
H6: The time required to complete the task significantly affects the usability of a career website.
H7: There is a correlation between the time required to complete the tasks and the number of fixations during their completion.
As part of the research, it was determined whether the respondents find the career websites attractive enough to recommend them to others.For a respondent to consider a career website attractive, he or she must be satisfied with the website in general.A higher level of contentment can be expected as a result of a higher degree of the website's perceived quality (Getty & Getty, 2003;Lathiras et al., 2011).A number of research studies have proven the existence of a correlation between contentment and loyalty (e.g., Bowen & Chen, 2001;Eger & Mi c ık, 2017;Fern andez-Ucl es et al., 2019;Hallowell, 1996;Xu & Goedegebuure, 2005).
Due to the nature of the variables and the number of observations, the nonparametric Kendall rank correlation test was used to test hypotheses H1 to H6. Kendall's Tau is suitable for measuring the association between two category variables with no limitations in terms of the number of observations, as is the case of, for example, the Pearson chi-squared test of independence.The coefficient values range between <-1;1>.The coefficient calculation depends on the so-called concordant and discordant pairs, expressed by the formula (Hendl, 2009): where n c represents the number of concordant pairs, n d is the number of discordant pairs and n represents the total number of observations.To test hypotheses H5 to H7, it was necessary to convert the continuous variable of time into a category variable.According to Sturges' rule (Hendl, 2009): where k represents an appropriate number of intervals and log n is the log base 10 of the number of observations.To test hypothesis H7, Pearson's correlation coefficient, which belongs to parametric tests used for determining the correlation between two continuous variables, x and y, was used (Agresti, 2013): The metrics chosen for this research to answer the research questions and verify/ disprove the hypotheses were the number of fixations, the task completion time, the results of the standardised test SUPR-Q (see further in the text) and the results of the qualitative in-depth semi-structured interview.
Compared to the newly created questionnaires, standardised questionnaires generate more reliable data and the ability to compare the results with other competitors (Hornbaek, 2006).One of the most frequently used standardised questionnaires is the System Usability Scale (SUS) (Brooke, 1996); however, it only evaluates the usability of websites.Although this parameter is a crucial characteristic of websites, it is not the only important characteristic.Apart from evaluating websites' usability, SUPR-Q also evaluates their credibility, appearance and the aspect of building loyalty.The goal of the SUPR-Q questionnaire is to provide a comprehensive evaluation of one's user experience with a website (Sauro, 2015).A SUPR-Q score is suitable to be used for comparing websites with websites with a similar focus.The output score indicates whether the examined website is better or worse compared to relevant websites.
It is recommended to conduct research using the SUPR-Q repeatedly and thus indirectly follow one's competitors and trends in this dynamically developing area, with the result representing important feedback from users-customers.
The number of items in the questionnaire was increased from the basic set of 7 þ 1 to 9 þ 1 as per Sauro (2015).Items 1 to 9 of the questionnaire are evaluated on a five-point scale (from I strongly disagree ¼ 1 to I strongly agree ¼ 5), while the 10th item is derived from NPS and uses the scale of 0 to 10 (not at all likely ¼ 0, extremely likely ¼ 10).To determine the overall score, the points from the first nine questions are added up, to which 1 = 2 of the value of the 10th item is added (i.e., the score values can range between 9 and 50).This score then needs to be compared to the industry score, i.e., that of competitors' websites.When the value exceeds 75%, the website ranks among the best compared to other websites from the same industry, and reaching a median score puts the website half way through the quartile containing 50% of the examined industry websites.If the website finds itself in the lowest quartile, it ranks among the worst 25% of websites of the particular industry.Therefore, the output can be used as a benchmarking tool to compare websites with the best (Sauro, 2015).
Research participants and testing lab
In the case of quantitative studies, the recommended number of respondents is between 30 and 39, mainly to generate a robust and representative heat map (Nielsen, 2012;Nielsen & Pernice, 2009).In their conclusion, these authors add that data from 10 respondents are sufficient to gain a satisfactory heat map.A heat map created based on data from 10 participants brings virtually the same results as a heat map created using 30 or more research study participants.
The total number of respondents who took part in this research study was 18, of which 11 were women and 7 men aged between 21 and 24.All the participants were university students (master's degree program) from Generation Y. Due to the timeintensive nature of the research and the need to motivate respondents to take part in this type of study, all participants were given a caf e voucher for refreshments in the value of EUR 5 as an incentive.
It took approximately 45 minutes to test one respondent (three tasks, each of which took approx.15 minutes to complete).A total of 54 measurements (3 Â 18) were conducted within the study.At the beginning, the respondents received information about the study and its goals and were acquainted with the eye tracker and the testing procedure.
The research was conducted in laboratory conditions.The room was modified to meet the requirements for eye-tracking testing (Nielsen, 2012;Nielsen & Pernice, 2009).The testing was carried out using the VT 3 mini Eye Tracker manufactured by Mangold International (Arnstorf, Germany).This device belongs to unobtrusive research tools (Djamasbi et al., 2010;Schiessl et al., 2010;Vila & Gomez, 2016) that do not distract the respondent or cause them discomfort, and thus do not negatively affect the testing procedure.
Research design
Eye-tracking research meets the essence of an experiment, which is why many authors consider it to be experimental research (e.g., Djamasbi et al., 2010;Duchowski, 2002;Eger, 2018;Wang, Yang, Liu, et al., 2014;Wang, Yang, Wang, et al., 2014).However, this research study uses only one group of respondents who were selected based on certain attributes and characteristics (Millennials about to graduate and actively search for employment).For this reason, it is the so-called quasi-experiment (Gray, 2009).Experimental methods always study the correlation between two variables, one of which is dependent (in this case, it is the eye fixation and pupil diameter) and one independent (in this case, the career websites).The purpose of the experiment is to test the impact of a certain factor (independent variable) on another factor (dependent variable).The results of the experiment can be distorted as a result of the effect of the surrounding environment; therefore, it is important that the researcher tries to eliminate this effect and ensure the validity of the results (Kozel, 2006).One thing that typically influences this type of experiment is the order in which individual tasks are completed.This influence can be avoided by the socalled balancing, i.e., different respondents completing the tasks in a different order (Walker, 2013).To further reduce the influence of random aspects and increase the validity of the experiment, this research study used repeated measurements (Hendl, 2014).Repeated measurements means that all the respondents complete the same tasks on the same websites, i.e., all the respondents follow the same scenario.Unlike aimless browsing of a website, the scenario-based method is more effective, as in real life, a website user always aims to complete a certain task (Nielsen & Pernice, 2009).This eye-tracking research study utilised the pre-experimental design (Creswell, 2014).Although all the respondents were exposed to the same scenario, in order to increase the validity of the measurements, the respondents experienced different treatment during the experiment.In order not to give one organisation an advantage over the others by following the same scenario (e.g., following the identical instructions procedure in the A-B-C sequence, etc., and the experience gained by respondents during the testing) and to avoid any reduction in the reliability of data, the following six groups of testing options were created by on varying the sequence of the three companies: X represents the exposure of the group to the variable (in this case, the career website of an organisation), and O represents the results measured by the eye tracker.Each group consisted of three respondents.
A number of authors recommend using some of the conventional methods of usability testing in combination with the eye tracker in order to increase the validity of the research (L opez-Gil et al., 2010;Nielsen & Pernice, 2009;Schiessl et al., 2003).Therefore, following the completion of the scenario, the respondents were given a SUPR-Q standardised questionnaire related to the just-finished eye-tracking test.
Once the questionnaire had been filled out, the respondents were asked a series of qualitative questions as part of a semi-structured interview, which is a common component of eye-tracking research.This retrospective interview, which took place after the completion of the scenario and questionnaire, was chosen instead of the classic think-aloud protocol (e.g., Elling et al., 2012;L opez-Gil et al., 2010;Van Waes, 2000), which is administered during the completion of the scenario.Verbalising one's thoughts, assumptions and opinions during a think-aloud protocol, the respondents may be distracted, which would result in distortion of the experiment's results.As the description of the research study design suggests, the entire research utilised a mixed research design (Walker, 2013).
Test preparations
Prior to the execution of the eye-tracking research study, a questionnaire pilot study and pre-research were carried out (Disman, 2000).The questionnaire pilot study was conducted by two experts and professionals in the area of online marketing.Based on this pilot phase, only minor modifications were made to some of the questionnaire items.The pre-research represents a test of the tools that are to be used in the research itself.The goal is to identify any problems that could arise in other research phases, or to identify any potential problems related to data collection (Baker, 1994;Chr aska, 2016;Chrom y, 2014;Gray, 2009).As part of the pre-research, the functionality of the selected websites was tested, as well as the intelligibility of the eye-tracking scenario, the users' ability to work with the eye tracker, and the interaction between the eye tracker, user and the websites.This was followed by a test of the intelligibility and unambiguity of the questions in the questionnaire and the semi-structured interview.The pre-research was carried out with four respondents from the target group.
Testing procedure
The eye-tracking study was conducted in the form of a quasi-experiment, with the independent variable being the individual websites and the dependent variable represented by the data measured, such as the length of fixation, the number of fixations and the amount of time required to complete the tasks.To reduce any risk of the experiment being distorted by any effects of the environment, the methods of balancing and repeated measurements were used.The participants were informed about the purpose of the study, and its procedure was explained to them.They were acquainted with the eye tracker and its capabilities.This was followed by calibration of the eye tracker for the particular respondent, which on average took one minute.All calibrations were successful.The respondents then proceeded according to the scenario.Each participant tested all three websites one at a time.Following the completion of all three scenarios, the participants were asked to fill out a standardised questionnaire and subsequently underwent a semistructured interview.The qualitative interview did not take place during the eye-tracking test itself to avoid any distraction.If during the interview a respondent could not remember a particular detail, they were shown recorded footage of them browsing the specific website.Following the completion of all three parts of the experiment (eye-tracking, questionnaire and interview), each participant was thanked for taking part in the survey and given a voucher for refreshments at the caf e.
Research results
The time required to complete the various tasks and the number of fixations related to individual tasks are shown in Tables 2 and 3.
Table 2 shows that the study participants spent between 0.73 and 2.58 minutes completing the various tasks.The average task completion time was 1.92 minutes in variant A, 1.73 minutes in variant B and 1.36 minutes in variant C. The overall Table 3 shows that the number of respondents' fixations ranged between 41 and 110 during the completion of the various tasks.The average number of fixations per task was 89 in variant A, 70 in variant B and 72 in variant C. The total average number of respondents' fixations was 446 during variant A, 352 during variant B and 359 during variant C.
Figures 1 and 2 show heat maps expressing the level of visual attention dedicated to particular objects on a website.A heat map can be interpreted as the degree of visual attention devoted by respondents to certain areas of the website (Djamasbi et al., 2010;Wang, Yang, Liu, et al., 2014).The intensity of attention, expressed by aggregated data on eye fixations, is represented by colours.The darkest colour represents the highest level of attention, while the lightest colour represents the lowest level of attention.In this case, Figures 1 and 2 serve as an example of a heat map.
Figures 1 and 2 indicate what topics in the area of CSR the respondents found most interesting.The darker the colour, the more attention was paid to a particular area.In this case, the respondents were interested in the CSR activities of the companies O2 and Lidl.Table 4 provides an overview of the respondents' answers to questions 1-9 of the SUPR-Q standardised questionnaire.
Table 5 contains responses to question No. 10 in the standardised SUPR-Q questionnaire.
In 10 out of 51 cases of the total number of measurements (19.6%), respondents were unable to find all the information about their prospective employers that they were looking for.These cases concerned the career website of employer B (ABB) and employer C (O2).The respondents were able to find all of the information sought only on the career website of employer A (Lidl).It is important to mention that all of the information sought was present on the employers' websites.Thus, it can be said that although the tested websites contained all the relevant information for Millennials, they were able to find it effortlessly only on the career website of Lidl (SQ1).
The results of the career website evaluation for the three companies are shown in Tables 6-8.The tables also contain the average evaluation of the companies' websites from the research study published by Sauro (2015), thanks to which we have an indication of how the evaluated organisations can be compared based on their career websites.However, this is merely an indication, as Sauro's study dealt with the companies' websites in general, while this research study dealt specifically with their career websites.
It is apparent from the tables above that Lidl achieved the best results.These are above-average results even in comparison with a study conducted by Sauro (2015).In each evaluated category, Lidl got more points than an average website in the given study.The other two organisations did not reach the average value in any of the evaluated categories, and from the point of view of SUPR-Q methodology, their evaluation is below-average compared to the values in the study conducted by Sauro (2015).
Looking at the results of the tested career websites, we can formulate answers to SQ2 and SQ3.When evaluating the level of usability, it very much depends on what we define as the standard evaluation.If we consider as standard the results of the study carried out by Sauro (2015), the usability of the career websites (SQ2) of ABB and O2 would be evaluated as below-average and the usability of Lidl's career website as above-average.However, it would be advisable to create a new standard for such specifically focused websites to serve the specific purpose of evaluating career websites.When comparing the usability of the three tested career websites, it can be said that the usability of Lidl's career website is much higher than that of ABB's and O2's career websites.
The overall evaluation of the user experience can be based on the overall absolute SUPR-Q evaluation of the individual career websites.Lidl achieved the highest score (43.9), followed by O2 (32.9) and ABB (32.2).Just as in the previous case, it can be said that the best user experience (SQ3) for prospective employees is offered by the career website of the Lidl Company.
The identification of factors that have the biggest effect on the evaluation of the attractiveness of the career websites (SQ4) was based on the theory stated in the Introduction, based on which hypotheses H1 to H3 were formulated.Hypothesis H4 is based on the assumption that if a prospective employee cannot find specific information, even though that information is available on the website, the prospective employee's decision is based on incomplete information that he or she has found (Turban, 2001), which reduces the website's usability.Other variables whose influence is often examined in similar types of studies (e.g., Djamasbi et al., 2010;Nisiforou & Laghos, 2013) are time and fixation.If time has an influence on attractiveness, it should also have an influence on the usability of career websites.Hypotheses H5 and H6 were formulated based on these assumptions.Researchers dealing with eye tracking often focus their research on the correlation between the number of fixations and the complexity of websites (e.g., Chassy et al., 2015;Wang, Yang, Wang et al., 2014).For this reason, we used the H7 hypothesis to verify the assumption that there is a correlation between the number of fixations and the attractiveness of the website.This hypothesis is based on the idea that if a respondent is not only browsing, i.e., scanning a particular website, but deliberately fixes his/her visual attention on a specific piece of information in order to learn more details, this indicates that the respondent finds the given website interesting (Jacob & Karn, 2003;Pan et al., 2004;Wang, Yang, Liu et al., 2014).
Table 9 contains the results of tests of the formulated hypotheses, with the results calculated using the Statistica software application.
Table 9 clearly suggests that there is a mutual correlation between the attractiveness of a career website and the ability to find relevant information (H1), the attractiveness of a career website and the subjective perception thereof (H2), and the attractiveness of a career website and its ease of use (H3).Thep-value of these correlations indicates a statistically significant association.The s coefficient reached values of over 0.6, which signifies a strong correlation.The strongest association was identified between the attractiveness of the career websites and the ability to find relevant information on them.Of the three factors tested, this factor has the greatest effect on the evaluation of the attractiveness of the career websites (SQ4).The test results corroborate the validity of the formulated hypotheses H1 to H3.
A statistically significant correlation was identified between the ability to find all the relevant information and the usability of the career websites.The correlation strength of 0.793 indicates a very strong association between the variables.Hypothesis H4 is thus corroborated.
The highp-value and the low value of the association test for hypotheses H5 and H6 indicate that there is no significant statistical association between the tested variables.Based on the test results, the effect of time on the attractiveness and usability of the career websites is thus insignificant.
A very high statistical dependence was identified between the number of fixations during the completion of individual tasks and the time required to look up relevant information while completing those tasks (H7).During the tests, only situations when respondents fixed their gaze on a particular area for at least 300 ms were taken into consideration.A longer duration of browsing the website thus suggests a heightened interest on the part of the respondent in information on the particular website.
Answers to the research questions SQ5 and SQ6 come from a semi-structured indepth interview.In this interview, the respondents were asked the following questions: Note: H1 to H6 represent the results of the tests of the hypotheses using Kendall's Tau.H7 represents the result of testing using the Pearson correlation coefficient.The correlation between the two values, expressed by X, was not determined.Source: Authors.
which factors of an organisation's career website are most important for them when searching for employment (Q1), whether while browsing the career website they encountered any problematic areas that would be worth improving to better capture the attention of prospective employees (Q2), which information they would have liked to have found on the career website ( Q3), what, from their point of view, they liked the least/most about the career website (Q4).
The first question (Q1) is a general question aimed at identifying the main factors that prospective employees pay attention to when choosing an employer through its career website.The most frequently mentioned factors are as follows: corporate culture (corporate social responsibility being the most frequently mentioned one), financial compensation and fringe benefits, sufficient amount of information about the company, sufficient amount of information about the vacant positions, personal and professional development, and work/life balance.This list of factors is the answer to SQ5.The list of factors important for making a decision about one's employer contains factors that employers should utilise on their career websites in order to attract the attention of prospective employees from Generation Y.
Others concerned only the career website of a particular organisation.The following text summarises the key answers to questions Q2 to Q4.The answers are listed in the order assigned to the individual organisations, from A to C.
Lidl career website
When browsing the career website of Lidl, respondents did not come across any problematic areas and found all the information about the employer they were looking for.Respondents most appreciated the fact that the website was well-organised, clear, intuitive, interactive and, most of all, highly informative.The fact that there were no problems identified and the very good evaluation of the website corresponds with the overall evaluation of the career website mentioned above, according to which Lidl achieved the highest score of all the organisations tested.
ABB career website
The most problematic area of the career website of the ABB Company was its language.The career website intended for the Czech Republic contains a number of texts in English, and some hyperlinks (e.g., links to the organisation's corporate culture and corporate social responsibility) refer to the career website intended for American job seekers.The fact that relevant information in Czech is missing may cause one to question whether the Czech subsidiary of ABB is even engaged in the area of corporate social responsibility or what the organisation's corporate culture is like in the Czech Republic.Respondent No. 7 made an interesting comment: 'It's good to know that ABB is engaged in corporate social responsibility in the USA, but I would be much more interested in (ABB's) activities in the Czech Republic.After all, I will be working here, not across the ocean'.Another problematic area of the career website of the ABB Company was, according to respondents, its infrastructure.Some information was difficult to find, as the website is poorly organised and the names of categories were difficult to understand, which reduced the ability to orient oneself on this website.On the ABB career website, respondents missed more detailed information about the company and job opportunities at ABB.Another thing that was missing was more detailed information about the possibility of one's personal development.What the respondents liked the least was the aforementioned switching from Czech to English, the poor organisation and complex structure.What the respondents liked the most was the website's modern design.
O2 career website
The most problematic area of the career website of the O2 Company was access to it, i.e., the link to the career website from the corporate website.The link to the career website is usually situated in the top or side menu, but in the case of O2, the link was placed at the bottom of the page, and what's more, it was printed in very small letters.If a prospective employee did not know about the existence of a link to the career website, it is quite likely that he or she would give up searching for it after a while.This is reflected in a comment made by respondent No. 9: 'If I hadn't had the information that the website does contain the link, I wouldn't have found it'.The 'hidden' link also has another negative effectthe respondents felt that the company was not at all interested in hiring employees: 'It appears that selling monthly plans and phones is more important for them than hiring new staff'.
Figures 3 and 4 show the location of hyperlinks to the career websites analysed in this study and the process of locating the link to the career website by respondents in the case of the O2 corporate website.The generated heat maps are from the first five seconds of browsing in the case of Lidl and ABB.In the case of O2, the heat maps are from the first 5, 15, 25 and 35 s of browsing.
Figure 3 shows that on the corporate websites of Lidl and ABB respondents were able to locate the links to the career websites without any problem in the first five seconds.In both cases, the respondents' attention was mostly focused on the top half of the page, specifically on the top menu.Figure 4 displays the respondents' difficulty in finding the link to the O2 career website from the company's corporate website.Figure 4 clearly shows that respondents expect to find the link to the career website in the upper half of the screen (the darkest colour represents the highest intensity of the respondents' visual attention), which is common with the remaining career websites.Then, they scrolled down and expected to find the link in the bottom navigation menu, which consists of several lists.However, the menu is located in the page footer, even below the link to social media.Locating this link took respondents an average of 31 s (Figure 4).This represents a serious problem because as several respondents indicated in their answers, they would leave the page thinking that there is not a career website link if they had not been informed about this during the scenario.After being redirected to the career website, the respondents were able to find all the information they were looking for.Just as in the case of the ABB career website, they did not like the poor organisation and the subjectively perceived complexity of the O2 career website.On the other hand, the respondents appreciated the sophisticated and modern graphic design of the website.
The answer to SQ6, i.e., what modifications to the tested websites would increase their attractiveness for prospective employees, can be found in the text above.In the case of ABB, it would be advisable to make sure that prospective employees can find all the relevant information on the career website intended for the Czech Republic in Czech.It is also necessary for their career website to contain information about the organisation's corporate culture and activities in the area of corporate social responsibility in the Czech Republic.Another thing that the organisation should focus on is making the website better organised.As far as infrastructure is concerned, a suitable tool for improving it is, for example, the card sorting method (Wood & Wood, 2008).It is also crucial that the website contains more detailed information.In the case of the O2 website, first and foremost it is necessary to improve access to the company's career website.Users unfamiliar with the website will most likely not find the link to the career website.That means that prospective employees would not be able to find anything about O2 as an employer and O2 would lose the opportunity to attract prospective employees.Just as in the case of ABB, it would be advisable to make the career website better organised.
Discussion
The results show that information availability, subjective perception and ease of use each play an important role in career websites' attractiveness.When determining the effect of various factors on the attractiveness of career websites, the most significant attractiveness factor identified was the ability to find relevant information on the given websites (0.688).Career websites are a public source of information that conveys a lot of information.From a potential employee's point of view, this information can already form part of the basis upon which the future psychological contract may be built.This means that information availability can act as a signal message that determines employer attractiveness to potential employees (Niekerk et al., 2019;Tomprou & Nikolaou, 2011).
The ability to find relevant information was followed by the subjective perception of the career websites (0.645) and their ease of use (0.611).This is in line with the conclusion made by Williamson et al. (2003), Dineen et al. (2007) and Allen et al. (2013).Potential employees pay attention to the content provided and evaluate it; these evaluations influence the attractiveness of the websites.Our results differ from Lyons and Marler (2011), who on the other hand concluded that aesthetic features of websites are more important for job seekers than the websites' employment content.
The ability to find relevant information also has a major influence on the usability of career websites (0.698).As information channels become more complex, it is increasingly important to understand how people navigate themselves in locating the desired information.We answer to West and Leskovec (2012), which called for further investigation, which we focused on a specific target group of people.Our result supports the conclusion made by Cober et al. (2004) and Allen et al. (2013).A strong dependence was also identified between the number of fixations during the completion of individual tasks and the time required to find relevant information while completing those tasks (0.793).What that means is the longer the respondents browsed a particular website, the more interesting they found it.This result is in line with Wang, Yang, Liu, et al. (2014).In our case, however, more neuromarketing research would be needed.There was no correlation established between the factor of time and the attractiveness of the career websites (-0.112) and their usability (-0.139).In this case, this finding does not support the assumption made by Nisiforou and Laghos (2013).
This leads to several conclusions: the more informative and better organised a career website is, the more attractive it becomes in the eyes of Millennials, whether or not Millennials consider a website attractive also depends on their subjective feeling and the ease with which they can find the required information on the website, the time required to look up information does not have a significant influence on the attractiveness or usability of a website, as long as Millennials can find the information.
The main factors for Millennials' decision-making concerning their future employers based on their websites were identified in this research.These factors, together with the rich informational value of the career websites, can help achieve the two main goals of career websitesto attract the attention of a prospective employee to the career website for the purpose of evaluation of the provided information about the organisation, and to provide a sufficient amount of information in such a way that it convinces prospective employees to apply for a job at the particular organisation.
The research also identified the main problematic areas of the examined career websites.By eliminating these, employers could make their career websites more attractive for the target group of Millennials.Based on the identified deficiencies in the research presented above, general recommendations can be made for creating attractive career websites for the target group of Generation Y: for the purpose of good accessibility, it should be easy to locate the link to the career website, i.e., it should be placed in the main menu, a career website should present factors that Millennials find important when making a decision about their prospective employer, a career website intended for the recruitment of Czech employees should be in Czech; a career website should be sufficiently well-organised so that all the relevant information can be easily found; if a prospective employee cannot find a particular piece of information, even though the website contains this information, the job seeker's decision is based on incomplete information about the employer and the usability and attractiveness of the website is thus reduced, a career website should also be sufficiently intuitive and its contents interactive, a modern design for career websites is a must these days.
These outcomes are also confirmed by Gunesh and Maheshwari (2019), who conclude that career websites are not sufficiently attractive without interactive content such as videos and animations.The core values and factors that matter to potential employees must be clearly displayed on the career websites and detailed information must be provided.This fact is stressed by several studies (e.g., Williamson et al. [2003], Dineen et al. [2007] and Allen et al. [2013]).
Conclusions and limitations
The eye-tracking research into the optimisation of career websites as a communication channel with Generation Y produced knowledge of how prospective employees from Generation Y perceive the career websites of selected employers and the importance of the elements presented on these websites, which factors influence the perceived attractiveness of the career websites, what in their point of view they liked the most/least and how to improve the websites so that they better serve their purpose.
Although all the examined career websites contain all the information that Millennials consider important when deciding on their future employer, it was possible to locate this information without any problems only on the career website of the Lidl Company.The comparison of the websites leads to the conclusion that when it comes to the evaluation of the usability and user experience, the Lidl career website achieved the highest score.This website could thus serve as a benchmarking model for improving the two remaining career websites.
Our findings also provide practical implications for the development and use of Web-based organisational career sites.Job searchers are influenced by information content, perception and ease of use.The study provides a complex eye-tracking methodology to optimise career websites for a specific target group and insights into the perception of career websites by the target group of Millennials.The studies referenced in the paper (Williamson et al., 2003, Backhaus et al. 2004;Ehrhart et al., 2012;Kissel & B€ uttgen, 2015;Williamson et al., 2010) employed only a quantitative methodological approach which resulted in a dearth of qualitative data.This research attempted to fill this gap by employing a qualitative approach.Also, in contrast to previous studies, this study contributes to e-recruitment literature by providing a target group's perspective of the most important factors of career websites.For practitioners, it can also serve as encouragement to make use of eye-tracking studies to improve the career websites to attract more potential employees from Generation Y.To get more precise results in the future, neuromarketing research would be needed in combination with the eye-tracking method.
This research has several limitations.Firstly, it is focused on the Czech Republic and on a population with a higher education level.Secondly, students from only one faculty were chosen as respondents for this study, as they are attractive in the labour market.Using students as respondents has both advantages and disadvantages; for instance, it can reduce the findings' external validity and generalisability (Berthon et al., 2005;Sivertzen et al., 2013).This fact reduces the ability to generalise the results for all Millennials in the Czech Republic, or even all Millennials at universities.Another limitation of the research is the limited number of its respondents.This limitation is due to the experimental nature of the research and the demands of working with an eye tracker.
To conclude, it needs to be said that the goal of this research was not to analyse and evaluate in an exhaustive way the career websites of a large number of organisations, but rather to prove in a sample of selected organisations the possibilities of experimental eye-tracking research in combination with a qualitative in-depth semistructured interview to deliver a comprehensive evaluation that could significantly contribute to the optimisation of companies' career websites.
average variant completion time by respondent was 9.63 minutes for variant A, 8.65 minutes for variant B, and 6.82 minutes for variant C.
Figures 1 and 2 .
Figures1 and 2. The heat maps of the O2 and LIDL career websites -CSR.Source: Authors.
Figure 3 .
Figure 3.The heat maps of Lidl and ABB websites when locating the link to their career websites.Source: Authors.
Figure 4 .
Figure 4.The process of locating the link to the O2 career website in the first 5, 15, 25 and 35 s.Source: Authors.
Table 1 .
The most attractive employers based on the evaluation of their career websites.
Table 2 .
The time required to complete the various test variants.
Table 3 .
The number of fixations in the individual test variants.
Table 4 .
Answers to questions 1-9 in the SUPR-Q questionnaire.Likert scale of responses 1 ¼ I strongly disagree, 5 ¼ I strongly agree.Source: Authors.
Table 5 .
Responses to question 10 in the SUPR-Q questionnairethe attractiveness of the career websites.
Table 9 .
The results of tests of the formulated hypotheses. | 2020-08-27T09:13:19.233Z | 2020-08-25T00:00:00.000 | {
"year": 2021,
"sha1": "0656cd7bf4de4b21e65e48d26f3544554935e5f9",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/1331677X.2020.1798264?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8ab087f026be6fde4eab249a138b0754df86690e",
"s2fieldsofstudy": [
"Business",
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
11827508 | pes2o/s2orc | v3-fos-license | promoter is more efficient than CMV promoter to express H 5 hemagglutinin from influenza virus in baculovirus as a chicken vaccine
Background: The worldwide outbreak of influenza A (H5N1) viruses among poultry species and humans highlighted the need to develop efficacious and safe vaccines based on efficient and scaleable production. Results: White spot syndrome virus (WSSV) immediate-early promoter one (ie1) was shown to be a stronger promoter for gene expression in insect cells compared with Cytomegalovirus immediate-early (CMV) promoter in luciferase assays. In an attempt to improve expression efficiency, a recombinant baculovirus was constructed expressing hemagglutinin (HA) of H5N1 influenza virus under the control of WSSV ie1 promoter. HA expression in SF9 cells increased significantly with baculovirus under WSSV ie1 promoter, compared with CMV promoter based on HA contents and hemagglutination activity. Further, immunization with baculovirus under WSSV ie1 promoter in chickens elicited higher level anti-HA antibodies compared to CMV promoter, as indicated in hemagglutination inhibition, virus neutralization and enzyme-linked immunosorbent assays. By immunohistochemistry, strong HA antigen expression was observed in different chicken organs with vaccination of WSSV ie1 promoter controlled baculovirus, confirming higher efficiency in HA expression by WSSV ie1 promoter. Conclusion: The production of H5 HA by baculovirus was enhanced with WSSV ie1 promoter, especially compared with CMV promoter. This contributed to effective elicitation of HA-specific antibody in vaccinated chickens. This study provides an alternative choice for baculovirus based vaccine production. Background The spread of highly pathogenic avian influenza A (H5N1) viruses from Asia to the Middle East, Europe, and Africa poses the threat of an influenza pandemic. Vaccination of poultry is an effective measure to control virus spread [1]. Current production of inactivated influenza vaccine requires high-level biocontainment facilities and large numbers of embryonated chicken eggs, while baculovirus surface displayed recombinant hemagglutinin may be an attractive alternative to the effective influenza vaccine [2-5]. Published: 31 December 2008 BMC Microbiology 2008, 8:238 doi:10.1186/1471-2180-8-238 Received: 22 August 2007 Accepted: 31 December 2008 This article is available from: http://www.biomedcentral.com/1471-2180/8/238 © 2008 He et al; licensee BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Background
The spread of highly pathogenic avian influenza A (H5N1) viruses from Asia to the Middle East, Europe, and Africa poses the threat of an influenza pandemic. Vaccination of poultry is an effective measure to control virus spread [1]. Current production of inactivated influenza vaccine requires high-level biocontainment facilities and large numbers of embryonated chicken eggs, while baculovirus surface displayed recombinant hemagglutinin may be an attractive alternative to the effective influenza vaccine [2][3][4][5].
White spot syndrome virus (WSSV), a major pathogen in shrimp, can infect a wide range of invertebrate tissues and cells. WSSV genome has 9 repeated regions similar to those of baculovirus, suggesting the potential to exploit WSSV promoters in baculovirus and insect cell expression system [6,7]. Baculovirus produces high yield of foreign soluble protein in insect cells and mediates efficient transduction of mammalian cells. Thus, it is widely used as a vaccine production system [8]. WSSV ie1 promoter was reported as one of the strongest promoters in insect cells [9,10]. However, no documented report has compared the activity of WSSV ie1 promoter with other promoters in vaccine production. In this study recombinant baculoviruses were constructed under WSSV ie1 promoter, in an attempt to establish a novel platform for efficient antigen expression. These recombinant baculoviruses were further evaluated in the hemagglutinin production of H5N1 influenza virus.
The influenza virus HA glycoprotein has receptor-binding activity and mediates viral-endosomal membrane fusion during viral entry and serves as the primary target for neutralizing antibodies [11,12]. HA protein from H5N1 influenza virus expressed in baculovirus mediated by WSSV ie1 promoter can be displayed on baculovirus surface without disrupting its authentic cleavage, hemagglutination activity and immunogenicity [13]. Besides, baculovirus pseudotyped with the vesicular stomatitis virus glycoprotein (VSV G) emerges as a promising gene-delivery vector by virtue of its capability in transducing numerous mammalian cells [14,15]. Coexpressed with VSV G in baculovirus, the HA protein could be delivered into host cells to elicit immune response in a long term. For the efficient HA delivery to target cells, an active promoter is required in both vertebrate and invertebrate species.
The current study compared WSSV ie1 promoter with CMV promoter in the context of baculovirus vector for the efficient expression of HA protein from H5N1 influenza virus as a surface-displayed immunogen in SF9 (Spodoptera frugiperda) cells. Further studies on immunogenicity were performed for these baculovirus vaccines under WSSV ie1 promoter in chickens. The results demonstrated that HA of H5N1 influenza virus could be more efficiently produced by baculovirus with WSSV ie1 promoter, which serves as a safe vaccine in chickens and provides effective immune protection from avian influenza.
WSSV ie1 promoter mediates efficient protein expression in SF9 cells
In order to investigate whether the relative strength of the promoter was cell type dependent, a plasmid containing WSSV iel promoter (phRL-ie1) for luciferase expression was transfected into CEF and SF9 cells to test luciferase activity, in comparison to CMV (phRL-CMV). Luciferase activity, indicating intracellular luciferase quantity, was presented in folds of the basic value set in the system. Hence, a link was established between promoter activity and luciferase activity. SV40 promoter was used as a control promoter in both insect and mammalian cells. Vero cells were used to normalize transfection efficiency. CMV promoter activity (mean 87 folds, SD 5.3) was much weaker than the WSSV iel promoter (mean 1610 folds, SD 26.4) in SF9 cells. In CEF cells, the WSSV iel promoter activity (mean 6195 folds, SD 156.8) was slightly less than the CMV (mean 12715 folds, SD 258.8) (Fig 1). The data indicated that the WSSV iel promoter activity was strong in insect cells, in which CMV promoter activity was weak. Furthermore, WSSV ie1 promoter was found to be active in all of the vertebrate cells tested here, including human TK-143b, monkey Marc145, Vero, porcine PK15 and carp epithelioma papillosum (EPC) (data not shown). This property of WSSV ie1 promoter renders it a promising candidate for efficient protein expression in baculovirus infected SF9 cells.
WSSV ie1 promoter stimulates strong H5 hemagglutinin expression in baculovirus
To further compare WSSV ie1 promoter with CMV promoter in the efficiency of protein expression, four recombinant baculoviruses were constructed, termed as vAc-ie-HA and vAc-CMV-HA expressing HA; vAc-G-ie-HA and vAc-G-CMV-HA coexpressing VSV G protein with HA for gene transduction [16,17] (Fig 2). To confirm the activity of WSSV ie1 promoter in SF9 cells as shown in luciferase test, SF9 cells were infected with the four recombinant baculoviruses individually. The infected cells were fixed and subjected to antibody staining 3 days postinfection. 3D4 and 8B6 are two different H5-specific monoclonal antibodies used in these studies [18]. As shown in Fig 3A, indirect fluorescence signals from HA protein were strong and sharp by recombinant baculoviruses with WSSV ie1 promoter. HA expression was detected in cells infected with CMV promoter-controlled baculoviruses though the fluorescence signals were diffused and faded. For those baculoviruses with VSV G cassette, the staining with anti-VSVG antibody verified the successful coexpression of VSV G protein and suggested the selected promoter has no effect on the infection efficiency (Fig. 3A).
Baculovirus-expressed HA sustains hemagglutination activity. The HA titer of baculoviruses under different promoters was evaluated with the same number of virus particles. 25 μl of baculovirus at a titer of 10 10 PFU/ml was used in the standard hemagglutination assay. Data were collected from 4 parallel assays. As shown in Figure 3B, constructs under WSSV ie1 promoter gave a higher mean hemagglutination titer of 1:256 (vAc-ie-HA) to 1:320 (vAc-G-ie-HA), while those under CMV were at a mean titer of 1:44 (vAc-CMV-HA) to1:48 (vAc-G-CMV-HA) (p < 0.05). Coexpression of VSV G protein had no significant effect on the hemagglutination result (p ≥ 0.359). Besides, at a mean titer of 1:112, vAc-pol-HA, a HA-expressing construct under baculovirus polyhedrin promoter, was included in this hemagglutination test as a system control, since it is widely used in recombinant baculoviruses for HA production [8]. The HA titer of vAc-ie-HA was higher than vAc-pol-HA at the same virus copies (p < 0.0001), indicating its advantage in HA production.
To verify this result, recombinant virus copies and HA contents were measured in a time course study during baculovirus infection. As shown in Fig. 3C, the temporal kinetics of the growth curves for these viruses were similar [13] among the three promoters studied here. However, differences were found in HA production with the three promoters (Fig. 3D). With WSSV ie1 promoter, the HA content in virions was up to 6.6 ug/ml (SD 0.56) corresponding to the virus titer of 10 9 PFU/ml at 96h postinfection. The HA content of polyhedrin promoter was 5.05 ug/ml (SD 0.48) at the similar virus titer, while the HA production of CMV promoter was around 2 ug/ml (SD 0.40) at the same collection time. In ANOVA test, p values for comparisons among three promoters at each time point were less than 0.005 and the differences between each two groups were considered as significant. (For N tests, p < 0.005 is significant at the overall 0.05 level with Bonferroni adjustment.) Taken together, these results indicated that WSSV ie1 promoter can induce more abundant H5 hemagglutinin expression in baculovirus with hemagglutination activity, in comparison to polyhedrin promoter, as well as CMV promoter.
Immunogenicity of H5 hemagglutinin expressed by WSSV ie1 promoter in chickens
The immunogenicity of baculovirus under WSSV ie1 promoter or CMV promoter was subsequently investigated through intramuscular (IM) or intranasal immunization (IN) of 2-week-old chickens with purified live virions without adjuvant. The live H5N1 vaccine (VNH5N1-PR8/ CDC-RG) was used as the positive control, while PBS vaccinated chickens served as negative controls. The same H5N1 (VNH5N1-PR8/CDC-RG) strain was inactivated with BEI (binary ethylenimine) [19] as another control. The serology assays performed here were based on five different serum samples from five individual chickens in the each group (95% confidence interval is between 8.72 and 35.02). To determine the neutralizing antibody level in those chicken sera, HI tests were performed with H5N1 (VNH5N1-PR8/CDC-RG). As shown in Table 1, the WSSV ie1-type baculoviruses elicit higher HI titer than CMV promoter in serum samples. Coexpression of VSV G also contributes to an increase in anti-HA antibody level with HI activity. In addition, to confirm this result about the neutralization activity, a standard micro-neutralization test was performed with H5N1 (VNH5N1-PR8/CDC-RG) in MDCK cells. The sera induced by WSSV ie1-controlled Comparison of promoter activity of WSSV ie1 and CMV promoter in luciferase assays in different cell lines Figure 1 Comparison of promoter activity of WSSV ie1 and CMV promoter in luciferase assays in different cell lines. The relative luciferase activity was expressed as fold activity 24 h post-transfection, and the data were from three transfections. The transfections were performed with the reporter plasmid phRL with WSSV ie1, CMV and SV40 promoter individually. CEF: transfections in chicken embryo fibroblasts; SF9: transfections in SF9 cells. Vero: transfections in Vero cells. p. value is less than 0.005, when WSSV ie promoter was compared with CMV promoter in SF9 cells, evaluated in t-test.
Polyhedrin locus
HA-displaying baculoviruses showed a higher neutralization titer than those from CMV promoter, while coexpression of VSV G protein enhanced the neutralization titer in samples of both WSSV ie1 and CMV promoters. Further, serum samples were tested in ELISA. Serum samples from the second collection were diluted at 100 fold in PBS and tested for the anti-HA antibody level. For those samples from intramuscular injection (Fig. 4A), at the same dosage of virus inoculated, the chickens immunized with those baculoviruses under WSSV ie1 promoter developed higher antibody response than those under CMV promoter (p < 0.05). Moreover, coexpression of VSV G protein contributed to an increase in anti-H5 antibody level (p < 0.05) due to the transduction mediated by VSV G protein. In the lack of constant virus replication in vivo, antibody levels of these WSSV ie1 baculovirus immunized chickens were relatively lower than those of live H5N1 (VNH5N1-PR8/CDC-RG) infected chickens (p ≤ 0.09), but they were higher than those of immunized chickens with inactivated H5N1 (VNH5N1-PR8/CDC-RG) at the same protein dosage (p < 0.05). For those intranasally immunized chickens by baculovirus, lower IgG response was detected compared with intramuscularly injected chickens (Fig. 4B). Furthermore, cut-off value of this ELISA was determined as 0.3 based on tests with healthy new-born chicken serum. To further evaluate the HA-specific antibody level in sera, the dilution factor of each serum sample was recorded (Table 1) at the value beyond 0.3 in ELISA. The data obtained in this method is consistent to the results from other tests performed here. All of these findings indicated that efficient production of HA by Efficient production of activated HA protein of influenza virus by WSSV ie1 promoter in baculovirus A WSSV ie1 promoter in baculovirus allowed it to be exploited as a vaccine production platform.
Significant antigen expression in chicken tissue was induced by HA-VSV G coexpression constructs
To further study HA-transduction in immunized chickens and detect antigen expression in tissue based on vaccination, immunohistochemistry of frozen tissue sections from chickens was performed with an H5-specfic monoclonal antibody 2 weeks after the second vaccination. In addition, the recombinant baculovirus under WSSV ie1 promoter was shown to bring strong antigen expression in chicken tissues, which depends on its activity of HA expression in both of insect and chicken cells.
Discussion
The recombinant baculoviruses under WSSV ie1 promoter presented here, present advantages in HA production and gene transduction, relying on its promoter efficiency in both vertebrate and invertebrate species. In this study, CMV promoter was used for major comparison because it displayed activity in both mammalian and insect cells as indicated in luciferase assays and it is widely used for protein expression and gene transduction in numerous cell lines [20][21][22]. However, CMV promoter might be relatively weak for protein expression in insect cells. Therefore, WSSV ie1 promoter was also compared with polyhedrin promoter from baculovirus in HA expression in insect cells. The results confirmed the role of WSSV ie1 promoter as an efficient promoter for baculovirus mediated protein expression. HA expressed in baculovirus served as exogenous antigen to stimulate primary immune response, while HA de novo expression in chicken tissue will contribute to the trigger of new HA-antibody production for further protection in a long run [18,20]. For gene transduction, although CMV promoter is stronger in chicken cells than WSSV ie1 promoter, as indicated in luciferase assays, such advantage could be offset by the stronger HA expression on viral surface with WSSV ie1 promoter, which eventually leads to enhanced HAspecific immune response in chickens [23].
To further verify those advantages brought by WSSV ie1 promoter in vaccine production, the immunogenicity of these baculovirus-based immunogens was studied with chickens. In the comparison of different promoters in the same type of baculovirus construct, vaccine dose was based on virus copies rather than protein contents in order to differentiate the HA production efficiency by different promoters with the same amount of baculovirus copies (10 9 PFU) [8]. As shown here, at the same dosage of baculovirus, constructs with WSSV ie1 promoter elicited better immune response than CMV promoter, confirming the higher HA expression level by WSSV ie1 promoter. Meanwhile, when the comparison was performed between baculovirus and attenuated H5N1 influ- As the direct evidence for gene transduction, HA antigen expression in tissues was revealed in IHC assays. The results were repeatedly observed in most chickens from each group and considered to be significant. For intranasal inoculation, lung is the major organ to directly contact immunogen upon vaccination, where VSV G mediates virus entry and HA expresses with its individual promoter. Also, as the major organ involved in immunity of avian species, thymus supports exogenous antigen expression [24][25][26]. Therefore, for intramuscular injections, most of HA expression was detected in chicken thymus.
Immunogenicity of HA-expressing baculoviruses
Although baculovirus-expressed hemagglutinin influenza vaccines have been widely used and well characterized in different ways and under variant vector designs [5,8,[27][28][29], innovative methods are under investigation for more efficient HA production at a higher hemagglutination titer. WSSV ie1 promoter supports the abundant production of HA in baculovirus system, as compared with other promoters tested here. This allows it to induce higher level of specific antibody response in immunized poultries at the same number of baculovirus copies. In addition, compared with inactivated H5 influenza virus vaccines at the same dose of HA protein (p < 0.05), data shown here indicates that WSSV ie1 promoter-mediated baculovirus vaccine could present better immunogenicity without biosafety concerns in vaccine preparation [30]. This also suggests that there could be some other properties of WSSV ie1-controlled baculovirus contributing to better immunogenicity. One possibility is that the surface-displayed HA in baculovirus sustains its natural conformation upon vaccination due to the obviation of the inactivation process in the baculovirus-type vaccine production [8]. Future studies will focus to identify whether other properties of WSSV ie1 promoter support strong immunogenicity. Taken together, our studies provide an alternative choice for the efficient production of surfacedisplayed HA with baculovirus. Its vaccine potential was primarily studied in chickens, which might throw light on its promising trials in humans.
Conclusion
With WSSV ie1 promoter, the recombinant baculovirus provided an efficient and expeditious method in vaccine production, compared with traditional means. WSSV ie1 promoter-mediated baculovirus conferred better immunogenicity in chickens upon vaccination in the light of the efficient HA expression in both insect and chicken cells. This study fully characterized the capacity of baculovirus featuring WSSV ie1 promoter in antigen production and immune response elicitation in chickens, suggesting it could be a promising choice as an efficient vaccine production system.
Viruses and cells VNH5N1-PR8/CDC-RG obtained from Center for Disease
Control (USA) is a non-pathogenic H5N1 influenza virus. PR8 strained-based reassortant virus comprises of the HA and NA gene of AIV H5N1 virus infecting human in Vietnam (A/Vietnam/1203/04). The virus was grown in the allantoic cavities of 10-day-old embryonated eggs (Chew's Poultry Farm, Singapore).
Luciferase activity assay
Renilla luciferase activity was measured with the Dual-Luciferase Reporter Assay System (Promega, Madison, WI) according to the protocol (Technical Manual, #TM040) using a Luminometer (Berthold, Lumat LB 9507, ITS Science & Medical PTE LTD) [31]. DNA was isolated from WSSV-infected shrimps using DNeasy tissue kit (Qiagen). Luciferase reporter plasmids were constructed by inserting WSSV ie1 promoter into KpnI-Hind III sites of phRL vector. Vector pRL-SV40 and pRL-CMV were provided in the kit. Cells were lysed in 1 × lysis buffer (50 μl/ well) for 15 min at room temperature and each cell lysate was added into the luminometer tube containing 100 μl of assay reagent. The mixture was mixed quickly by flicking for 2 s, and placed in the luminometer for 10 s measurement. Transfection efficiency was normalized using pRL-SV40 in Vero cells, in which SV40 promoter drives the firefly luciferase reporter gene. Data (mean + SD) were collected from triplicate assays of three independent transfections.
Construction of recombinant baculoviruses
For the generation of the recombinant baculovirus vectors, as mentioned before [13], AcMNPV polyhedrin promoter-controlled vesicular stomatitis virus glycoprotein (VSV G) [17] expression cassette and WSSV ie1 promoter or CMV promoter-controlled HA expression cassette (Figure 1) were inserted into the shuttle vector pFastBac1 and integrated into the baculovirus genome within DH10BAC™ according to the protocol of Bac-To-Bac system (Invitrogen). HA gene in our study was amplified from a Vietnam strain (A/Vietnam/1203/04/H5N1) with the multibasic HA cleavage site in a standard PCR method (94°C 20 sec, 55°C 30 sec and 72°C 2 min for 30 cycles). CMV-HA cassette was amplified with a primer set of 5' ACGCTACGTATAGTTATTAATAGTAATCAA 3' and 5'ACGTGCGGCCGCTTAAATGCAAATTCTGCATTGTAAC GATC3' from vector pCMV-EGFP (BD clontech) with HA gene. CMV promoter in the vector is Human cytomegalovirus (CMV) immediate early promoter without intron. The CMV-HA cassette was inserted into pFastBac1 at the SnabI-NotI site. HA was also inserted into pFastBac1 vector with polyhedrin promoter at the NotI-SalI site to generate vAc-pol-HA recombinant baculovirus.
Recombinant baculovirus purification
Infected SF9 cells with individual recombinant baculoviruses were harvested at 96 h postinfection and subjected to freeze-thaw cycles for cell lysis. The cell lysate was spinned at 1000 g for 5 minutes to remove cell debris. From the supernatant, the virus was purified by sucrose gradient ultracentrifugation following standard protocols [8] and the purity was determined by SDS-PAGE. Virus titer was determined in standard plaque assays with SF9 cells according to baculovirus construction protocol (Invitrogen, No.10359). Hemagglutinin contents in purified virions were estimated by densitometric analysis of stained gels following electrophoresis with Quantity One software (Bio-Rad).
All of the baculoviruses were used without inactivation. As a control of protein vaccine, H5N1 (VNH5N1-PR8/ CDC-RG) strain was inactivated with 0.3 M BEI (binary ethylenimine) incubated at 37°C overnight, according to the previous protocol [19].
Animal experiments 14-day-old chickens (Chew's Poultry Farm, Singapore) received two doses of vaccines or PBS at intervals of 14 days by intramuscular injection or intranasal inoculation. 5 chickens were in each group. Each chicken was vaccinated with purified live baculoviruses without adjuvant at the dose of 10 9 PFU based on virus copies (100 ul, 10 10 PFU/ml), or with influenza virus without adjuvant at the dose of 6 ug based on HA content or 10 5 TCID 50 based on infectivity. The sera were collected two weeks after each vaccination for evaluation. Chickens were killed for dissection two weeks after the second vaccination.
Approval for the animal experiments was obtained from Institutional Animal Care and Use Committee in Temasek Life Sciences Laboratory, Singapore (the approved project number TLL-07-007).
Serological assays
To inactivate non-specific inhibitors, chicken sera were treated with receptor destroying enzyme (RDE, Sigma) by incubation at 56°C for 30 min. Hemagglutination inhibition (HI) tests were carried out in microtitre plates with 1% suspension of chicken red blood cells. For neutralization tests, 2 × 10 4 /ml of MDCK cells were allowed to grow to 70% -90% of confluence. Allantoic fluids with H5N1(VNH5N1-PR8/CDC-RG), using a series of dilutions factors from 10 -1 to10 -8 , were tested for TCID 50 . Using Reed and Muench mathematical technique., the infectivity titer was expressed as TCID 50 /100 μl and the viruses (VNH5N1-PR8/CDC-RG) were diluted to having 100 TCID 50 in 50 μl. After which, 100 TCID 50 viruses were incubated with chicken serum for 1 h at 37°C and inoculated into MDCK cells. The cells were then incubated at 37°C and CPE was observed at 96 h post-infection. Besides, a cell-based ELISA were performed to determine neutralization titer according to standard procedures [32,33]. Guinea pigs were immunized with a live nonpathogenic virus (VNH5N1-PR8/CDC-RG) at 6 ug of HA contents and bled after two injections. The guinea pig IgG was purified from serum using protein A affinity column (Sigma, USA) in accordance with manufacturer's instructions. Enzyme-linked immunosorbent assays (ELISA) were performed by antigen-capture methods with purified anti-H5N1 IgG from guinea pig (150 ng/well) as capturing antibody. The plate was then incubated with purified virus (VNH5N1-PR8/CDC-RG) as antigen. The chicken serum samples were added and anti-chicken IgG secondary antibody HRP (Sigma, 1:3000) was used to develop signals with OPD substrate (Sigma, 1 tablet in 20 ml water).
Immunofluorescence assays SF9 cells were infected with HA-expressing baculovirus. They were fixed at 3 days post-infection with 100 μl of absolute ethanol for 10 minutes at room temperature. Cells in 96-well plates were then washed 3 times with PBS, pH 7.4. Subsequently, the fixed cells were incubated with 50 μl of anti-H5 [18] or anti-VSVG monoclonal antibody (Sigma) for 1 hr at 37°C. After 3 washings, the antigens were incubated with fluorescein isothiocyanate (FITC)conjugated anti-mouse Ig (1:100 DAKO, Denmark) for 1 h at 37°C. The cells were observed under fluorescence microscope.
Immunohistochemistry
Chickens were dissected after 2 weeks from the second vaccination and a series of organ tissues were collected, including brain, kidney, liver, lung and spleen. They were in the form of frozen sections. Commercially available immunoperoxidase staining system (Dako Cytomation EnVision + System-HRP (AEC)) was used for these specimens according to instructions in the kit. This is a twostep staining technique. to recognize bound antibodies. based on a horseradish peroxidase labeled polymer which is conjugated with secondary antibodies.
Statistical analysis
Welch's t test, which is the two-sample t-test that does not assume equal variances between groups, was performed to determine the level of significance in the difference between means of two groups (GraphPad, Software). One way ANOVA was performed by using ANOVA test calculator (Danielsoper, Software) online and the level of significance of difference in multiple comparison was determined according to Bonferroni adjustment (α = 0.05) for multiple comparisons if applicable. 95% confidence interval was determined using survey calculator online (Creative research systems). | 2018-05-31T03:58:28.579Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "17ef3a7ea38a667d5302c2013dd5c2a3c79ef9ed",
"oa_license": "CCBY",
"oa_url": "https://bmcmicrobiol.biomedcentral.com/track/pdf/10.1186/1471-2180-8-238",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "404e73eb405d548bf6fe785e22bf203c9e94be6c",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": []
} |
4226959 | pes2o/s2orc | v3-fos-license | Effect of the Chemical Structure of m and p N-vinylbenzylidene of 5-methyl-thiazole and 1 , 2 , 4-triazole on Antimicrobial Activity
New Schiff bases heterocyclic structures have been synthesized and evaluated for their antibacterial and antifungal properties. They were prepared by a condensation reaction of 2-amino5-methylthiazole, 4-amino-4H-1,2,4-triazole and 3-amino-1H-1,2,4-triazole with p-vinylbenzaldehyde 1p and m-vinylbenzaldehyde 1m. The synthesized compounds were characterized by 1H-NMR, 13CNMR, FT-IR and UV–Vis. The compounds were examined for antibacterial and antifungal activities in vitro using the disc diffusion method. Activity against two bacterial strains (gram positive bacteria and gram negative bacteria) and two fungal strains is discussed. These compounds are active against assayed bacteria (Pseudomonas aeruginosa ATCC 26883 and Staphylococcus aureus ATCC4330,) and fungal strain (Candida albicans ATTC 10231) with minimal inhibitory concentration (MIC) value of 10 μg/mL.
INTRODUCTION
The world health organization defines the antimicrobial resistance (AMR) as the ability of microbes to resist the effects of drugs aimed at destroying them 1 .Indeed, the micro-organisms including bacteria, fungus, viruses and certain parasites are not affected by the drugs used to eliminate them.The treatment becomes inefficient and the infections they cause persist [2][3][4] .Although the microbial resistance development of any organism is a natural phenomenon, human actions could substantially accelerate it.Actually the research in pharmaceutical chemistry has mainly targeted the synthesis of new chemical compounds.The ultimate objective of which is to have hydrolysable therapeutic supports that offer versatile physico-chemical properties that exhibit high liberation efficiency 5 .
Recently, we have synthesized the isomers via the grafting of p-vinylbenzaldehyde and m-vinylbenzaldehyde on the active ingredient.Note that the currently used isomers in the field of drug research are either the para or the meta as requested by the in vitro and in vivo pharmacological tests on the starting molecule.In this investigation we have chosen the imine function as the labile linkage for the grafting of the 1,2,4-triazole and thiazole heterocyclic.The literature reveals that Schiff bases are important intermediates for the synthesis of some bioactive compounds 6 .They have demonstrated a versatile interesting biological actions including antibacterial, antifungal and anticancer [7][8][9] .The Schiff bases derivatives of 1,2,4-triazole and thiazole are also associated with a variety of applications in biology, clinical and pharmacological domains [10][11][12] .
In this paper we describe the synthesis of Schiff bases derived from 1,2,4-triazole and thiazole.The influence of the para and the meta substitution on the antibacterial and antifungal activity of the prepared Schiff bases is carried out.
MATERIALS AND METHODS
2-amino-5-methylthiazole, 4-amino-4H-1,2,4-triazole, 3-amino-1H-1,2,4-triazole, 4 -c h l o r o m e t h y l s t y r e n e ( C M S 6 0 % ) a n d m-vinylbenzaldehyde were purchased from Sigma Aldrich.Melting points were determined by REICHERT (N°184321) apparatus and are uncorrected.IR spectra were recorded on KBr discs using a JASCON FT/IR4200 spectrophotometer. 1 H-RMN and 13 C-RMN spectra were recorded (in CDCl 3 / DMSO-d 6 ) as a solvent on a Bruker AC spectrometer at (200, 400, 75.5)MHz using TMS as an internal standard.UV spectra in ethanol solvent were taken on a SpectroScan 80DV UV spectrophotometer.
Synthesis of p-vinylbenzaldehyde (1p)
p -vinylbenzaldehyde was synthesized by using a mixture of 4-chloromethylstyrene (CMS 60%) and hexamethylenetetramine (HMTA) according to the method described by Sommelet 13 .The final product was distilled under reduced pressure using a vacuum pump.This compound was obtained as a yellow oil; (Yield: 79%), IR (KBr) cm -1 : 2825.20
General procedure for the preparation of Schiff bases compounds
A mixture of equal volumes of heterocyclic amine (0.02 mol), and vinylbenzaldehyde (1p and 1m) (0.02 mol) in the presence of some traces of 2,6-di-t-butyl catechol as the polymerization inhibitor, and 4-5 drops of glacial acetic acid used as reaction catalyst in 30 mL of absolute ethanol was refluxed for 3 h in water bath as shown in the Scheme 1.The resulting solution was concentrated in vacuum and cooled down in a freezer for 24 h.The precipitated product was filtered, washed with cold absolute ethanol and then dried.
Antimicrobial activities
The newly synthesized compounds were screened in vitro for their antimicrobial activity using disc diffusion method 14,15 .The following strains of bacteria and fungi were used as test microorganisms: Pseudomonas aeruginosa ATTC 26883 (Gram-positive), Staphylococcus aureus ATTC 4330 (Gram-negative), Candida albicans ATTC 10231 and Aspergillus niger ATTC 10404.The synthesized compounds were dissolved in sterile dimethylsulfoxide (DMSO) at different concentrations of 10, 50 and 500 µg/mL for bacteria and Candida albicans ATTC 10231 and 6.6 x 10 7 spores /mL for Aspergillus niger ATTC 10404.The antimicrobial activity of the compound that penetrates into the
Chemistry
The Schiff bases were synthesized by a condensation reaction of heterocyclic amine and vinylbenzaldehyde para-and meta-substituted as shown in the Scheme 1.
H NMR spectra
The products were identified using 1 H NMR spectroscopy in chloroform and DMSO.The signals spectra were divided into four categories of protons, namely those of the vinyl, the benzene ring, the imine function and the heterocyclic groups.All the protons of the imine function CH=N of the synthesized products appear as a singlet with near signals having a slight difference between 8.57 and 9.32 ppm for 3p and 4p.The protons of the aromatic ring represent a system of AA'XX' for the Schiff bases para-substituted but those of the meta-substituted ones form a multiplet of four protons.The chemical shifts of the vinyl groups are almost constant for all the products.A doublet at 5.42 ppm, a doublet at 5.90 ppm and a double of doublets at 6.77 ppm.However, different signals appear for the substituents of the heterocyclic rings depending on the type of their structure for example a singlet of three protons located at 2.49-2.40ppm for the methyl group of 2m and 2p respectively as shown in the fig 1.Another singlet at 14.09-14.07ppm is assigned to (NH) of 4m and 4p.
C NMR spectra
The 13 C NMR spectra clearly confirm the presence of the azomethine function.It has almost the same chemical shift for the isomers containing the same heterocyclic ring.The values of 170.98-171.14ppm are assigned to 2m and 2p structures, respectively.
FT-IR spectra
The FT-IR spectra of the Schiff bases synthesized revealed the absence of carbonyl (C=O) stretching vibrations expected at 1701.87 cm -1 .In contrast, a strong new band ranging from 1613.16 to 1599.66 cm -1 is present which is due to the azomethine (-CH=N-) linkage.The bands at 890, 808.99 and 712.56 cm -1 are assigned to (C-H) out of plane bending of the meta-substituted benzene ring.Whereas, the strong band appearing at 850.45 cm -1 , is assigned to the para-substituted benzene ring.The (NH) stretching vibration of the 4p and 4m is visible at 3240.79 -3146.29 cm -1 .
UV/vis spectra
The UV/vis spectra of the products 2, 3 and 4 were carried out in absolute ethanol at a concentration of 10 -4 mol L -1 at room temperature.All heterocyclic compounds present strong bands at l max 361 and 279 nm.They are assigned to the electronic transition p→p* of azomethine.The parasubstituted isomers exhibit a higher intensity than their respective isomers.This is due to the higher degree of molecular planarity.Shifts of the absorption bands of medium intensity between l max 245 and 224 nm are assigned to a locally excited state of the benzal part of the molecule.These results agree with the UV spectra of aromatic and aliphatic Schiff bases reported in literature [16][17][18] .
Antimicrobial activities
The newly synthesized compounds were screened in vitro for their antibacterial activity against gram positive bacteria Staphylococcus aureus (ATTC 26883) and gram negative bacteria pseudomonas aeruginosa (ATTC 4330).Their antifungal activity against Candida albicans (ATTC 10231) and Aspergillus niger (ATTC 10404).The results obtained are presented in tables 1 and 2.
The majority of the synthesized products exhibit an antibacterial activity as revealed by the values of the inhibition diameters zones.
All the products tested at the three indicated concentrations are very active against Gram negative bacteria Pseudomonas aeruginosa).Generally, the products activity as tested against Staphylococcus aureus is moderate and is dependent on the type and concentration of product used.Exception to this tendency is noticed for the products 2p and 2m which showed a relatively good effect on the bacteria.The heterocyclic compounds tested in vitro against both fungal strains prove moderately active on the growth of the yeast Candida albicans.In contrast, they do not show any activity against filamentous fungi Aspergillus niger except the product N-(4vinylbenzylidene)-5-methyl-thiazol-2-amine 2p.The microbial inhibition was due to the nature of the heterocyclic thiazole which contains electron donating methyl group -CH 3 present in the cycle.It has been demonstrated that the electron donating groups increase the electron density which renders the product efficient against the microorganisms 19 .In addition, the presence of the sulfur atom enhances the antimicrobial efficiency of the molecule 20 .
The heterocyclic thiazoles are considered as interesting biocides 21 .The minimum inhibitory concentration (MIC) the growth of bacterial strains and that of the yeast Candida albicans is 10 µg/mL whatever the product under test.This concentration seems sufficient to inhibit the majority of the microbial strains.
The low antifungal activity and sometimes its absence is probably due to the resistance of fungal strains 22 .The results of the present investigation show that the heterocyclic compounds oriented to the para of the phenyl ring are more efficient than those oriented to the meta.This difference might originate from the fast release of the imine link CH=N of the active ingredients when the phenyl ring is para oriented.
CONCLUSIONS
In this work the new heterocyclic Schiff bases derived from 5-methyl-thiazole and 1,2,4triazole were successfully synthesized with a high yield.The spectroscopic analysis including 1 H-NMR, 13 C-NMR, FT-IR and UV-Vis confirmed perfectly the expected chemical structures of these compounds.The study in vitro of antimicrobial activities showed that they exhibit antibacterial and anticandidal properties except with the fungal strains of Aspergillus niger.
Scheme 1 :
Scheme 1: Synthesis of heterocyclic Schiff bases in para-and meta-substituted | 2018-03-25T07:24:24.277Z | 2016-08-23T00:00:00.000 | {
"year": 2016,
"sha1": "71c6483f3e3ba94d2cad841ce145d99fc5e00cc0",
"oa_license": "CCBY",
"oa_url": "http://www.orientjchem.org/pdf/vol32no4/OJC_Vol32_No4_p_2043-2049.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "71c6483f3e3ba94d2cad841ce145d99fc5e00cc0",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
119495997 | pes2o/s2orc | v3-fos-license | High-frequency oscillations in low-dimensional conductors and semiconductor superlattices induced by current in stack direction
A narrow energy band of the electronic spectrum in some direction in low-dimensional crystals may lead to a negative differential conductance and N-shaped I-V curve that results in an instability of the uniform stationary state. A well-known stable solution for such a system is a state with electric field domain. We have found a uniform stable solution in the region of negative differential conductance. This solution describes uniform high-frequency voltage oscillations. Frequency of the oscillation is determined by antenna properties of the system. The results are applicable also to semiconductor superlattices.
I. INTRODUCTION
There is a number of materials with narrow energy band of the electronic spectrum for electrons moving in certain directions. The narrow band may result either from low-dimensional structure of the crystal or from artificially made periodic superstructure like semiconductor superlattices. Typical examples of low-dimensional crystals are layered materials such as high-Tc superconductors, transition metal dichalcogenides like TaS 2 , NbSe 2 and others 1 . Such materials can be treated as a stack of conducting layers of atomic thickness, separated by insulating ones 2 . Narrow electronic bands are typical also for quasi-one-dimensional conductors in directions perpendicular to conducting chains. For example, the linear chain conductor NbSe 3 is a strongly anisotropic crystal with the anisotropy of conductivity σ b /σ c ∼ 10 in (b-c) plane, and σ b /σ a * ∼ 10 4 in (a-b) plane at low temperatures 3 . Such a strong anisotropy reflects weak inter-chain coupling and narrow electronic bands for electron motion across conducting chains.
This feature of energy spectrum should lead to similarities of the effects observed in low-dimensional materials and in semiconductor superlattices which are thoroughly studied over last few decades 4 . Interesting effects are expected if electron motion approaches a regime close to the Bloch oscillations, when applied voltage is high enough and an electron has enough time to perform several oscillations within the same mini-band until it is scattered, i. e., when period of the Bloch oscillations is smaller then the momentum scattering time, τ p . In such a regime DC current decreases with DC voltage increasing, resulting in a region of negative differential conductance (NDC). This result was predicted theoretically for superlattices by Esaki and Tsu 5 , and later was observed repeatedly. Similar conclusions were made for layered conductors 6 . However, to our knowledge, there is no direct experimental confirmation of this effect in low-dimensional conductors. A possible reason for this is that the crystals were not perfect enough to ensure the band motion of electrons in directions of low conductivity in low-dimensional materials. However, the strong conductivity maximum at zero bias voltage observed recently in NbSe 3 by Latyshev et al. 7 argues in favor of possibility of the NDC in such materials.
An important corollary of the NDC is an instability of uniform stationary state in the NDC region which, in principle, may result in generation of high frequency oscillations when the DC voltage gets into the NDC region. The boundary of the NDC region in the static I-V curve is determined by the voltage related to the momentum scattering rate τ −1 p , and the upper frequency limit for the NDC is equal to τ −1 p as well. The value of τ −1 p in many materials falls into the THz region 7 , 8 , so one can expect microwave generation in this frequency range. However, there is a number of obstacles preventing generation of high frequency oscillations. The primary one is formation of nonuniform electric field distribution (field domains) which impedes high frequency oscillations due to relatively low domain velocity 4 . A number of techniques was suggested to overcome this difficulty in superlattices, including application to a sample additional large AC 9 and using of pulse voltage 10 , or formation of superlattices with minibands having non-sinusoidal dispersion law 11 , etc.
In this work we study theoretically a low-dimensional conductor with current flowing in the direction of low conductivity when applied DC voltage falls into the NDC region, and find a uniform oscillatory solution in case when no external time-dependent influence is imposed. This solution is possible because of interaction of a sample with an environment which we describe by means of an effective antenna impedance. We found conditions for existence of stable oscillations: the antenna reactance on main oscillation frequency must have the sign corresponding to an inductance, in this case the sample and the effective antenna form an effective oscillatory circuit. The frequency of the oscillations is determined by the antenna reactance and by material properties of the sample. Oscillation amplitude depends on applied voltage and vanishes at the boundaries of the NDC region.
The paper is organized as follows: in Sec. II we derive general expression for the current across the layers. In Sec. III we study in details the uniform current flow through the sample, derive equations of motion for time-dependent voltage, and find a solution describing oscillations. We show first the possibility of coherent voltage oscillations using oversimplified but physically transparent model, and then bring strict solution in the form of harmonic series. In Sec. IV we examine response of the system to a small external time-dependent perturbation, and in Sec. V we demonstrate that solution describing coherent voltage oscillations in the sample is stable with respect to small fluctuations. Finally, in Sec. VI we discuss conditions for experimental realization of the oscillatory regime found in preceding sections and make conclusions.
II. MAIN EQUATIONS
As it was mentioned in the Introduction, electron motion across the layers of perfect enough layered conductor, or between the conducting chains in quasi-one-dimensional conductor can be considered as a motion in narrow conducting band that should have specific features typical for electron motion within the minibands in semiconductor superlattices. In the theory of miniband transport there are two main limiting cases. If electron wave functions in adjacent layers are strongly overlapped then an electron has enough time to perform several tunneling processes until it is scattered. This quasi-classical case of miniband conduction can be well described by means of Bolzman equation which holds when τ −1 p ≪ ∆, where ∆ is the miniband width. If the interlayer hopping integral is small then the sample may be treated as a stack of weakly coupled quantum wells, hence, tunneling processes in adjacent junctions are practically independent, and purely quantum case of sequential tunneling is fulfilled. An amazing result of supperlattice theory is that in both these limits expressions describing the current as functional of voltage are the same 4 . Below we show how the similar expression can be derived for a low-dimensional conductors. For definiteness we will speak about a layered conductor with voltage applied in the stack direction.
For brevity we use units with e =h = 1. To investigate non-equilibrium transport we use the Keldysh technique for Green functions 12 . Current density between layers n and n + 1 may be expressed in terms of the non-diagonal component of Green's function both on temporal indices and layer numbers: where t ⊥ is the hopping integral for electron tunneling between the adjacent layers, which is supposed to be independent on layer number and electron momentum, and G R(A) is the retarded (advanced) Green function, τ is the dimensionless time, which is mesured in units of τ p . For convenience we introduce other dimensionless units: frequencies and energies we mesure in terms of τ −1 p . Usually, the hopping integral, t ⊥ , is small and it is possible to construct a perturbation theory, which allows us to express non-diagonal on layer number Green functions in terms of diagonal ones. In the first approximation on t ⊥ it readsĜ n,n+1 (τ, τ ′ ) = t ⊥ Ĝ n,n (τ, τ ′′ )Ĝ n+1,n+1 (τ ′′ , τ ′ )dτ ′′ , and, hence, we get well known expression for the current between the adjacent layers 13 .
We consider the case when electrons inside the layer are in in the equilibrium while layers are not in the equilibrium with each other. A difference between j n−1,n and j n,n+1 leads to deviation δµ n of the chemical potential in layer n from its equilibrium value, µ. In this case in each layer the diagonal Green function with respect to layer number, G K n,n , can be related to the retarded and advanced Green functions by the expression valid for the equilibrium case: G K n,n (ω) = (G R n,n (ω) − G A n,n (ω)) tanh(ω − δµ n )/2T, here we take into account temporal uniformity, so that G n,n (ω)δ(ω − ω ′ ). The transition to temporal representation we perform according to here Φ n (τ ) is time-dependent scalar electric potential of corresponding layer. To give proper account to scattering, we use the standard expression for Green functions with non-zero momentum relaxation rate G We assume for simplicity that τ p and t ⊥ do not depend on electron momentum. We concentrate on the case of the same voltage on all junctions, taking the non-uniformity into account only in Sec. V where we study the stability of the uniform solution in respect to small perturbations. So we assume that deviations, δµ n , of the chemical potential from the equilibrium value are small.
Finally, the expression for the tunneling current between adjacent layers reads where s is the lattice period in the direction of the current. The second item in square brackets in Eq. (1) describes the diffusion current, V n (τ ) is the voltage between layers n + 1 and n, σ 0 = me 2 t 2 ⊥ τ p /h is conductivity at zero bias and frequency, and s is the period of the structure.
Consider now limiting forms and simplifications of equation (1) for the current. We represent the voltage across the layer as a sum where v 0 is the average voltage (here we drop index n). When the second time-dependent term is small Eq. (1) can be expanded in series with respect to v(τ ), and we obtain for the current: Here we introduce the DC part of the current, is the specific tunneling capacity of the sample. The apostrophe denotes derivative on voltage v 0 .
As an illustration, consider, first, the simplest case when the applied voltage is time-independent, v(τ ) = 0. Then Eq. (2) results in well-known form 5 It follows from this expression that the differential conductance becomes negative when v 0 > 1.
In case of small harmonic voltage, v(τ ) = v 1 e iωτ the expression for the current in the first approximation in v(τ ) reads From this expression it follows that at high frequencies, ω > ∼ 1, the differential conductivity becomes positive even when the DC bias is in the NDC region.
III. COHERENT VOLTAGE OSCILLATIONS
In this section we derive equation of motion for the voltage and study dynamics of the system when currents in all junctions are identical. In addition, we neglect voltage fluctuations here, therefore, we do not consider non-equilibrium deviations of chemical potential in different layers and, hence, the diffusion current.
In addition to the tunneling current (1) there are other contributions to the total current. First, we must take into account the displacement current where we use notation for the specific capacity of the sample c 0 = 1/4πs. We must take into account also leakage currents, contributions due to incoherent tunneling, and other processes which are not related to coherent tunneling. To allow for these effects phenomenologically we add an extra ohmic term to the current: For the I-V curve to have the NDC region conductivity σ 1 must be small enough: To determine two independent variables, the current and the applied voltage, an additional equation relating the applied voltage to the current in the external to the sample circuit is required. The equation for current balance which determines the voltage dynamics reads Here j (ext) = j A + j S is the current in the circuit external to the sample. We consider this circuit as that consisting of a power supply and environment. Coupling of the sample to the environment we describe by means of an effective antenna with the impedance Z A . The equivalent scheme for the system under consideration is shown in Fig. 1. Since typical frequencies of the problem is high, we assume that AC current does not flow through power supply, so that antenna current j A is AC and power supply current j S is DC. Sample dimensions are assumed to be small with respect to radiation wavelength and to the skin-effect length. This allows us to neglect spatial non-uniformity.
The resulting equation describing dynamics of the voltage oscillations has rather complicated form of the integrodifferential equation. In order to find an analytical solution we consider the limit in which the amplitude of the voltage oscillations is small in comparison to the DC voltage, v(τ ) ≪ v 0 . This regime occurs near the boundary of the NDC region or in the case when the NDC region is small.
A. Simple model
It is instructive to examine, first, an oversimplified, but physically transparent model, based on assumption that in the frequency range under consideration the effective antenna can be described by the admittance with purely inductive reactive part: Here L is the antenna inductance, and factor N is number of layers in the sample, so that the total voltage across the sample is equal to N v(τ ). The real part of the antenna admittance leads to redefinition of the conductivity σ 1 only, therefore, we drop it here. For convenience we introduce here a new variable related to the uniform alternating electric field in the sample asθ = v(τ )/s, and assume that the NDC region is small, so that contribution of the second harmonic into the friction terms is negligible (see the next subsection). Eq. (7) then reads:θ Here j is DC current flowing through the sample, ω 0 = N/LSc s , S is the sample area in the plane perpendicular to the current direction, and N is the number of junctions, and we have introduced the total specific capacity of the sample, c s = c 0 + c t , and supercriticality, δ = σ d + σ 1 , which changes its sign from "−" to "+" across zero when the DC voltage leaves the instability region of the I-V curve. Equation for oscillations (8) can be treated as a mechanical equation of motion. In this equation θ plays a role of a coordinate, the last term in the left-hand side of Eq. (8) corresponds to a time-dependent alternating friction. In the regime of stable oscillations the friction averaged over the period must be equal to zero, and its momentary value can be considered perturbatively. So, in the first approximation we can neglect the friction term,θ (1) + ω 2 0 θ (1) = 0. Since the phase of the solution is arbitrary, for definiteness, we choose it in the formθ (1) = E 1 sin ω 0 τ . E 1 is an amplitude of electric field oscillations on main frequency, so voltage oscillations amplitude on this frequency is E 1 s. To ensure that the solution does not increase with time the friction term in (8) must not contribute at the oscillation frequency ω 0 . This condition determines the amplitude of the oscillations Guided by mechanical analogy mentioned above, we multiply friction term in Eq. (8) by θ 1 and integrate over the period and demand that the work of the friction force over the oscillation period is zero. The condition of vanishing of the average friction determines the I-V curve: This dynamic I-V curve intersect the static one j stat = σ 1 + σ 0 /(1 + v 2 0 ) v 0 at three points: at the boundaries of the NDC region and at v 0 = √ 3 (see fig. 2).
B. Solution in the form of harmonic series
In this subsection we consider a more general case and determine conditions for existence of the oscillatory solution. We seek the solution of Eq. (7) for v(τ ) in the form of harmonic series In the most general case the antenna current reads Here σ R An = R n s/(R 2 n + X 2 n )S, σ I An = X n s/(R 2 n + X 2 n )S, where R n and X n are resistance and reactance of the antenna at frequency nω 0 respectively. Solution (11) has an arbitrary phase which can be determined by initial conditions. We may fix this phase without any restriction assuming that v c,1 = 0. The supercriticality parameter, δ n = N σ R An + σ 1 + σ d , we assume to be small at the main oscillation frequency: |δ 1 | ≪ 1. As it will be shown later, amplitudes of the n-th harmonic are proportional to δ n/2 1 , hence, it is possible to neglect the terms of the order higher than the third one in the small parameter √ δ 1 . So inserting Eq. (11) into the expression for the current (1), and using equation for the current balance (7), we obtain a system of linear equations for v 2 s,1 , v s,2 and v c,2 : Here σ 2 = 2c s ω − N σ I A2 . Physically reasonable oscillation frequency is determined by the second equation of the system (12): Here we take into account that the contribution of the second harmonic is of the second order in √ δ 1 , so that lefthand sinde of this equation should be considered as higher order correction and, hence, should be neglected in main approximation. For oscillation amplitudes we find v 2 s,1 = −8δ 1 This expressions have a transparent physical meaning: if D > 0 the existence of the oscillatory regime is determined by the sign of the supercriticality, δ 1 . Oscillations are possible, (i.e., the amplitude of the first harmonic is real) only inside of the NDC region. Now we show that D is positive under conditions of our assumption that harmonics drop rapidly. If the NDC region is small then the voltage is close to the center of the NDC region, |v 0 − √ 3| ≪ 1. Then σ ′ d ∝ v 2 0 − 3 is small and D is positive. If the NDC region is large then our approach is valid near the boundaries of the NDC region. Near the low-voltage boundary v 0 ∼ 1. In this case σ d , c ′ t → 0 as v 0 − 1 and σ ′ d ≃ 1/2, σ ′′ d ≃ 3/2. So if δ 2 and/or γ 2 are large enough, then D > 0. The high-voltage boundary of the NDC region corresponds to v ≫ 1. In this case , and two first terms in (17) are small in comparison to the last one and, hence, D > 0. In other words, in this case the influence of second harmonic on the amplitude of the first one is small, so that v s,1 ≃ −8δ 1 /σ ′′ d , as it was found in the previous subsection.
At the end of this chapter we consider the amplitudes of the higher harmonics. Acting perturbatively, we can express the amplitude of the third harmonic in terms of amplitudes of the first and the second harmonics considering the last ones as an "external force": Amplitudes of the higher harmonics, n > 3, can be found in the similar perturbative way, and one can see that their amplitudes are proportional to δ n/2 1 . Therefore, the harmonic series (11) converges and our perturbative approach is valid provided |δ 1 | ≪ 1.
Thus in this section we have found the solution describing voltage oscillations in the vicinity of NDC region boundaries or for whole NDC region if it is small. Beyond the NDC region boundaries the oscillatory solution does not exist, this is reflected by the imaginary value of the oscillation amplitude in Eq. (14). Amplitude of the oscillations vanishes when the average voltage approaches NDC region boundary. Oscillation frequency is determined by geometry and by material properties of the sample. This solution exists provided the imaginary part of impedance of the effective antenna has the sign corresponding to inductance.
IV. MICROWAVE RESPONSE
In this section we determine modification of the I-V curve when the system is exposed to a weak external harmonic radiation of frequency ω. Again, for simplicity, we treat antenna as an inductance. Let the incident radiation induces voltage oscillations with the amplitude V A at the antenna output. Then the equation for the oscillating component of the voltage (8) acquires the form (20) Here f = V A ω 2 0 /ωN s is the amplitude of the external force. This equation describes the forced oscillations of the system with single degree of freedom in case of periodic perturbation. As in the previous section we consider the case when the amplitude of the voltage oscillations is small and we solve equation for θ(τ ) perturbatively. When the frequency of the external field, ω, is close to the eigen frequency, ω 0 , one can expect the effect of frequency capture. As in the previous section we consider the case when the amplitude of the voltage oscillations, a, is small. Then in the first approximation we seek the solution in the form of harmonic oscillations at the frequency of the external force but with unknown time-independent phase shift ψ: θ(τ ) = a sin(ωτ + ψ). Then we find from Eq. (20) where A s = ω 2 0 − ω 2 , A c = ω(δ + a 2 ω 2 σ ′′ d /8)/c s s, and tan ϕ = A c /A s . Eq. (21) determines the amplitude of oscillations as function of frequency, ω, and bias voltage, v 0 . We seek solution of Eq. (21) for the amplitude in the form where the first term describes the voltage oscillations with the amplitude of the natural oscillations, A 1 , given by Eq. (9). For small enough values of f we find two solutions describing corrections to the oscillation amplitude induced by the external perturbation This expression is valid provided the external perturbation is small enough, |f | ≪ δ 3/2 . Corresponding expressions for the phase read Using the approach of the next section one can show that both solutions found above are stable.
For ω = ω 0 the phases of the solutions are equal to ±π/2. In this case the first solution describes voltage oscillations in phase with the driving voltage, the oscillation amplitude being increased by the external perturbation, while the second solution corresponds to the out of phase oscillations with the amplitude decreased by the external force.
According to Eq. (22) the regime of frequency capture exists in the frequency range Variation of the I-V curve due to irradiation as function of frequency and DC bias is described by expression Thus we can conclude that the external radiation induces two branches at the I-V curve that merge near the boundary of the regime of the frequency capture. Maximum variation of the amplitude and, hence, the largest shift of the I-V curve occur at ω = ω 0 . They are linear in driving voltage. At larger difference between the driving frequency and the eigen frequency, that is outside the regime of the frequency capture, corrections to the I-V curve induced by the external perturbation are quadratic in f .
V. STABILITY
Now we study the stability of the solution found in preceding section with respect to non-uniform perturbations of the voltage. Consider a sample consisting of N layers. As it was stated above, we assume non-equilibrium processes to be slow enough in the scale of the energy relaxation time, so that distribution function of electrons inside conducting layers does not change. Then the in-layer electron distribution can be described by the Fermi distribution function with energy shifted by a non-equilibrium correction to the chemical potential induced by difference of the currents through adjacent conducting layers. At first we relate the shift of the chemical potential to the voltages in adjacent layers. Taking into account quasi-2D character of the layers, we obtain the expression for the difference of chemical potentials on the adjacent layers This allows us to express current in terms of voltages across different junctions only: where ∂ 2 n V n = V n+1 + V n−1 − 2V n is a discrete version of the second derivative on layer number. Since sample dimensions are assumed to be small, the current in the external circuit is determined only by the total voltage across the sample: Using Eqs. (26) and (27) in Eq. (7) we obtain the equation describing voltage dynamics taking into account dependence of the voltage on the layer number: We simplify Eq. (28) assuming that in the considered frequency range the antenna impedance can be modeled by pure inductance,σ Then each equation in Eq. (28) acquire a form of Eq. (8) To show that solution describing coherent voltage oscillations is stable, we substitute θ n (τ ) = θ (0) (τ )+ δ n (τ ) into (29), where θ (0) (τ ) is the solution found in Sec. III, and then linearize the resulting equation in δ n . Temporal evolution of small perturbation δ n (τ ) is described by equation: Here σ(τ ) = E 1 σ d cos(ω 0 τ + ϕ 0 ) + 1 8 σ ′′ d E 2 1 (1 + 2 cos 2(ω 0 τ + ϕ 0 ) /c s plays the role of time-dependent friction coefficient, and D = σ 0 /4mc s s 2 (1 + v 2 0 ) is diffusion coefficient.
The general solution of linear equations (30) can be divided into a uniform and a non-uniform parts: δ n = δ (0) +δ (n) , n = 1..N . The uniform part of the solution, δ (0) , can be found as the general solution of the equation which follows from Eq. (30) when all δ n are equal. The rest of Eq. (30) form N identical equations for the non-uniform part of the perturbation, δ (n)δ which satisfy the condition which ensures that the total voltage over all layers is described by So we examine the temporal evolution of the total voltage across the sample and voltage fluctuations at different junctions separately, by Eqs. (31) and (32), respectively.
At first we study non-uniform perturbations described by Eq. (32). Its solution can be found easily by means of the Fourier transformation on layer number: wherek = 2 sin k/2, and c 1 (k) and c 2 (k) are constants of integration. Constants c 1 (k) are irrelevant since they are not related to voltage fluctuations since voltage is determined by time derivative of the phase θ n , so we can equate these constants to zero. Since the diffusive term promotes the stability (D > 0), we concentrate on the case of smallk → 0. Then the stability of the solution is determined by the damping term σ(τ ) which can be presented as σ(τ ) = σ 0 +σ 1 (τ ) where σ 1 (τ ) is a sum of harmonic terms, and the average value of the damping, σ 0 , is positive. Since it is positive, Fourier transforms of the perturbation (34) decrease exponentially at τ → ∞ for all values of k.
Note that the results on stability with respect to non-uniform oscillations remain valid in the general case, when the external antenna cannot be described by pure inductance. Indeed, in case of antenna described by a linear operator of the total voltage across the sample, the uniform and non-uniform fluctuations can be examined separately similar to the way it was described above. Thus non-uniform perturbations cannot lead to instability of the oscillatory solution and the stability of the oscillatory regime is determined only by the stability of uniform oscillations. Now we discuss the uniform perturbation. The general solution of (31) is Here C 1 and C 2 are arbitrary constants. As, by definition,θ (0) (τ ) = v(τ ), the first term in (35) describes the solution equal to the unperturbed solution that is shifted by time C 1 . This term is related to the different choice of initial conditions, and it is not related to stability. The second term decreases exponentially. This can be shown by direct calculations and demonstrated by means of the Ostrogradsky-Liouville formula relating the Wronskian to the coefficients of the linear differential equation, where C is an arbitrary constant. With τ increase perturbed solution exponentially converges to non-perturbed one which may have some phase shift with respect to original solution, i.e., original uniform solution is stable.
VI. DISCUSSION
In this section we discuss some aspects of experimental realization of the oscillatory regime. The condition for voltage oscillation derived in the previous section implies that the supercriticality must be negative, δ 1 < 0. This is possible when additional conductivities due to non-coherent tunneling and antenna do not compensate the NDC due to coherent interlayer tunneling, cf. Eq. (6) Again we consider Eq. (8), in which we explicitly show the antenna impedance. As it was mentioned above, on main frequency the "friction" term vanishes, and the equation defining the frequency of the voltage oscillation reads Since specific tunneling capacity c s does not change dramatically with v 0 changes, all conclusions remain valid for the case of boundaries of large NDC region either. The oscillation frequency can be presented in the form Here R 0 is resistance of the sample. Note also, that from equation (1) for the tunneling current it follows that the oscillation frequency can not be large compared to the momentum scattering rate 1/τ p . This becomes clear, in particular, from Eq. (3) demonstrating that the differential conductivity becomes positive at high frequencies, ω > τ −1 p . According to Eq. (37) oscillation frequency depends on the input antenna impedance Z A . Dependence Z A (ω) can be very diverse for different antenna configurations, so to illustrate possibility of the proposed oscillatory regime we consider the most simple case of the dipole antenna consisting of thin straight wire of length l A and radius r. This antenna can be described by the long line theory 14 . Then antenna wave impedance in the SI units reads ρ = 1 πcε0 (ln 2lA r − 1) [Ω] = 120(ln 2lA r − 1) [Ω], where k = 2πl A /λ, where λ is the wave length. For the sending-end impedance this theory gives: Here, the dipole resistance R A is a sum of the radiation resistance related to the antinode of the current, and of the ohmic resistance of the antenna's material, the latter is assumed to be much smaller than the former one. The radiation resistance is given by expression R Σ = P/I 2 = 30(2(γ + ln 2kl [Ω] (here P is radiation power, γ is Euler constant, Ci and Si are integral cosine and sine, respectively). According to Eq. (38) the imaginary part of the antenna impedance is oscillatory function of frequency and on the length of the antenna arms l A . The dependence of real and imaginary parts of Z −1 A on l A is presented in Fig. 3 for several sets of antenna parameters. When the ratio l A /λ is in the range from 0.5 to 1, ℑZ A is positive and the oscillatory solution is possible. To estimate resistivity of the sample, we use conductivity data for NbSe 3 adduced in Ref.7 for NbSe 3 , σ 0 = 0.1mΩ cm, and assume the sample dimensions to be 10µm × 10µm × 300Å with the smallest length in the current direction. Then we obtain R 0 = 30Ω. Typical values of ℜZ A for the most interesting case of antenna with relatively large wave impedance which corresponds to high Q-value, are of the order of several kΩ (see Fig. 3), hence, the antenna impedance is larger then the sample resistance (R 0 ℑZ −1 A < 1). So conductivities σ I An and σ R An are small, as it was supposed in previous sections.
For practical applications the higher frequencies are more interesting. We see several possible ways to increase the oscillation frequency. The first one is to increase R 0 by increasing the ratio sN/S, either by changing a sample geometry or by increasing the number of layers. Implementation of this idea is limited by the condition for the NDC region to exist (36) due to an increase of relative dissipation by antenna caused by radiation as σ R A1 ∝ sN/S. Another way is to maximize ℑZ −1 A in order to optimize the ratio of the antenna arm length and the wavelength, or to use more complicated antenna, than the simple dipole one. In this case the limitations are imposed by real part of antenna impedance related to radiation power R Σ -practically interesting case of larger radiation leads to larger R Σ while ℑZ −1 A < R −1 Σ /2. In previous sections we assumed that the current density does not depend on the in-plane coordinate. In addition to the non-uniformity related to fluctuations of charge density considered in Sec. V, there is another non-uniformity which is related to the skin effect. So, strictly speaking, our results are applicable provided the sample width is smaller, than the skin-effect length, c/ 2π(σ 1 + σ 0 /(1 + v 2 0 ))ω 0 . This condition can be easily satisfied in the tera-Herz region. In conclusion, we have studied the problem of the current flow through the sample with narrow energy band when the average voltage corresponds to the NDC region. We have found analytically the stable uniform oscillating solution describing coherent voltage oscillations in the sample. The amplitude and the frequency of the oscillations are determined by dimensions, geometry, and material properties of the sample, and depend on the antenna properties of the system. The maximum value of the oscillation frequency is limited by the momentum relaxation rate. | 2019-04-14T02:10:19.857Z | 2005-12-09T00:00:00.000 | {
"year": 2005,
"sha1": "0294b125a453bbbc4db62a80e241717dd4572b8a",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0512444",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "158b4070fbd979b821ff2a41f0a7d87ba12de84f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
240460218 | pes2o/s2orc | v3-fos-license | Novel treatment for post-orgasmic illness syndrome: a case report and literature review
year. This treatment provided considerable relief from the rash, while the other symptoms were tolerable with no marked improvement. However, because of their increasingly obvious side effects, the dosage and frequency of these drugs were gradually reduced. Prednisone was gradually reduced from 15 mg to 7.5 mg per day and mycophenolate mofetil dispersible tablets were reduced from 0.5 g three times a day to 0.5 g per day for maintenance therapy. In the last several months, the rash recurred and got worse. Among the other symptoms, headache, in particular, worsened and became intolerable. The patient underwent high ligation of bilateral varicocele 15 years ago, following which transient acute epididymitis occurred in the left testis. After anti-infection treatment, the patient’s condition improved and no further attention was paid to the matter. He had a 10-year-old child and firmly refused the need for reproduction. On physical examination, we found an obvious hydrocele in the left tunica vaginalis and significant enlargement of the left epididymis. After adequate preoperative communication, the patient accepted our preset surgical protocol and provided informed consent. Besides, the surgical procedure was approved by the Ethics Committee of Northern Jiangsu People's Hospital. Preoperative examination included autoimmune antibody screening, hormone level testing, and a skin patch test, along with routine preoperative checks. Autoimmune antibody tests were negative and all hormone levels were within the normal range. The skin patch test was conducted at an andrology laboratory and the result was negative (data not shown). At first, we performed bilateral scrotal exploration and found a hydrocele in the tunica vaginalis on the left side. In addition, we noted remarkable thickening of the left epididymis and spermatic cord; therefore, left epididymitis was considered. Furthermore, bilateral seminal tract radiography was performed after bilateral deferens separation at the epididymis level. No obvious spillage of the contrast agent was observed. Meilan (Jumpcan Pharmaceutical, Taizhou, China) was injected into the bilateral deferens, and the outflow of Meilan from the catheter was observed, indicating that the seminal duct was unobstructed. Finally, in addition to turning over the testicular sheath, bilateral epididymectomy and bilateral vasoligation (at the epididymal level) were simultaneously performed. Histological examination of the removed epididymis was performed post-surgery, which revealed epididymitis. The operation was successful and the patient was discharged after 4 days. Prednisone and immunosuppressants were stopped 10 days before surgery and were not continued after surgery. Sexual intercourse began 3 weeks after surgery and gradually increased to two times a week. By the two-month follow-up, the Dear Editor, Post-orgasmic illness syndrome (POIS) is a rare disorder affecting men, and is characterized by flu-like symptoms that appear immediately or within hours after ejaculation. In 2002, Waldinger and Schweitzer1 reported the first two cases of POIS and subsequently proposed five preliminary diagnostic criteria for POIS.1,2 The symptoms were categorized into seven clusters. To avoid these symptoms, patients with POIS tend to minimize the frequency of ejaculation and avoid sexual activity. Therefore, POIS poses a severe mental and psychosocial burden on patients and negatively affects their quality of life by affecting schedules, dampening romantic prospects, and creating internal struggles to avoid eroticism.3 Previously, the main treatment strategies were focused on drug and hyposensitization therapies.4 To date, a standard treatment strategy using these therapies has not been established. Here, we report a case of POIS that was successfully treated with surgery after failed immunosuppressive therapy. In March 2021, a 42-year-old man came to the Department of Urology (Northern Jiangsu People’s Hospital, Yangzhou, China) with the chief complaint of flu-like symptoms and rash following ejaculation for two years. Two years ago, he started experiencing symptoms such as exhaustion, palpitations, difficulty finding words, incoherent speech, concentration difficulty, depressed mood, perspiration, ill with flu, general malaise, headache, burning, red/injected eyes, itchy eyes, runny nose, sneezing, dirty taste in mouth, painful muscles, heaviness in the legs, and rash after ejaculation, occurring via sexual intercourse, masturbation, or nocturnal emissions. The flu-like symptoms occurred 1–2 h after ejaculation, and the rest of the symptoms occurred subsequently. Rash was the final symptom to appear, occurring approximately 12–24 h after ejaculation. All the symptoms peaked within 1–2 days and then spontaneously disappeared. One year ago, the patient developed a rash that was very severe compared to previous instances, extending from the right calf to both calves. He visited several physicians and was finally diagnosed with vasculitis. He was treated with prednisone (Pengyao Pharmaceutical Co., Ltd., Wuxi, China) and mycophenolate mofetil dispersible tablets (Sinopharm Chuankang Pharmaceutical Co., Ltd., Chengdu, China) for up to a LETTER TO THE EDITOR
year. This treatment provided considerable relief from the rash, while the other symptoms were tolerable with no marked improvement. However, because of their increasingly obvious side effects, the dosage and frequency of these drugs were gradually reduced. Prednisone was gradually reduced from 15 mg to 7.5 mg per day and mycophenolate mofetil dispersible tablets were reduced from 0.5 g three times a day to 0.5 g per day for maintenance therapy. In the last several months, the rash recurred and got worse. Among the other symptoms, headache, in particular, worsened and became intolerable.
The patient underwent high ligation of bilateral varicocele 15 years ago, following which transient acute epididymitis occurred in the left testis. After anti-infection treatment, the patient's condition improved and no further attention was paid to the matter. He had a 10-year-old child and firmly refused the need for reproduction. On physical examination, we found an obvious hydrocele in the left tunica vaginalis and significant enlargement of the left epididymis.
After adequate preoperative communication, the patient accepted our preset surgical protocol and provided informed consent. Besides, the surgical procedure was approved by the Ethics Committee of Northern Jiangsu People's Hospital. Preoperative examination included autoimmune antibody screening, hormone level testing, and a skin patch test, along with routine preoperative checks. Autoimmune antibody tests were negative and all hormone levels were within the normal range. The skin patch test was conducted at an andrology laboratory and the result was negative (data not shown).
At first, we performed bilateral scrotal exploration and found a hydrocele in the tunica vaginalis on the left side. In addition, we noted remarkable thickening of the left epididymis and spermatic cord; therefore, left epididymitis was considered. Furthermore, bilateral seminal tract radiography was performed after bilateral deferens separation at the epididymis level. No obvious spillage of the contrast agent was observed. Meilan (Jumpcan Pharmaceutical, Taizhou, China) was injected into the bilateral deferens, and the outflow of Meilan from the catheter was observed, indicating that the seminal duct was unobstructed. Finally, in addition to turning over the testicular sheath, bilateral epididymectomy and bilateral vasoligation (at the epididymal level) were simultaneously performed. Histological examination of the removed epididymis was performed post-surgery, which revealed epididymitis. The operation was successful and the patient was discharged after 4 days. Prednisone and immunosuppressants were stopped 10 days before surgery and were not continued after surgery. Sexual intercourse began 3 weeks after surgery and gradually increased to two times a week. By the two-month follow-up, the Dear Editor, Post-orgasmic illness syndrome (POIS) is a rare disorder affecting men, and is characterized by flu-like symptoms that appear immediately or within hours after ejaculation. In 2002, Waldinger and Schweitzer 1 reported the first two cases of POIS and subsequently proposed five preliminary diagnostic criteria for POIS. 1,2 The symptoms were categorized into seven clusters. To avoid these symptoms, patients with POIS tend to minimize the frequency of ejaculation and avoid sexual activity. Therefore, POIS poses a severe mental and psychosocial burden on patients and negatively affects their quality of life by affecting schedules, dampening romantic prospects, and creating internal struggles to avoid eroticism. 3 Previously, the main treatment strategies were focused on drug and hyposensitization therapies. 4 To date, a standard treatment strategy using these therapies has not been established. Here, we report a case of POIS that was successfully treated with surgery after failed immunosuppressive therapy.
In March 2021, a 42-year-old man came to the Department of Urology (Northern Jiangsu People's Hospital, Yangzhou, China) with the chief complaint of flu-like symptoms and rash following ejaculation for two years. Two years ago, he started experiencing symptoms such as exhaustion, palpitations, difficulty finding words, incoherent speech, concentration difficulty, depressed mood, perspiration, ill with flu, general malaise, headache, burning, red/injected eyes, itchy eyes, runny nose, sneezing, dirty taste in mouth, painful muscles, heaviness in the legs, and rash after ejaculation, occurring via sexual intercourse, masturbation, or nocturnal emissions. The flu-like symptoms occurred 1-2 h after ejaculation, and the rest of the symptoms occurred subsequently. Rash was the final symptom to appear, occurring approximately 12-24 h after ejaculation. All the symptoms peaked within 1-2 days and then spontaneously disappeared. One year ago, the patient developed a rash that was very severe compared to previous instances, extending from the right calf to both calves. He visited several physicians and was finally diagnosed with vasculitis. He was treated with prednisone (Pengyao Pharmaceutical Co., Ltd., Wuxi, China) and mycophenolate mofetil dispersible tablets (Sinopharm Chuankang Pharmaceutical Co., Ltd., Chengdu, China) for up to a
LETTER TO THE EDITOR
Novel treatment for post-orgasmic illness syndrome: a case report and literature review patient's symptoms, especially rash and headache, which were the most severe, were relieved.
POIS is a rare and debilitating condition, in which affected men experience a range of symptoms that can last approximately 2-7 days after ejaculation. In 2011, Waldinger et al. 2 proposed five preliminary diagnostic criteria for POIS. The presentation of the symptoms of POIS varies considerably and is categorized into seven clusters: general, flu-like, head, eyes, nose, throat, and muscle. In our report, the patient experienced symptoms belonging to all seven clusters. The flu-like symptoms occurred approximately 1-2 h after ejaculation, followed by the others. All symptoms peaked within 1-2 days and then spontaneously disappeared. The patient met four of the five diagnostic criteria for POIS. The duration of the symptoms is the only criterium that was not fulfilled. At the onset of POIS, the symptoms disappeared within a week; however, as the disease progressed, the duration increased to as long as 20 days. Strashny 5 first assessed the validity of the diagnostic criteria suggested by Waldinger et al. 2 and suggested that POIS symptoms can last for as long as 21 days. According to the updated diagnostic criteria by Strashny, 5 the patient met all five criteria and was finally diagnosed with POIS.
POIS can be classified into two types: primary and secondary. 2,6 In primary POIS, symptoms appear after the first ejaculations during puberty or adolescence. In secondary POIS, symptoms manifest later in life. 7 The occurrence of the two types of POIS was found to be approximately equivalent in study of 45 Dutch Caucasian males. 2 However, the pathogenesis is not well established. 8,9 At present, two main mechanisms are speculated ( Table 1). First, Waldinger et al. 2 hypothesized that POIS is an autoimmune or allergic disorder. Second, Ashby and Goldmeier 10 hypothesized that POIS is triggered by impairment of the cytokine and neuroendocrine responses. Jiang et al. 11 proposed a competing hypothesis that POIS is a disorder of the endogenous μ-opioid receptor system and that the symptoms of POIS resemble those of opioid withdrawal. Furthermore, Pierce et al. 12 hypothesized that POIS was associated with sympathetic dysregulation. In our report, in addition to the seven clusters of symptoms, our patient experienced rash after ejaculation, which was treated with prednisone and an immunosuppressant. These results strongly suggest that POIS is, at least partially, caused by an allergic disorder. In recent years, some other studies have suspected that POIS is associated with hypogonadism. 6,13 POIS caused by this condition was successfully treated with the administration of human chorionic gonadotropin 13 or testosterone enanthate. 6 However, in our case, the patient's total testosterone level was 10.4 nmol l −1 , luteinizing hormone level was 2.97 mIU ml −1 , and follicle-stimulating hormone level was 7.85 mIU ml −1 , all of which were within the normal range. Therefore, our case of POIS was different from that associated with hypogonadism.
The therapeutic options for POIS are usually selected according to the hypothesized pathogenesis and main symptoms. Treatment options include drug and hyposensitization therapies. Potentially effective drugs mainly include antihistamines, prednisone, nonsteroidal antiinflammatory drugs, benzodiazepines, selective serotonin reuptake inhibitors, alpha-blockers, silodosin, nifedipine, and flutamide; however, these may vary based on complaints, patients, case series with small sample sizes, and case reports. 1,10,12,14,15 POIS symptoms have been reported to be relieved with niacin, olive leaves, fenugreek, saw palmetto, Wobenzym N, probiotics, and an anti-inflammatory diet. 16 Drug therapy is currently in the exploratory phase, and there is no standard treatment presently available. Hyposensitization therapy was successfully conducted in two Dutch men, with 90% and 60% improvement in POIS symptoms at 15 months and 31 months, respectively. 16,17 Intralymphatic immunotherapy is a promising new method of allergen-specific immunotherapy, and it was first used to treat a Korean man with POIS. 18 The patient's POIS symptoms and sexual dysfunction were both relieved using intralymphatic immunotherapy. 18 Although these data are promising, they were obtained from case reports. In our case, we hypothesize that POIS is caused by repeated contact of the sperm or epididymal fluid and circulating T-lymphocytes in the seminal tract. Moreover, epididymitis may increase local vascular permeability, which may increase the possibility of blood and semen exposure. Therefore, we believe that epididymectomy and vasoligation are effective ways to eliminate the influence of these two factors. By the 2-month follow-up, the patient's symptoms, especially rash and headache, which were the most severe symptoms, were relieved. The patient was able to ejaculate twice a week by intercourse. To the best of our knowledge, ours is the first study to use surgical processes such as seminal tract radiography, epididymectomy, and vasoligation in the treatment of POIS.
Currently, there is no study assessing the outcomes of POIS treatment over a long period or life-time follow-up period. Although patients respond positively to drugs, it does not indicate that the drug will be effective for a long time. In the present case, prednisone and the immunosuppressant were initially effective, but gradual occurrence of side effects led to a reduction in their dosage and frequency, after which the patient's symptoms recurred and worsened. Medical therapy can only relieve a part of the patient's symptoms. In other words, a single drug may be unable to control all the symptoms, unless etiological treatment is performed. As reported, a remarkable remission of the disease was achieved in patients with POIS associated with hypogonadism by treatment with human chorionic gonadotropin 6 or testosterone enanthate. 13 If POIS is triggered by allergies, it may be treated by isolating the allergens. For patients with risk factors for allergies, it is possible to obtain encouraging results through local exploration of the seminal tract. However, this method may not always yield favorable results, since an orgasm is a complex biochemical event and complete neurophysiological process but not just the physical ejaculation of semen. 12 Furthermore, semen comprises the seminal vesicle fluid and prostate fluid as well as other fluids. Our report provides an insight into an alternative treatment for POIS. We believe that surgical interventions should be conducted in selected patients, after comprehensively considering their reproductive needs, basic diseases, and their own requirements.
In conclusion, for patients with POIS, treatment should be determined according to the hypothesized pathogenesis and main symptoms. For patients with abnormalities detected during specialized physical examination, it is possible to obtain encouraging results through local exploration of the seminal tract. Bilateral epididymectomy and bilateral vasoligation can be used as alternative treatments with promising outcomes. However, as long as surgery is not established as the standard therapy, it should be considered as the last resort and performed only in selected cases. More large-scale randomized controlled trials are needed to confirm the effectiveness and safety of surgical treatments for POIS.
AUTHOR CONTRIBUTIONS
TBH was responsible for data collection, literature review, and manuscript drafting. JJY contributed to data collection, follow-up and helped to revise the manuscript. ZYL and YJD conceived the study, participated in its design and coordination, and helped to revise the manuscript. All authors read and approved the final manuscript. | 2021-11-03T15:12:12.583Z | 2021-10-29T00:00:00.000 | {
"year": 2021,
"sha1": "f062508875c6bd00d167b38577d2220e096f12d8",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/aja202170",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "02edd0f13cb0a54ed16eba468ea6a8120623d8e3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
136236450 | pes2o/s2orc | v3-fos-license | Effects of flexoelectricity and surface elasticity on piezoelectric potential in a bent ZnO nanowire
In this work, a rapid model is established to study the effects of flexoelectricity and surface elasticity on the piezoelectric potential of a bent ZnO nanowire. Based on the piezoelectric theory and core-surface model, the distribution of piezoelectric potential of the ZnO nanowire is investigated. The analytical solution shows that the flexoelectricity and surface elasticity both significantly influence the piezoelectric potential. However, the effect of flexoelectricity is longitudinal dependent, which vanishes on the top side of nanowire, but only left surface elasticity effect on the potential. Simulation results show that the maximum value of potential on the top side of nanowire is about ± 220.5mV, of which result is lower compared to other theoretical models, but it should be more reasonable.
Introduction
One-dimensional zinc oxide (ZnO) nanowire is an important semiconducting piezoelectric material that has been widely used as a biomedical [1], environment monitoring, and even personal electronic nanodevices and nanosystem. In recent years, ZnO nanowires are widely used as nanoscale power generating systems. Many theoretical papers are reported to analyze the electrostatic potential of ZnO nanowires. Gao and Wang [2] established a classical piezoelelctric perturbation theory, which provides a way to estimate the potential generating in ZnO nanowire. Other groups [3,4] also reported some similar studies to quantify the piezoelectric potential on a bent ZnO nanowire. It is well known that, with decreasing size of nanowire, strong size dependencies are displayed. Besides, flexoeletricity also becomes prominent at nanoscale, Liu et al. [5] took flexoelectric effect into account, analyzed the piezoelectric potential generated in a ZnO nanowire. Yan et al. [6,7] utilized an extended linear theory of piezoelectricity to discuss the influence of flexoelectric effect on the electroelastic responses, and the bending and vibration behavior of piezoelectric nanobeams.
In this paper, with the consideration of the flexoelectricity and surface elasticity effect, a simple and comprehensive theoretical framework for bending ZnO nanowires will be established.
Problem identification and flexoelectric equation
Based on the piezoelectricity of ZnO nanowires, Wang et al. [8] proposed a nanogenerator which can convert the mechanical energy into electrical energy by bending the ZnO nanowires. Figure 1 shows the configuration of a nanogenerator using an atomic force microscope (AFM) tip scanning over a ZnO nanowire. A lateral force y f from the AFM tip applied to the top side of nanowire, result in a mechanical deflection and piezoelectric polarization. Flexoelectric polarization shows apparently in the nanoscaled material, and it should not be ignored. Liu et al [5] have set a model which describes the flexoelectricity affects electrostatic potential in a bent piezoelectric nanowire. However, the derivation process is complex, thus, it will be useful to simplify such a derivation process to establish a rapid model. In this research, for granting a most close realistic conclusion, the size dependent effect has been taken into consideration.
(3) f ijkl is a so-called fourth-order flexoeletric tensor related to stresses. The transformed flexoelectric coefficients f ijkl with the relation to μ imnl can be written as follow: 1 It is well known that the polarization charges induced by piezoelectric effect and flexoelectric effect are bound charges rather than free charges. By Gauss's law, it gives Where s ρ is the cylinder surface free charge density in nanowire, and D is the electric displacement, which is defined as Here ε is the dielectric constant, and ϕ is the electric potential.
Model derivation
The ZnO nanowire is a hexagonal prism in geometry. To simplify the process of analytical solution, we assume the nanowire has a cylindrical shape with a uniform cross section of diameter 2a and length l. By Saint-Venant's principle of pure bending in electricity [11], the stress induced in the nanowire is It is known that the material elastic constants can be obtained by an isotropic elastic modulus E and Poisson radio ν , hence, the non-zero flexoelectric coefficients f ijkl can be derived from the equation (1 2 ) Then we can easily get the piezoelectric charge density 33 15 ( ) (6) and (7), the relationship between piezoelectric potential ϕ and piezoelectric charge density v ρ is derived as In cylindrical coordinates, the non-homogeneous differential equation (13) Then, we follow the solving steps of paper [4]. Finally, we can get the analytical solution of the piezoelectric potential distribution of the ZnO nanowire This equation is completely equal to the solution of paper [5].
As we known, when the characteristic sizes of materials shrink to nanometer, surface effects often plays an important role in their material properties due to the increasing ratio of surface area to volume. For the effect of surface elasticity, we adopt the core-surface model, which essentially assumes that a nanowire consists of a core with elastic modulus 0 E and a surface (zero thickness) with so-called surface elastic modulus s E (the unit is N/m). We can use a concept of effective modulus * E to describe the elasticity of nanowire with surface effect. Under bending, the expression of effective elasticity E * with the bulk elasticity E 0 and surface elasticity E s is written as follows [12]: Here, γ is a dimensionless elastic modulus ratio If the influence of surface elasticity is taken into account in our model, the solutions of potential (15) should be modified. Firstly, surface elasticity will affect the flexoelectric coefficients f ijkl , thus, If the effect of surface elasticity is neglected, surface elasticity 0 s E = , namely, the dimensionless elastic modulus ratio 0 γ = , the analytical solution can be reduced to the equation (15), which is the results without surface elasticity effect, but only with the effect of flexoelectricity.
Numerical Results And Discussion
We established a rapid model which simplifies the derivation process, and also incorporating the effects of flexoelctricity and surface elasticity on the electric potential of ZnO nanowire. For a bulk ZnO, the Young's modulus E 0 = 129GPa, Poisson's ratio ν= 0.349 [2,4], relative dielectric constant ε/ε 0 = 7.7 [13], and the surface elastic modulus E s takes the value of 235N/m [12]. The piezoelectric constants Figure 2 gives the potential distribution along the y-axis direction at z = 300 nm for a nanowire with a = 25nm and l = 600nm bent by a force of 80 nN under different effect conditions. It is shown that flexoelectric effect strengthens the value of electric potential, and this curve with flexoelectric effect is consistent with the previous work [5]. However, the effect of surface elasticity weakens the value of electric potential. It is found from the curve that the value of potential decreases when the effects of flexoelectricity and surface elasticity are both taken into consideration. The curve with no surface elasticity and flexoelectricity comes to the classic result of piezoelectric polarization [2,4]. Moreover, we simulate the side and top cross-sectional potential distribution at an applied force of 80nN on the nanowire, of which the effects of flexoelectricity and surface elasticity are both taken into account, shown in figure 3(a) and 3(b). It is clearly found that the electric potential due to flexoelectricity is dependent on z . More closely to the top side, more weak effect of flexoelectricity.
According to the equation (19), the effect from flexoelectricity is vanished on the top side. Therefore, on the top of nanowire, the main effect is only from surface elasticity. The maximum output of potential on the top side of nanowire is about ± 220.5mV, but at the bottom side is ± 312mV. In Gao and Wang's model [2] the calculated potential value is ± 284mV, and Shao theoretically is ± 271mV [4]. Though our theoretical result is decreased but still has a wide gap from the experimental value. However, our theoretical result should be more reasonable, for the reason that, firstly, actually, in the process of experimental operation, the primary captured voltage comes from the top side or its vicinity of the ZnO nanowire when the AFM tip scanning over the nanowire, that is to say, the influence of flexoelectricity is almost zero or very weak. Secondly, most researches have proved that surface effects become the important effect for the nanoscale elements [14][15][16]; especially the effect of surface elasticity cannot be ignored. Thus, it is possible that surface elasticity becomes one of key factors influencing the piezoelectric potential. due to both flecxoelectric and surface elastic effects.
3(b).
Top cross-sectional output of the piezoelectric potential of the nanowire, due to both flecxoelectric and surface elastic effects.
Conclusions
In summary, we discuss a rapid theoretical model to describe the effects of flexoelectricity and surface elasticity on the piezoelectric potential generated in a bent ZnO nanowire. An analytical solution of potential distribution is deduced, and the results indicate that the effect of flexoelectricity is longitudinal dependent, which vanishes on the top side of nanowire, but surface elasticity demonstrates significant effect on the captured potential. Therefore, the primary contribution to the piezoelectric potential of ZnO nanowire comes from surface elasticity. Simulation result show that the maximum output of potential on the top side of nanowire is about ± 220.5mV, which is more reasonable than other theoretical models. Though our result still has a wide gap between the experimental data, there must be other influencing factors for further study in the future. | 2019-04-29T13:16:12.392Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "c9a6917d48605a42873e28d28c757eb0bfa515e2",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/167/1/012023",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "861c402a431939d7dc52faffd19ae237c6e899cc",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
10210078 | pes2o/s2orc | v3-fos-license | Resistance of Neisseria Gonorrhoeae to Neutrophils
Infection with the human-specific bacterial pathogen Neisseria gonorrhoeae triggers a potent, local inflammatory response driven by polymorphonuclear leukocytes (neutrophils or PMNs). PMNs are terminally differentiated phagocytic cells that are a vital component of the host innate immune response and are the first responders to bacterial and fungal infections. PMNs possess a diverse arsenal of components to combat microorganisms, including the production of reactive oxygen species and release of degradative enzymes and antimicrobial peptides. Despite numerous PMNs at the site of gonococcal infection, N. gonorrhoeae can be cultured from the PMN-rich exudates of individuals with acute gonorrhea, indicating that some bacteria resist killing by neutrophils. The contribution of PMNs to gonorrheal pathogenesis has been modeled in vivo by human male urethral challenge and murine female genital inoculation and in vitro using isolated primary PMNs or PMN-derived cell lines. These systems reveal that some gonococci survive and replicate within PMNs and suggest that gonococci defend themselves against PMNs in two ways: they express virulence factors that defend against PMNs’ oxidative and non-oxidative antimicrobial components, and they modulate the ability of PMNs to phagocytose gonococci and to release antimicrobial components. In this review, we will highlight the varied and complementary approaches used by N. gonorrhoeae to resist clearance by human PMNs, with an emphasis on gonococcal gene products that modulate bacterial-PMN interactions. Understanding how some gonococci survive exposure to PMNs will help guide future initiatives for combating gonorrheal disease.
PMn antiMicrobial activities
PMNs are the most abundant white cells in the peripheral blood of humans. They are professional phagocytes and the first line of defense of the innate immune system (Borregaard, 2010). In response to peripheral infection or damage, PMNs follow chemotactic cues to extravasate from the bloodstream and migrate through tissues to reach the target site. Mucosal epithelial cells and resident immune release chemokines for PMNs, including interleukin-8, interleukin-6, tumor necrosis factor-α, and interleukin-1 (Borregaard, 2010). These chemokines are released during human Gc infection (Ramsey et al., 1995;Hedges et al., 1998).
PMNs possess receptors to bind and phagocytose complementand antibody-opsonized particles [e.g., complement receptor 3 (CR3), FcRs]. They can also engulf unopsonized particles through lectinlike interactions or using receptors that are specific for ligands on the particle surface (Groves et al., 2008). Interaction between PMNs and a target particle results in the mobilization of different subsets of cytoplasmic granules to the plasma or phagosomal membrane (Figure 2). Granule fusion enables the degradation and killing of microorganisms both intracellularly and extracellularly (Borregaard et al., 2007). PMN mechanisms of microbial killing include production of reactive oxygen species (ROS) via the NADPH oxidase enzyme (the "oxidative burst") as well as the oxygen-independent activities of degradative enzymes and antimicrobial peptides ( Table 1). Human PMN granules are classified as azurophilic or primary granules, which contain myeloperoxidase, α-defensin peptides, and cathepsin G, among other antimicrobial components; specific or secondary granules containing the flavocytochrome b 558 subunit of NADPH oxidase, LL-37 cathelicidin, lactoferrin, and CR3; and gelatinase or tertiary granules containing gelatinase (Borregaard et al., 2007). PMN granules release their contents in a set order. Initially, gelatinase granule contents degrade extracellular matrix, allowing PMNs to migrate across the tissues underlying the site of infection. Next, the release of specific granules at the target destination increases phagocytic potential due to presentation of CR3 on the PMN surface.
Finally, the release of both specific and azurophilic granules creates an environment that is generally hostile to microbial survival (Lacy and Eitzen, 2008). PMNs also release neutrophil extracellular traps (NETs) composed of DNA, histones, and selected granule components, which trap and kill microbes without requiring phagocytosis (Papayannopoulos and Zychlinsky, 2009). Thus PMNs combine oxygen-dependent and -independent mechanisms to combat intracellular and extracellular microorganisms.
The fact that gonorrheal exudates contain viable Gc indicates that PMNs are ineffective at completely clearing Gc infection. There are two mechanisms that could explain how Gc survives PMN challenge: Gc prevents PMNs from performing their normal antimicrobial functions (phagocytosis, granule content release), or Gc expresses defenses against oxidative and non-oxidative components produced by PMNs (Figure 2). As we will discuss, there is now substantial evidence for both mechanisms, which ultimately enable Gc to survive within a host and be transmitted to new individuals.
MoDel systeMs For exaMininG PMns DurinG Gc PathoGenesis
Four experimental approaches have been used to investigate the involvement of PMNs in gonorrheal disease. Each has contributed to our understanding of how PMNs are recruited during acute gonorrhea and how Gc withstands this onslaught.
the Male urethral challenGe MoDel
Experimental human infection is limited to male urethral inoculation, due to the potential for severe complications such as pelvic inflammatory disease in women with gonorrhea (Cohen and Gc (blue diplococcus) attaches to the surface of PMNs and is engulfed into a phagosome (white oval). PMNs possess three classes of granules (1°, 2°, and 3°), each of which contains a unique subset of antimicrobial compounds. Granules fuse with the nascent phagosome or plasma membrane to deliver their contents to invading microorganisms. We propose two mechanisms that allow Gc to survive after exposure to PMNs. (For illustrative purposes, only intracellular Gc survival is depicted.) First, PMN granules release their contents at the plasma membrane or into phagosomes containing Gc (A). However, Gc virulence factors confer resistance to granules' antimicrobial compounds. Second, Gc prevents PMN granules from releasing their contents at the plasma membrane or into phagosomes, allowing the bacteria to avoid encountering PMN antimicrobial compounds (B). Either mechanism would enable a fraction of Gc to survive and replicate in the presence of PMNs (C). phagocytose and generate ROS in response to Opa-expressing Gc akin to primary human PMNs (Bauer et al., 1999;Pantelic et al., 2004). However, HL-60 cells do not possess the robust antimicrobial activity associated with primary cells, due in part to the absence of specific granules and other intracellular compartments (Le Cabec et al., 1997).
PriMary PMns
Research on the molecular mechanisms underlying Gc infection of PMNs has mostly relied upon primary human cells, purified from freshly isolated human blood. The abundance of PMNs in human blood and the ease of purification make PMNs amenable to infection with Gc in vitro. The limitations of working with primary PMNs include their short half-life, their limited capacity for genetic manipulation, and the person-to-person variability intrinsic to primary human cells. However, primary human PMNs have been used to measure binding and phagocytosis of Gc, quantify Gc survival after PMN exposure, and assess the roles of Gc virulence factors in bacterial defense against PMNs (see below). Gc infection of murine PMNs has also been conducted (Wu and Jerse, 2006;Soler-Garcia and Jerse, 2007). Future studies using primary PMNs along with cultured epithelial cells from relevant anatomic sites may provide a means to examine the complex interactions between host cells that occur during gonorrheal infection.
Gc interaction with PMNs is influenced by the physiological state of the PMNs being used. Initial experimentation with primary human PMNs utilized cells and Gc suspended in buffered saline solutions (Densen and Mandell, 1978;Rest et al., 1982), but this is unlikely to reflect the transmigrated, primed state of PMNs in the genitourinary tract during acute infection. Research from the laboratory of Dr. Richard Rest (Drexel University) demonstrated that when PMNs were allowed to adhere to tissue culture-treated dishes, they released granule components and bound significantly more Gc than PMNs in suspension (Farrell and Rest, 1990). Dr. Michael Apicella's laboratory (University of Iowa) subsequently developed an assay using collagen-adherent PMNs, which generated a system for studying the role of selected Gc virulence factors in bacterial survival after PMN challenge (Seib et al., 2005;Simons et al., 2005). We adapted the Apicella protocol to include PMN treatment with the chemokine interleukin-8, which facilitates PMN activation (Borregaard, 2010). We used this system to demonstrate Gc survival inside PMNs and to identify Gc proteins that defend the bacteria from killing by PMNs (Stohl et al., 2005;Criss et al., 2009).
Gc survival anD rePlication in the Presence oF PMns
Although the survival of Gc in association with PMNs was once hotly debated, there is now substantial evidence that gonococci survive and multiply within human phagocytes. Examination of urethral exudates by light and electron microscopy has repeatedly shown the presence of abundant PMNs with associated and internalized Gc (Ovcinnikov and Delektorskij, 1971;Farzadegan and Roth, 1975;Apicella et al., 1996). The fact that viable gonococci can be cultured from urethral exudates or cervical swabs is strongly suggestive of Gc survival in the presence of PMNs (Wiesner and Thompson, 1980). In vitro studies from the Apicella laboratory using adherent human PMNs demonstrated that over 50% of Gc internalized by PMNs remained viable for Cannon, 1999). Urethral infection of male volunteers results in the release of proinflammatory cytokines and appearance of PMNs in the urogenital tract 2-3 days after infection, similar to what is seen in natural cases of gonococcal urethritis (Cohen and Cannon, 1999). As in natural infections, exudates from males with experimental Gc infection contain PMNs with associated Gc and occasional exfoliated epithelial cells. Electron microscopic analysis of these exudates revealed that a subset of Gc inside PMNs appear intact, providing the initial evidence that Gc may survive within PMN phagosomes (Ovcinnikov and Delektorskij, 1971;Farzadegan and Roth, 1975;Apicella et al., 1996).
the FeMale Murine Genital tract MoDel
Dr. Ann Jerse (Uniformed Services University of the Health Sciences) has developed a female mouse model of Gc genital tract infection, which allows gonorrheal infection to be examined in a genetically tractable host. In this model, estradiol-treated mice are inoculated vaginally with Gc, which allows over 80% of mice to be colonized with bacteria for over 1 week. Infected mice produce inflammatory cytokines, leading to rapid appearance of PMNs in the genital tract (Jerse, 1999). Experimental infection of female mice has provided insight into the selective advantage of opacityassociated (Opa) protein expression on Gc survival and the roles of Gc virulence factors conferring in vitro resistance to ROS and antimicrobial peptides in in vivo infection (Jerse, 1999;Jerse et al., 2003;Wu and Jerse, 2006;Wu et al., 2009;Cole et al., 2010). Because mice lack the human-specific receptors and other components that are likely to be important for gonorrheal disease, future studies could employ mice transgenic for human proteins of interest. Inbred mice that are transgenic for human carcinoembryonic antigenrelated cellular adhesion molecules (CEACAMs) and CD46, receptors that are implicated in gonorrheal pathogenesis (Merz and So, 2000), have already been developed (Johansson et al., 2003;Gu et al., 2010), with additional mouse strains likely to be produced in the coming years.
iMMortalizeD PMn-like cell lines
The use of immortalized promyelocytic human cell lines to study the molecular mechanisms of Gc pathogenesis provides a system which is clonal, easy to maintain, and amenable to expression of transgenes. As one example, the leukemic HL-60 cell line can be differentiated into a PMN-like phenotype with retinoic acid (Collins et al., 1977;Newburger et al., 1979). Differentiated HL-60 cells can Proteins that have been shown to have or produce antimicrobial activity against Gc in vitro are bolded and italicized. Proteins to which Gc is resistant are indicated in red type.
The complement system is a key component of the innate immune system comprised of more than 30 proteins. The complement system can be activated by three routes: the classical, the alternative, and the lectin pathway, but all three routes normally proceed to proteolytic activation of the major complement protein C3 and assembly of the membrane attack complex (Ram et al., 2010). Gc has multiple ways of resisting the bactericidal activities of complement in normal human serum. Gc binds the complement regulatory proteins C4b-binding protein (C4BP) and factor H (fH) on its surface via porins and sialylated LOS (Ram et al., 1998a(Ram et al., ,b, 2001Gulati et al., 2005). C4BP restricts the amount of C3 which can be deposited by the classical complement pathway. fH is a cofactor for factor I-mediated cleavage of C3b to the hemolytically inactive iC3b. In the alternative pathway fH irreversibly dissociates factor Bb to limit C3 deposition and subsequent C5 cleavage (Ram et al., 2010). C4BP and fH provide defense against direct complement-mediated killing but concomitantly increase iC3b deposition on the Gc surface. iC3b is a ligand for CR3 (CD11b/CD18), which in PMNs drives actin-dependent particle engulfment into degradative phagolysosomes and production of ROS (Groves et al., 2008). Although it is assumed that Gc is complement-opsonized at mucosal surfaces, how opsonization impacts Gc survival after PMN exposure remains to be explored. up to 6 h, as determined by viable bacterial counts and electron microscopy . Our group corroborated these findings and directly detected viable extracellular and intracellular Gc after PMN infection, using dyes that reveal the integrity of bacterial membranes (Criss et al., 2009). We conclude from these studies that a fraction of Gc can survive both extracellularly and intracellularly in the presence of PMNs.
There is evidence that Gc does not only persist within PMNs, but also uses the PMNs as a site for replication. Pioneering studies in the 1970s showed that Gc inside exudate-derived PMNs were sensitive to penicillin, which only kills replicating bacteria. In the presence of antimicrobial agents such as spectinomycin or pyocin that cannot permeate eukaryotic membranes, numbers of PMNassociated Gc increased over time, indicative of bacterial replication inside exudatous and in vitro-infected PMNs Casey et al., 1979Casey et al., , 1980Casey et al., , 1986. Using electron microscopy and colony counts, the Apicella laboratory observed an increase in Gc within collagen-adherent human PMNs over a 6-h infection, results also suggestive of intracellular replication . Similarly, we used bacterial viability dyes to observe an increase in the number of viable Gc inside PMNs over time (Criss et al., 2009). While the advantage of Gc replicating inside terminally differentiated cells of a limited life span is questionable, the Apicella group showed that PMNs infected with Gc delay their spontaneous apoptosis (Simons et al., 2006). We anticipate that advances in cellular imaging will provide additional support for Gc replication inside PMNs and will give insight into the timing and extent of this event.
binDinG anD PhaGocytosis oF Gc by PMns
Since gonorrheal secretions contain PMNs associated with viable intracellular and extracellular bacteria, Gc must possess factors that promote attachment and phagocytosis by PMNs. Opsonic and non-opsonic interactions are the two basic means of phagocytosis, both of which may be utilized by Gc (Groves et al., 2008; Figure 3).
oPsonic uPtake
The two major opsonins for PMN phagocytosis are immunoglobulins and complement, which bind to Fc receptors and complement receptors such as CR3, respectively (Groves et al., 2008). Patients with gonorrhea produce opsonic IgG and IgA directed against Gc surface-exposed components including porin, Opa proteins, pilin, iron-regulated outer membrane proteins, and lipooligosaccharide (LOS) (Brooks et al., 1976;McMillan et al., 1979;Tramont et al., 1980;Rice and Kasper, 1982;Siegel et al., 1982;Lammel et al., 1985;Schwalbe et al., 1985). Intriguingly, serum from individuals with no prior history of gonorrhea contains opsonic IgG against Gc porin and IgM against Gc LOS isotypes containing hexosamine; the non-Gc antigens recognized by these antibodies are not known (Sarafian et al., 1983;Griffiss et al., 1991). Many of the Gc surface structures that promote humoral immune responses are phase and antigenically variable and thus evade antibody-mediated immune surveillance (Virji, 2009). Also, Gc secretes an IgA protease that cleaves the polymeric IgA in mucosal secretions (Blake and Swanson, 1978). Thus complement rather than antibodies is likely to drive the opsonic phagocytosis of Gc by PMNs.
Figure 3 | Opsonic and non-opsonic phagocytosis of gc by PMNs. (A)
Antibodies that recognize Gc surface structures opsonize the bacteria and allow for phagocytosis via Fc receptors. The efficacy of immunoglobulinmediated phagocytosis is questionable given the extensive phase and antigenic variation of Gc surface structures. (B) Gc binds factor H and C4 binding protein, resulting in opsonization of Gc with C3 and other complement components. Gc is then phagocytosed via the CR3 receptor. Gc pili and porin can cooperatively interact with CR3, which may mediate the non-opsonic phagocytosis of Gc by PMNs. (C) Selected Opa proteins bind to CEACAM family receptors expressed on PMNs, leading to non-opsonic phagocytosis of Opa + Gc.
Gc DeFenses aGainst PMn antiMicrobial activities
Whether they remain extracellular or are phagocytosed by PMNs, Gc must contend with the variety of oxidative and non-oxidative antimicrobial components produced by PMNs (Figure 2). Gc isolated directly from human material or guinea pig subcutaneous chamber fluid display increased survival in the presence of phagocytes compared to Gc grown in vitro (Witt et al., 1976;Veale et al., 1977), suggesting that Gc possesses factors necessary for defending against phagocyte killing that are lost or altered with extended in vitro culture. These Gc factors aid Gc in resisting the toxic activities of PMNs in two ways. First, Gc prevents PMNs from producing or releasing antimicrobial components. Second, Gc expresses virulence factors that defend against these components. As we will describe, many Gc gene products have been identified that protect Gc from purified ROS, proteases, or antimicrobial peptides, but in most cases their roles in defense against PMNs have not yet been investigated.
DeFenses aGainst oxiDative DaMaGe
The major species of ROS include superoxide anion, hydrogen peroxide, and hydroxyl radical. These ROS have different reactivities and half-lives, but together they induce DNA, protein, and cell membrane damage that can lead to cell death (Fang, 2004).
There are at least four potential sources of oxidative stress for Gc in vivo.
(1) PMN NADPH oxidase transports electrons across the phagosomal or plasma membrane to generate superoxide, which spontaneously dismutates to hydrogen peroxide. In PMNs, the azurophilic enzyme myeloperoxidase uses hydrogen peroxide as a substrate to generate hypochlorous acid (bleach; Roos et al., 2003). Phagocytes can also produce reactive nitrogen species (RNS) such as nitric oxide and peroxynitrite, but RNS appear to be of limited importance in human PMN antimicrobial activity (Fang, 2004).
(2) Enzymes related to phagocyte NADPH oxidase are expressed in epithelial cells, and the survival defect of Gc antioxidant mutants inside primary cervical cells implies that epithelial cells may also be an important source of oxidative stress for Gc (Wu et al., 2005Achard et al., 2009;Potter et al., 2009). (3) Lactobacillus species that generate hydrogen peroxide are normally found in the vaginal flora of women (Eschenbach et al., 1989). Women with inhibitory lactobacilli are less likely to be infected with Gc (Saigh et al., 1978), and lactobacilli inhibit Gc growth in vitro (Saigh et al., 1978;Zheng et al., 1994;St Amant et al., 2002). However, it appears that effects of lactobacilli on Gc may be independent of hydrogen peroxide production, since mucosal secretions can effectively quench lactobacilli-derived ROS (O'Hanlon et al., 2010). (4) Gc generate ROS during aerobic respiration, although this may be less of an issue in vivo, where the oxygen tension in the genitourinary tract is low (Archibald and Duong, 1986). Gc defenses against oxidative stress involve manipulation of the PMN oxidative burst, detoxifying or repair of oxidative damage, and transcriptional upregulation of antioxidant gene products ( Figure 4A).
Gc manipulation of the PMN oxidative burst
In the absence of Opa protein expression, Gc fails to induce the PMN oxidative burst (Rest et al., 1982;Virji and Heckels, 1986;Fischer and Rest, 1988;Criss and Seifert, 2008). Even in the presence of Opa + Gc that induce ROS production in PMNs, the magnitude of ROS production is small relative to stimuli such as phorbol esters or other non-oPsonic uPtake In the absence of antibodies or complement, efficient binding and engulfment of Gc by PMNs is achieved via expression of colony Opa proteins Virji and Heckels, 1986;Fischer and Rest, 1988). Opa proteins, formerly known as "protein II," are a family of closely related, 20-30 kD outer membrane proteins that facilitate Gc binding and internalization by human cells, including PMNs (Sadarangani et al., 2011). Gc strains possess approximately 11 opa genes encoding 7-8 antigenically distinct Opa proteins (Connell et al., 1990;Dempsey et al., 1991). Each opa gene is phase-variable due to slipped-strand mispairing in a pentameric nucleotide repeat that places the gene in our out of frame (Murphy et al., 1989), such that individual Gc can express zero, one, or any possible combination of Opa proteins. Differential expression of Opa proteins can influence bacterial tropism for host cell types and provides a mechanism of immune evasion (Sadarangani et al., 2011). Opacity-associated proteins bind heparan sulfate proteoglycans (HSPGs) and/or CEACAMs. Only those Opa proteins that bind CEACAMs are reported to influence Gc interactions with PMNs (Sadarangani et al., 2011). The Opa-binding CEACAMs on PMNs are CEACAM1, CEACAM3, and CEACAM6, with CEACAM3 expression exclusively restricted to PMNs. CEACAM1 and CEACAM3 are transmembrane proteins, while CEACAM6 possesses a glycosylphosphatidylinositol anchor (Gray-Owen and Blumberg, 2006). Binding of Opa proteins to any of the three CEACAMs results in Gc internalization, but via different signaling events (McCaw et al., 2004).
Opacity-associated protein expression is selected for in the male urethra, the female cervix during the follicular phase of the menstrual cycle, and in the murine cervix Swanson et al., 1988;Jerse et al., 1994;Jerse, 1999). However, Opa − Gc survives better after exposure to PMNs in vitro than isogenic Opa + Gc (Rest et al., 1982;Virji and Heckels, 1986;Criss et al., 2009). Opa protein expression increases Gc phagocytosis by PMNs and stimulates PMN ROS production, and both factors may influence bacterial survival after exposure to PMNs (Rest et al., 1982;Fischer and Rest, 1988).
Gc surface structures other than Opa proteins may contribute to adherence and phagocytosis by PMNs. Pili and porin cooperatively interact with CR3 on cervical epithelial cells (Edwards et al., 2002). It is not known if this interaction occurs on PMNs, but if it were to occur, it would drive non-opsonic uptake of Gc by PMNs. In vitro studies suggested that "type 1," virulent, piliated Gc were resistant to phagocytosis and killing by PMNs compared to "type 4," avirulent, non-piliated bacteria that arise after extensive laboratory passage (Ofek et al., 1974;Dilworth et al., 1975). We now know that type 1 and type 4 Gc vary in Opa expression as well as piliation, both of which could have contributed to these observations. Purified porins also decrease PMN actin polymerization, which may reduce the phagocytosis of Gc by PMNs (Bjerknes et al., 1995). Serogroup C strains of N. meningitidis with lacto-N-neotetraose (LNnT) on LOS are phagocytosed by neutrophils in an opsonin-independent manner (Estabrook et al., 1998); it has not been examined whether this LOS epitope on Gc affects phagocytosis by PMNs. Together, the combinatorial expression of Opa proteins, pili, porin, and LOS modulate Gc binding and internalization by PMNs. play a significant role in protection against oxidative stress (Tseng et al., 2001). In comparison, KatA is crucial to Gc defense against ROS. Gc has approximately 100-fold higher levels of catalase than E. coli (Hassett et al., 1990). Disruption of katA significantly reduces Gc survival to hydrogen peroxide and superoxide in vitro (Johnson et al., 1993;Soler-Garcia and Jerse, 2004;Stohl et al., 2005) and reduces the survival of some strains of Gc in the female murine genital tract (Wu et al., 2009). Gc also has high peroxidase Gc activity due to the periplasmic cytochrome c peroxidase encoded by ccp (Archibald and Duong, 1986). ccp mutant Gc show slight sensitivity to hydrogen peroxide, which is markedly enhanced when katA is also inactivated (Turner et al., 2003). Gc also imports Mn(II) into its cytoplasm via the MntABC transporter, where it scavenges superoxide and hydrogen peroxide by a mechanism independent of SodB and catalase (Tseng et al., 2001). This system is similar to the manganese transport system in Lactobacillus plantarum (Archibald and Duong, 1984).
Gc can also repair oxidative damage to proteins and DNA. Gc expresses two forms of methionine sulfoxide reductase, which reverses the oxidation of methionine residues in proteins. The MsrA protein is localized to the cytoplasm, while MsrB is secreted to the outer membrane. A msrAB mutant is more sensitive to hydrogen peroxide and superoxide in vitro than its wild-type parent (Skaar et al., 2002). bacteria . Gc utilizes three mechanisms to reduce the amount of ROS produced by PMNs. First, exposure to lactate that is released from PMNs undergoing glycolysis stimulates the rate of Gc oxygen consumption, reducing the amount available to PMNs as a substrate for NADPH oxidase (Britigan et al., 1988). Second, purified Gc porin inhibits PMN ROS production in response to Gc, yeast particles, and latex beads (Lorenzen et al., 2000), but not formylated peptides (Haines et al., 1988;Bjerknes et al., 1995). Whether porin has this effect in the context of whole Gc bacteria remains to be examined. Third, we reported that Opa − Gc suppresses the PMN oxidative burst induced by serum opsonized staphylococci and formylated peptides by a process requiring bacterial protein synthesis and bacteria-PMN contact; the bacterial products mediating this effect are not known at this time (Criss and Seifert, 2008).
Detoxification and repair of oxidative damage
Bacteria respond to oxidative stress by catalysis of superoxide to hydrogen peroxide by superoxide dismutase (SOD), which is then converted to water and molecular oxygen by catalases and peroxidases . Gc possesses a single cytoplasmic superoxide dismutase (SodB), one cytoplasmic catalase (KatA), and several genes annotated as peroxidases. SodB activity is low in Gc and does not Resistance to PMN non-oxidative damage. Gc pili and/or porins prevent PMN granules from releasing non-oxidative antimicrobial components. LOS protects Gc outer membrane proteins such as Opa and porin from proteolysis by cathepsin G. Sialylation of LOS by Lst and PEA modification of LOS by LptA increases bacterial resistance to cathepsin G and other antimicrobials. The MisR/ MisS two-component regulator increases expression of LptA and other gene products that confer resistance to PMN non-oxidative damage. Ngo1686 and RecN also protect Gc from PMN non-oxidative damage. The MtrCDE and FarAB efflux pumps export cationic antimicrobial peptides (CAMPs) and long-chain fatty acids (FA) from the Gc cytoplasm, respectively. In most cases, the contribution of these virulence factors to Gc survival after exposure to PMNs remains to be determined. permeability-increasing protein ("hCAP57"), cathepsin G protease, and LL-37 antimicrobial peptide (Casey et al., 1985;Shafer et al., 1986Shafer et al., , 1998. Unlike many Gram-negative bacteria, Gc are highly resistant (>0.2 mg/ml) to another class of antimicrobial peptides, the defensins (Qu et al., 1996), although the observed resistance varies depending on experimental conditions used (Porter et al., 2005). Of PMN non-oxidative granule components, cathepsin G and LL-37 have been the most actively studied for their effects on Gc.
Cathepsin G is a highly cationic serine protease which resides in PMN azurophilic granules. It enzymatically cleaves Gc outer membrane proteins including porin and Opa proteins (Rest and Pretzer, 1981;Shafer and Morse, 1987). However, heat and protease inhibitors do not impede cathepsin G's ability to kill Gc in vitro, indicating its antigonococcal activity is independent of its proteolytic activity . Cathepsin G can insert into Gc membranes, but killing does not appear to be due to changes in membrane permeability; instead, cathepsin G may impede peptidoglycan biosynthesis (Shafer et al., 1990).
LL-37 is the active form of an 18 kD protein precursor ("hCAP18") which resides in specific granules. hCAP-18 is proteolytically processed to LL-37 by the azurophilic granule protein proteinase-3 (Sorensen et al., 2001). hCAP-18/LL-37 is also synthesized by mucosal epithelial cells, and is readily detected in cervicovaginal secretions (mean LL-37 concentration of 10 μg/ml) and seminal plasma (mean hCAP-18 concentration of 86 μg/ml; Malm et al., 2000;Tjabringa et al., 2005). Gc infection increases the levels of hCAP18/LL-37 by two-to four-fold in cervical and urethral washes (Porter et al., 2005;Tjabringa et al., 2005). These concentrations of LL-37 would be sufficient to exert antibacterial activity on Gc, since the mean inhibitory concentration of LL-37 for Gc is 6 μg/ml (Shafer et al., 1998). The antigonococcal mechanism of action of LL-37 remains enigmatic and may be related to its ability to form pores that disrupt the integrity of bacterial membranes (Brogden, 2005).
Although Gc are susceptible to cathepsin G and LL-37 in vitro, the ability of some percentage of Gc to survive PMN exposure suggests that the bacteria have evolved mechanisms to counter these antimicrobial components. These mechanisms involve direct modulation of PMN granule release, changes to the Gc surface to resist non-oxidative antimicrobial components, and active export of these components ( Figure 4B).
Modulation of PMN granule release
Both pili and porin have been reported to reduce PMN granule fusion with the plasma membrane or phagosomes. When added to primary PMNs, purified porins inhibit azurophilic and specific granule exocytosis (Bjerknes et al., 1995;Lorenzen et al., 2000). "Type 1," piliated Gc was also reported to inhibit azurophilic granule exocytosis relative to "type 4," non-piliated Gc, but additional surface structures expressed on type 1 bacteria may have mediated this result (Densen and Mandell, 1978). More detailed studies with isogenic Gc strains are necessary to determine whether and how Gc surface structures influence granule mobilization.
Modifications to the Gc surface
Gc LOS is thought to mask proteins which are degraded by cathepsin G, since truncation or loss of LOS results in increased binding of cathepsin G and increased susceptibility to cathepsin G-mediated Many Gc gene products involved in recombinational DNA repair, base excision repair, and nucleotide excision repair participate in Gc defense against ROS, such as the recombinase RecA and the DNAbinding protein RecN (Davidsen et al., 2005;Stohl et al., 2005;Stohl and Seifert, 2006;LeCuyer et al., 2010). The putative metalloprotease Ngo1686 helps protect Gc from hydrogen peroxide and the lipid oxidant cumene hydroperoxide, but the cellular targets with which Ngo1686 interacts are currently unknown (Stohl et al., 2005). Both ngo1686 and recN mutants have significant survival defects after exposure to primary human PMNs, but a recA mutant does not (Stohl et al., 2005;Criss et al., 2009).
Transcriptional induction of antioxidant gene products
Gc pre-exposed to hydrogen peroxide survives PMN challenge significantly better than unexposed Gc (Criss et al., 2009). This finding implies that Gc possesses complex transcriptional circuitry that is important for defenses against ROS and/or PMNs. The transcriptome of Gc exposed to sublethal concentrations of hydrogen peroxide was defined and revealed the upregulation of transcripts encoding RecN, Ngo1686, and other antioxidants after oxidative challenge (Stohl et al., 2005). Antioxidant gene expression is regulated by selected transcriptional repressors. The OxyR protein represses KatA expression, which is relieved following oxidative stress in order to increase catalase production (Tseng et al., 2003). PerR is responsive to Mn(II) levels and represses expression of MntC, part of the Mn(II) transporter . Finally, Ngo1427, a LexA homolog, represses expression of RecN, which is relieved when a cysteine residue is oxidized (Schook et al., 2011).
PMns PriMarily Direct non-oxiDative antiMicrobial coMPonents aGainst Gc
Although Gc has complex mechanisms for detecting oxidative damage and responding to it, the importance of these processes in Gc survival to PMNs appears to be limited. Mutants in katA, sodB, ccp, or mntABC, alone or in combination, do not affect the percentage of Gc that can survive PMN challenge (Seib et al., 2005;Criss et al., 2009). Moreover, Gc survival is similar between normal PMNs and ROS-deficient PMNs obtained from patients with chronic granulomatous disease (CGD; Rest et al., 1982;Criss and Seifert, 2008), and PMNs maintained in anoxic conditions, as are likely to be found in the upper reproductive tract of females, are not impaired for antigonococcal activity (Casey et al., 1986;Frangipane and Rest, 1992). Our group showed that Gc survival was unaffected after exposure to PMNs treated with diphenyleneiodonium (DPI), an inhibitor of NADPH oxidase. DPI treatment or CGD PMNs did not increase the percent survival of ngo1686 or recN Gc, nor did it enhance survival of Opa + Gc that induce ROS from PMNs (Criss et al., 2009). From these results, we conclude that PMNs primarily direct non-oxidative antimicrobial activities against Gc. The functional redundancy in Gc antioxidant defenses may be sufficient to counter PMN-derived ROS; alternatively, PMNs may not generate enough ROS during infection to affect Gc survival.
DeFenses aGainst non-oxiDative DaMaGe
Seminal research from the Rest and Shafer laboratories indicated that components found inside PMN granules display oxygenindependent antigonococcal activity (Rest, 1979;Casey et al., 1985;Rock and Rest, 1988). These components include the bactericidal/ FarB cytoplasmic membrane transporter, and MtrE. Far expression is believed to be important for survival of isolates at the rectal mucosal surface, which is rich in diet-derived fatty acids, and does not contribute to Gc survival in the murine genital tract (Jerse et al., 2003). How the Far system contributes to defense of Gc against PMNs, which may release fatty acids (Huang et al., 2010), remains to be explored.
Discussion
Despite the prevalence of gonorrhea in the human population and the abundance of PMNs during acute gonorrheal disease, we are just beginning to understand the molecular mechanisms underlying Gc interactions with and resistance to PMNs. There are three overarching questions which remain currently in the field. First, how does a subset of Gc survive PMN challenge? As we have described, Gc possesses gene products which protect against oxidative and non-oxidative components that are made by PMNs. Many of these gene products are necessary for in vitro protection against isolated antimicrobial components and some provide a selective advantage in vivo. However, in many cases, it has not been investigated whether these gene products also confer a survival advantage in the context of PMN challenge. Second, how does Gc persist over time inside PMNs, as is seen in PMNs isolated from gonorrheal exudates? Although virulence-associated Gc surface structures such as Opa proteins, pili, porin, and LOS have been highly investigated for their biochemistry and impact on Gc-epithelial interactions, their effects on Gc survival inside PMNs remain enigmatic. How complement or immunoglobulin opsonization affects Gc phagocytosis by and survival inside PMNs also needs to be examined. Finally, how and why does Gc stimulate PMN recruitment? That is, what is the benefit of recruiting professional antimicrobial cells to the site of Gc infection? Given the long history of Gc in the human population, Gc could have evolved mechanisms for inhibiting PMN recruitment; instead, Gc LOS and lipoproteins are strong initiators of the host innate immune response (Massari et al., 2002;Pridmore et al., 2003;Zughaier et al., 2004). The answer to this question remains enigmatic, but may be revealed once we have a better understanding of how Gc manipulates PMNs in vitro and in vivo.
Our current knowledge of Gc interactions with PMNs demonstrates the impressive ability of Gc to survive PMN challenge. Although we are just beginning to piece together the roles of many Gc surface structures and gene products in Gc survival after PMN exposure, we now have model systems in hand that will allow these issues to be directly addressed. We are optimistic that continuing to investigate the mechanisms used by Gc to defend against PMN antimicrobial responses will shed light on how Gc has remained a fixture in the human population for all of recorded history (Wain, 1947;Morton, 1977). This research also has the potential to reveal novel human and Gc targets that can be exploited for new therapeutics to treat the ever-growing threat of highly antibiotic-resistant gonorrhea.
acknowleDGMents
We thank Dr. Jeffrey Tessier for obtaining the clinical sample for the image in Figure 1. We thank Louise Ball, Joanna Goldberg, and the two reviewers for helpful comments on the manuscript. This work was supported in part by NIH R00 TW008042 (Alison K. Criss) and T32 AI007046 (M. Brittany Johnson).
killing (Shafer, 1988). Two modifications of LOS impact bacterial interactions with host cells and host defenses: phosphoethanolamine (PEA) substitution on lipid A or the oligosaccharide, and sialylation of the terminal Galβ1-4GlcNAc epitopes of the oligosaccharide (Mandrell et al., 1990;Plested et al., 1999). PEA addition to the heptose group on the beta chain of the core oligosaccharide enhances Gc serum resistance but does not affect susceptibility to antimicrobial peptides (Lewis et al., 2009). In contrast, PEA addition to lipid A by the LptA enzyme increases resistance to both normal human serum and cationic antimicrobial peptides¸ indicating that structural changes in LOS contribute to the ability of gonococci to resist the bactericidal action of these innate immune components (Lewis et al., 2009). In the related bacterium N. meningitidis, expression of lptA is positively regulated by the misR/misS two-component regulatory system (Newcombe et al., 2005;Tzeng et al., 2008). The roles of MisR/MisS and LptA in Gc pathogenesis remain to be examined. The gonococcal α-2,3-sialyltransferase Lst transfers sialyl groups from host-derived CMP-N-acetylneuraminic acid to the terminal galactose residue on the oligosaccharide of LOS (Gilbert et al., 1996). Sialylation contributes to Gc resistance to normal human serum as well as PMN-derived oxygen-independent antimicrobial factors Parsons et al., 1992). Importantly, sialylated Gc are more resistant to PMNs in vitro, and sialylation contributes to Gc survival in the murine female genital tract (Kim et al., 1992;Rest and Frangipane, 1992;Gill et al., 1996;Wu and Jerse, 2006). In addition to LOS, changes in other surface components may contribute to Gc resistance to non-oxidative antimicrobial factors. For instance, loss of Opa expression enhances Gc resistance to serine proteases (Blake et al., 1981;Cole et al., 2010), and N. meningitidis lacking pili (due to insertional mutagenesis of the pilMNOPQ operon) are more resistant to the model antimicrobial peptide polymyxin B (Tzeng et al., 2005).
Gc export of antimicrobial components
The multiple transferable resistance (mtr) locus is a key determinant of Gc resistance to antimicrobial agents (Shafer et al., 1998). Mtr, a member of the resistance-nodulation-cell division (RND) family of efflux pumps, is encoded by a three gene operon designated mtrCDE. MtrC spans the periplasm to link the inner membrane protein MtrD, the multidrug efflux transporter, with outer membrane protein MtrE, the channel for export of antimicrobials to the extracellular environment . MtrCDE uses the proton motive force to export a variety of compounds from the Gc cytoplasm, including antibiotics, detergents, and antimicrobial peptides Veal et al., 2002). mtrCDE is negatively regulated by the MtrR transcriptional repressor and positively regulated by the MtrA transcriptional activator (Rouquette et al., 1999). Mutations in mtrR and mtrA that modulate expression of MtrCDE affect Gc resistance to antimicrobial peptides . MtrCDE expression promotes Gc survival in the murine female genital tract (Jerse et al., 2003) and enhances resistance to murine antimicrobial peptides (Warner et al., 2008), but its role in defense of Gc against PMNs is unclear. Gc also uses the FarAB efflux pump system to confer resistance to long-chain fatty acids, independent of Mtr activity (Lee and Shafer, 1999). The Far system is composed of the FarA membrane-spanning linker, the | 2014-10-01T00:00:00.000Z | 2011-03-16T00:00:00.000 | {
"year": 2011,
"sha1": "1eba61e0594c6740329d20ad68714af4265226c4",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2011.00077/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1eba61e0594c6740329d20ad68714af4265226c4",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
52183569 | pes2o/s2orc | v3-fos-license | Hypertension Assessment via ECG and PPG Signals: An Evaluation Using MIMIC Database
Cardiovascular diseases (CVDs) have become the biggest threat to human health, and they are accelerated by hypertension. The best way to avoid the many complications of CVDs is to manage and prevent hypertension at an early stage. However, there are no symptoms at all for most types of hypertension, especially for prehypertension. The awareness and control rates of hypertension are extremely low. In this study, a novel hypertension management method based on arterial wave propagation theory and photoplethysmography (PPG) morphological theory was researched to explore the physiological changes in different blood pressure (BP) levels. Pulse Arrival Time (PAT) and photoplethysmogram (PPG) features were extracted from electrocardiogram (ECG) and PPG signals to represent the arterial wave propagation theory and PPG morphological theory, respectively. Three feature sets, one containing PAT only, one containing PPG features only, and one containing both PAT and PPG features, were used to classify the different BP categories, defined as normotension, prehypertension, and hypertension. PPG features were shown to classify BP categories more accurately than PAT. Furthermore, PAT and PPG combined features improved the BP classification performance. The F1 scores to classify normotension versus prehypertension reached 84.34%, the scores for normotension versus hypertension reached 94.84%, and the scores for normotension plus prehypertension versus hypertension reached 88.49%. This indicates that the simultaneous collection of ECG and PPG signals could detect hypertension.
Introduction
Hypertension is a major factor of many cardiovascular diseases (CVDs), which are a group of disorders of the heart and blood vessels, including coronary heart disease, cerebrovascular disease, peripheral arterial disease, rheumatic heart disease, etc. [1]. Although sometimes there are symptoms of headache, lack of breath, chest pain, and so on, for most people with hypertension, there are no symptoms at all. Therefore, it is also known as the "silent killer", and 13% of global death is attributed to it [1]. With each heartbeat, blood is pumped via the contraction of the heart and flows through the whole body following the arterial system. Blood pressure is formed by the main propulsion of the heart's pumped blood and blockage of the microcirculatory system. Therefore, the higher is the blood pressure, the more difficult it is for the heart to pump. This undoubtedly increases the burden of the heart and, in the long term, will lead to a series of CVDs and damage to the heart, blood vessels, brain, kidneys, and so on.
Fortunately, blood pressure is the most important preventable factor of CVDs. Early prevention and management of hypertension are the major and most effective means of improving people's health levels worldwide. Healthy lifestyles (healthy diet, non-alcohol consumption, non-tobacco use, and physical activity), early detection, evaluation of blood pressure levels, proper diagnosis, and treatment with low-cost medication are beneficial in the prevention and control of hypertension [2]. The seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC7) [3], which is funded and published by the US National Institutes of Health, is widely adopted. According to this report, different BP levels are divided into different hypertension categories, including normotension, prehypertension, stage 1 hypertension, and stage 2 hypertension. Due to the number of research participants, this study adopted the three BP categories of normotension, prehypertension, and hypertension, labeled according to the BP ranges of the JNC7 report [3].
Clearly, earlier attention and treatment are more effective in preventing hypertension and other CVDs. However, as we know, most hypertension patients have no symptoms in the stage of elevated blood pressure and even in hypertension. Thus, many people miss the best time for treatment and experience some complications. However, some physiological signals change based on blood pressure level [4,5], such as electrocardiogram (ECG) and photoplethysmography (PPG). The morphological changes in physiological signals mainly reflect the change of function status of the heart and vascular system. Therefore, the morphological information of PPG could be used to assess hypertension [6]. For this purpose, the Medical Information Mart for Intensive Care (MIMIC) database [7,8] was used to collect the dataset for this study, which involves arterial blood pressure (ABP), ECG and PPG signals.
Many researchers have used the MIMIC database assuming that all simultaneously collected signals were synchronized [9][10][11][12][13]. However, the creators of the MIMIC database have reported errors in the data matching and alignment in some recordings, as mentioned by Clifford et al. [14], confirming that not all signals were synchronized. This contradiction motivated our study, and we thought it would be useful to test the synchronicity-dependent features (features that rely on the time interval between ECG and PPG events) and asynchronicity-dependent features (features that rely only on features extracted from PPG events) to gain insights about the usability of the MIMIC database for evaluating hypertension either by using ECG and PPG signals or by using PPG alone.
The rest of this paper is organized as follows: Section 2 explains the methods used in this study, including data collection, signal process, and feature extraction. Section 3 shows the comparison results of the different classification models and different feature sets. Finally, Sections 4 and 5 discuss the results and conclusions on the differences and optimizations of arterial wave propagation theory and PPG morphological theory, respectively.
Database
In this study, the data were collected from the MIMIC database, which is a free-to-use database that contains tens of thousands of Intensive Care Unit (ICU) patients [7,8]. The recordings with arterial blood pressure (ABP, measured using a catheter in the radial artery), electrocardiograph (ECG) and photoplethysmography (PPG) were collected and archived for this study. During data collection, there were some abnormal and noisy recordings, for example, missing peak, pulsus bisferiens, no signal (sensor-off), and so on. These recordings were excluded in this study. Meanwhile, to explore and model the relationship between ABP, ECG, and PPG signals, 120 one-second-length signals with stable, complete ECG, ABP, and PPG signals without heart disease except for hypertension were cut from raw recordings for each subject. In the end, 121 subjects' records, each 120 s in length, were collected in this research.
PAT Feature
The definition of PAT in the literature is highly inconsistent [15]. On many occasions, PAT is used to refer to Pulse Transit Time (PTT), and in other publications, PTT referred to as PAT [15]. Moreover, the calculation of PAT is not consistent in the literature, but some general convention exists. It is worthy to note that the R-wave of an ECG and the foot of the PPG waveform is perhaps the most commonly used in the literature. In 2013, Choi et al. [16] tested three different measurement points for PAT, including the peak (PAT RS ), middle (PAT Rb-2 or PAT W-1 ), and end (PAT RO ) of the PPG waveform. Their study recommended the use of PAT-middle as it is highly correlated with BP, and therefore, in our study we used the PAT Rb-2 to represent the PAT feature.
PPG Features
As the referenced blood pressure source of this study, the original ABP signal did not undergo any preprocessing. Systolic blood pressure (SBP) and blood pressure categories were extracted and labeled directly from the original ABP waveform signal [17]. A 0.5-10 Hz 4th Chebyshev II bandpass filter was adopted to remove the noise of raw PPG signals and improve the signal quality index (SQI) [18], and a 0.5-40 Hz 4th Butterworth bandpass filter was used to filter the noise of raw ECG signals [19]. Additionally, a normalization process was conducted for filtered ECG and PPG signals to divide the pulsating part of blood volume (the AC part) by the non-pulsating part (the DC part). Further, two forward differential processes were implemented to acquire the velocity waveform of PPG signals (VPG) and the acceleration waveform of PPG signals (APG). Note, the first order differential to the original PPG signal to obtain the VPG signal, and the second order differential to the original PPG signal to obtain the acceleration of PPG waveform signals. To visualize main events within these signals, the TERMA framework [20] and Eventogram [21] can be used. Finally, ECG, PPG, VPG, and APG were together regarded as the feature extraction signal resources.
In this study, the feature points were extracted beat by beat, and the heart-beat pair was divided by the R wave of the ECG, which was identified by a reliable detector [22][23][24]. In one beat period, some feature points of PPG and its derivatives were defined [25], and the detailed waveforms and names are clearly marked in Figure [25]. The PPG amplitude is represented by the feature name and the amplitudes is represented by the height from PPG baseline to feature points such as a, a-1, a-2, etc. The shaded area contains features associated with hypertension.
Classification Models
Several classifiers are discussed in the literature; however, we selected four distinctive classifiers: Logistic Regression, AdaBoost Tree, Bagged Tree, and K Nearest Neighbors (KNN). These classifiers represent different classification theories such as regression, decision tree, cluster, and bagged decision tree. From the results, we can see that KNN achieves better classification performance than the others. As we know, KNN is a very common classifier that can be used in many applications and can be easy to realize.
In this study, the dataset was divided into a training set (70%) and a testing set (30%). In the training phase, the training adopted 10-fold cross validation to validate the generalization ability of the trained model. The trained model was then used to predict for the testing set. The F1 score was calculated as an evaluation measure, as follows: where Precision = TP/(TP + FP) and Recall = TP/(TP + FN). TP stands for true positives, FP stands for false positives, and FN stands for false negatives. To comprehensively evaluate the trained models, various evaluation indexes were used, including sensitivity (SE), specificity (SP), and the F1 score, which is the harmonic mean of precision and sensitivity. In this study, all the signal processing, modeling, and evaluating were carried out via MATLAB software (R2017b version), developed and released by MathWorks (Natick, MA, USA) company.
Results
In our past research [18,25,26], we conducted a BP management study based on a clinical dataset collected in China by a PPG device designed for that study. In that study, 10 PPG features were evaluated and selected for BP category classification. Based on that study, the same 10 PPG features The characteristics of arterial blood pressure (ABP), electrocardiogram (ECG), photoplethysmogram (PPG), velocity photoplethysmogram (VPG), and acceleration photoplethysmogram (APG) waveforms. The definition of feature points can be found in our past research [25]. The PPG amplitude is represented by the feature name and the amplitudes is represented by the height from PPG baseline to feature points such as a, a -1 , a -2 , etc. The shaded area contains features associated with hypertension.
Classification Models
Several classifiers are discussed in the literature; however, we selected four distinctive classifiers: Logistic Regression, AdaBoost Tree, Bagged Tree, and K Nearest Neighbors (KNN). These classifiers represent different classification theories such as regression, decision tree, cluster, and bagged decision tree. From the results, we can see that KNN achieves better classification performance than the others. As we know, KNN is a very common classifier that can be used in many applications and can be easy to realize.
In this study, the dataset was divided into a training set (70%) and a testing set (30%). In the training phase, the training adopted 10-fold cross validation to validate the generalization ability of the trained model. The trained model was then used to predict for the testing set. The F1 score was calculated as an evaluation measure, as follows: where Precision = TP/(TP + FP) and Recall = TP/(TP + FN). TP stands for true positives, FP stands for false positives, and FN stands for false negatives. To comprehensively evaluate the trained models, various evaluation indexes were used, including sensitivity (SE), specificity (SP), and the F1 score, which is the harmonic mean of precision and sensitivity. In this study, all the signal processing, modeling, and evaluating were carried out via MATLAB software (R2017b version), developed and released by MathWorks (Natick, MA, USA) company.
Results
In our past research [18,25,26], we conducted a BP management study based on a clinical dataset collected in China by a PPG device designed for that study. In that study, 10 PPG features were evaluated and selected for BP category classification. Based on that study, the same 10 PPG features and new extracted PAT features were adopted here to classify the different BP categories to optimize the arterial wave propagation theory and PPG morphological theory. The 10 PPG features are shown in Table 1. We can see that the features are mainly in the bd segment. The 10 PPG features and PAT feature were used to classify the different BP categories, which include normotension (46 subjects), prehypertension (41 subjects), and hypertension (34 subjects). Meanwhile, four different classifiers were trained and tested. Table 2 shows the classification performances of the different trials and feature sets. In general, the KNNs achieved the best classification performance compared to the other classifiers. Our findings were that the PPG features were more beneficial in classifying BP categories than the single PAT feature. Further, the combination of PAT feature and PPG features greatly improved the classification performance of using only PPG features.
A comparison was also carried out with our past research. Because of the difference in the PPG datasets of this study (MIMIC database) and the past one (collected by a designed device [26]), the 10 PPG features were also used to classify the BP categories to compare with the past research, and, further, an optimization using the PAT feature and 10 PPG features was compared. To our knowledge, no study has previously investigated this research question on the same database.
Discussion
PPG signal is affected by heart activity, vascular wall function, and peripheral arterial status [27]. Therefore, it is a very complex physiological signal with abundant information [28,29]. The morphological information of PPG signals plays an important role in the analysis of cardiovascular activity. In past research, many PPG morphological features [29,30] have been proposed, including the Crest Time, Delta T, Augmentation Index, Large Artery Stiffness Index [31], PPG intensity ratio [32], etc. Some novel features showed excellent performance in BP prediction or hypertension management. However, most of the research was conducted based on a small quantity of healthy participants [33]. A more comprehensive and systematic study needs to be implemented to improve and validate the arterial wave propagation and PPG morphological theories.
Several issues have been studied in our past research, such as optimal SQI [34], optimal filter for PPG signal [18], detection of PPG morphological characteristics [35][36][37][38], generating diagnostic PPG features for abnormality evaluation [25], compressing PPG signals [39], and so on. To continue in our previous research direction, we aimed in this study to: (1) identify special signatures in both PAT feature and PPG features for hypertensive and prehypertensive subjects and to differentiate them from normotensive subjects; and (2) use such features to monitor management of BP level and to check treatment compliance using the MIMIC database.
PAT and PPG features reflect different physiological information: PAT can indicate the transmission of the arterial wave in the blood vessel, while PPG features can indicate the status change of vascular tissue and blood volume. Therefore, three experimental analyses were implemented to determine the feature differences in the different BP level classifications (normotension versus prehypertension, normotension versus hypertension, and normotension plus prehypertension versus prehypertension). Based on our past research, 10 PPG features were used in this study for these experimental classifications. Table 2 shows the 10 PPG features that were evaluated in our past research. To determine the characteristics of features to classify, four different type classifiers were adopted: the AdaBoost Tree, Logistic Regression, K-Nearest Neighbors (KNN), and Bagged Tree. The KNN classifier showed the best performance compared with the other models.
PAT has some limitations as it cannot classify these three categories of blood pressure levels; PPG features showed better performance in classifying hypertension from normotension than the other experiments. Furthermore, the feature set of PAT feature and 10 PPG features obviously improved the classification performance for all three experiments. This indicates that the combination of arterial wave propagation theory and PPG morphological theory can be beneficial in modeling and quantizing the BP formation, which is comprehensive and complex. Various influencing factors work together to determine and affect blood pressure, such as a heart s cyclical activity, vasomotion, total blood volume, cardiac output, vascular elasticity, peripheral resistance, and so on. Therefore, the blood pressure level is the physiological response of the cardiovascular system, and cardiac function, total blood volume, and vascular elasticity play decisive roles in the formation of blood pressure. Hence, it is feasible to use arterial wave propagation theory to explain blood transmission and to use PPG morphological theory to explain the changes of vascular aging, stiffness, and compliance that generally occur at different BP levels.
In our past research, the PPG signal was collected as 1000 Hz sample frequency and 12 bits ADC, and the blood pressure was collected by a commercial BP device: the Omron 7201 BP device [26].
Comparing the results of this study to the past study, we saw that using the PPG feature set scored similar results but was lower in accuracy than the past research. The MIMIC database used in this study contains a wealth of physiological and pathological information and waveform records to study and explore physiological models and algorithms. However, more attention should be paid to this database. MIMIC data were collected from ICU wards, which means that many of the participants may have received medication or other medical treatment that may lead to BP abnormalities. In addition, it is very likely that the age of most of the participants is generally high. As we know, PPG signal is a complex physiological signal; therefore, the low quality of raw PPG signals makes it challenging to extract physiological characteristics correctly.
The accurate identification of feature points is very important, especially based on the PPG morphology method, and the PPG signal quality is the key. Because the sampling frequency is only 125 Hz in the MIMIC database, this could lead to the identification error of each characteristic point. Therefore, this actually limits the database from being extended to blood pressure research, especially based on PPG morphology, to achieve the dynamic monitoring of blood pressure. Moreover, many recordings have ECG, ABP (invasive, from one of the radial arteries), and PPG (named "PLETH") in the MIMIC database. However, collecting satisfactory recordings with ECG, ABP, and PPG simultaneously [33] is not easy for many reasons, such as various heart diseases and abnormal or missing signals.
In addition, the ABP signal is a continuous invasive blood pressure signal collected using a catheter. Thus, there is a little difference between the dataset in this study and our past research, which collected the blood pressure using an Omron 7201 cuff BP device [26]. Even so, the result of this study is similar to but just a bit lower than the past. This indicates that it is feasible to use the PPG morphological features to manage BP levels. Fortunately, the feature set with PAT feature and PPG features significantly improved the BP classification performance. This emphasizes the importance of arterial wave propagation theory in BP formation.
Note, it is assumed that the linear relationship between BP and PAT calculated from the MIMIC database are inconsistent from subject to subject. If all signals were synchronized, perhaps the correlation would be more salient. However, there is an overall trend of correlation between BP and PAT in the recordings used from the MIMIC database.
The proposed method could play a significant role in the early detection of hypertension in lowand middle-income countries (LMICs). Note that an estimated 1.04 billion people had hypertension in LMICs in 2010 [40]. Having a non-invasive method that relies on ECG and PPG signals, which follows the framework recommended in Ref. [41] for tackling noncommunicable diseases by achieving simplicity and reliability, may decrease morbidity and mortality rates, especially for those living in LMICs.
Conclusions
PPG morphological features were shown to achieve better classification performance than PAT using the MIMIC database. PPG signals contain sufficient physiological information about the activity of the heart and arteries. Although they are easily affected by many factors, the 10 evaluated PPG features achieved an acceptable classification performance. This indicates that the PPG signal, which is the status response of the heart and arteries, varies according to the BP levels, such as normotension, prehypertension, and hypertension. Interestingly, adding the PAT feature to the PPG feature set improved the overall classification performance, even though not all ECG and PPG signals in the MIMIC database were synchronized. Our results show that the PAT feature and PPG features have great potential to manage BP levels.
Conflicts of Interest:
The authors declare no conflict of interest. | 2018-09-16T06:23:00.033Z | 2018-09-01T00:00:00.000 | {
"year": 2018,
"sha1": "b8215a2055e8fe088ce15e21f422a4142c77200d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4418/8/3/65/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b8215a2055e8fe088ce15e21f422a4142c77200d",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
147767008 | pes2o/s2orc | v3-fos-license | Place Identity : How Tourism Changes Our Destination
According to the social identity theory, once people have categorized themselves and others into different group, they will contrast themselves and others, and their thinking and behaviors will become bounded up with in-group membership. There will be an emotional significance to our identification with a group, when outsiders come into a destination, indigenes will find the differences between the outsiders and themselves, then divide them into different groups that can reinforce the identification about their group even awake and strengthen place identity. Based on social identity theory and the comparative case study of Lijiang (a world culture heritage in China) and Palma (a tourist island in Spain), this essay is going to explain how tourism awakes place identity and affects identity boundary which causes a series phenomena that happened in our daily life no matter where we are, such as culture recover, maintaining the link with space, in-group favoritism, out-group bias and conflicts.
Introduction
Over decades, many researches argued that culture crisis is caused by tourism, most of them focus on the cultural commoditization and cultural authenticity, but there are few researches noticed the impacts on place identity and how the link between identity and destination social change, especially in China.In this article, I try to show how tourism effects on identity and the interaction between identity and society development processing and prove it is so common that even in the different end of the world also present the same situation.Through the power of capital, and the relationship between people and space, there are series consequences.In the first and second parts of this article, I will explain identity theory which is the basement theory of this article, through that way we can know why identity is so important for our thinking and behavior, and how it affects our daily life.In the third part, I will show a series evidences that how identity affects social changes by original case study, then I will get the conclusion of this essay that tourism can awake and strengthen place identity and affect social identity boundary.
Self-Identity
People confirm themselves by selecting and enacting identities (Heise & MacKinnon, 2010).Self-identity is that you can identify who you are and distinguish both yourself and others, you can know that you are separate and different from others.It is the method that helps us to realize the world and ourselves.Self-identity makes everyone becoming the unique one, defining them through their demographics (e.g., age, gender), unique characteristics (e.g., intelligent, interesting), values and beliefs, role identities (e.g., father, manager), and combination with the different experiences (private and public) (Heise & MacKinnon, 2010) and memories.People's behaviors in interaction with others in social set-tings are governed by their conception of themselves, so self-identity is so important that helps us to set the standards which bounds our behaviors and as a gyroscope keeps behaviors consistent (Jetten, Spears, & Manstead, 2001).
growth (Inguglia & Musso, 2013).People classify themselves as an in-group member to establish the self-identity and social identity and also to allow individuals to recognize other humans by type, in this way, people can get power and support from their group, get self-confident though being a member of a nice group and draw on mental constructs that set expectations and guide behavior as they navigate their social interactions (Cuhadar & Dayton, 2011).There is a normal example, if someone comes from a beautiful city, every time others praise the city, the one will be proud of that.Besides that, individuals classify others considering their similarities and differences with themselves.Through categorization, the similarities between self and in-group members are accentuated whereas the differences between self and out-group members are exaggerated (Hogg & Abrams, 1988;Tajfel, 1982).That is, individuals create a perception that they are identical to other members of the same category and behave accordingly with the category membership.
Individuals prefer to define themselves through their immersion in relationships with others and with a distinct group and find out their self-evaluation from this in-group identity (J.Breckler & G. Greenwald, 1986).Group identity will provide a sense of belonging and a sense of distinctiveness for the in-group members (Brewer, 1991).Connectedness and belonging entail fundamental differences, in this way the self is construed.In social identity process, members of a group come to internalize group membership to their self-concepts and evaluate themselves and others from the view of their membership in specific groups (Tajfel & Turner, 1986).Furthermore, instead of being too personalized or too inclusive, individuals prefer to define themselves in terms of distinctive category memberships, Social identity theory suggests that, strong identification with an in-group make people's behaviors are bound strongly with the in-group membership (Tajfel & Turner, 1979).
Since most of time, minorities feel more insecure and threat than majority, one possible way to cope with threat is having a strong orientation toward the in-group and that also will drive them to discriminate the out-group.Due to this reason, minority groups usually try to compensate feelings of insecurity by strengthening their positive social identity through discriminating against the majority (Simon, Aufderheide, & Kampmeier, 2008) and produce a negative feelings towards out-groups.As the research of Crocker and his colleagues, perception of discrimination based on one's group membership may make the individual identify with in-group more strongly and may increase the rejection of out-group members (Crocker, Voelkl, Testa, & Major, 1991).Correspondingly, in-group identification of minority group members was found to be stronger than majority group members.That is to say, the relative social position of the in-group determines people's level of identification with their groups; the lower the status of the group, the stronger the connection.
Group identity has been shown to be a central concept in understanding phenomena in social psychology, sociology, anthropology and political science (Chen & Li, 2009).Individuals compare their group with other (out) groups in order to evaluate their position and to achieve a positive and distinct identity (Tajfel, 1982).Through these comparisons individuals realize their group's value and relative status.
Intergroup Bias
The concept of "intergroup bias" is defined as an individual tend to favor or evaluate his/her own group more positively than other groups (Tajfel, 1982).Base on categorization, people can favor their group and/or discriminate the out-group.Tajfel and his colleagues (Tajfel, Billig, Bundy, & Flament, 1971) claimed that categorization may drive individuals to behave differentially towards different members (in-group and out-group) spontaneously even when they gain no benefit from this behavior.Furthermore, even exploration of intergroup similarities and in-group dissimilarities can't reduce in-group favoritism (Tajfel et al., 1971).It was clearly shown that even when there is no conflicts between different groups, people still display a kind of in-group favoritism.Through favor their in-group, people can achieve a positive group distinctiveness that will protect, enhance, and preserve the value of their group.It can be observed through discriminative behaviors toward out-group, through prejudiced attitudes, and stereotyped cognitions.Discriminating out-group is one of the ways to show intergroup bias (Akbaş, 2010).
Place Identity
Place identity is a kind of group identity.In other words, place can be seen as a group criterion, people who come from the same place is easily to form a group.Desires to preserve ecological or architectural characteristics of a place have a direct impact on the strength of place attachment felt by individuals, notably through self-pride and self-identity.People experience stronger attachments to places that they can identify with or otherwise feel proud to be a part of (Scannell & Gifford, 2010).The idea here is that by participating in local social activities, a person can develop a sense of belonging which is a feeling of the relationship between place and self, other in-group members and self.The feeling of belonging can be so strong to help them establish self-identity, and let them try to improve themselves to match the place.Once someone thinks himself or herself is linked with the place strongly, place can become a symbol of self (McCabe & Stokoe, 2004), he or she will try to protect the place benefit these behaviors can be caused by practical benefits (e.g., economic benefits) or mental benefits (e.g., honor).Since place can affect emotional preferences, usually in-group members have the same preferences (McCabe & Stokoe, 2004), so group members usually have the same attitude towards other groups, out-group bias is easy to happen.Besides that, when adverse consequences appear, in-group members always shirk the responsibility.
Social identity theory argues that one of the key determinants of group biases is the need to improve self-identity.The desire to view one's self positively is transferred onto the group, creating a tendency to view one's own group in a positive light, and outside groups in a negative light (Billig & Tajfel, 1973).That is, in-group members will find a reason, no matter how insignificant it is, to prove their own group is superior.That's the reason why conflicts between different groups occur easily.
In a role of in-group member, they can do the things that they will never do as an individual.Because of that, in-group identity is also important to social movement participants, political activists, and others banding together to fight for or against social change by working on shared goals and action plans.
Social identity theory is used in many areas but tourism, we rarely link identity with tourism.It is normal for us to ignore the truth that we are human being (Yifu, 2013).With place identity awaking up, some conflicts between different groups will be more and more serious.In previews researches, there are many cases that tourism caused social demonstration.It is not only a process by which a set of individuals interacts to create a shared sense of identity or group consciousness ("Identity", n.d.), also the reflection of the power of identity, which intensifies the conflicts between local residents, authority, and tourists.
Place identity refers to the group size, property, place, history, race and other factors ("The Research on APPlieation of CognitiveMaP-As the Spatial Cognition of xi' an Tourism Image for Example", 2003).It reflects the familiarity of individuals or groups for the place as well as insiders conscious (Xiang yang, Dong fang, Guo xing, & De ming, 2015), so most of time, place identity will easily happen when individuals who share the same history, the same custom or they are the same minority.So in this article, I choose the minorities group to be the object.As the vulnerable groups, minorities are more impressionable by the culture shocking.It is not easy for them to get much information about the outside, and they have not formed a stable value and culture system either, once the heterogeneous culture flood in, they may easily feel the cultural shock, their self-identity will be strengthen by the different values and tourists gazing.Besides that, the insecurity sense spurs them to join a group which can give them the shield (like a kind pride of in-group, sense of belonging) to resist out-side power and that reacts on self-identity.
It is interactive that individual have a strong place identity, they will find what they can get from the group relationship, and establish self-identity, and this emotion will reinforce the link between both themselves and the group.Since tourism activity related to many different subjects, and these subjects exchange their culture, view of the conmen things, their behavior norms, and in this process, groups are formed (Xiang et al., 2015) and the boundary of groups become more and more clear.
Place Identity Was Awaked and Strengthened
Self-identity is producing with our growth, we depends it to distinguish who we are, place identity also exist in our daily life, but most of time, we can't observe clearly.There is a common sense of place identity that can't exist as a separate object without out-group, it interact with heterogeneous comparison.Local residents' self-identity is not so strong at the beginning of tourism development, even they know who they are, the distinction between self and others is still not so obvious.In-group homogeneity is especially strong when no motivational forces exist to distinguish the self from others within the group (Brewer, 1993;Simon, Pantaleo, & Mummendey, 1995).According to the social identity theory, the consequence of self-categorization is that in-group members tend to emphasize the similarities between both self and other in-group members, and emphasize the differences between both self and out-group members.This accentuation occurs for all aspects, for instance behavioral norms, speech styles, attitudes for common things, beliefs and values, and other properties that are considered to be related to in-group categorization (Stets & Burke, 2000).When there is few tourist, the indigenous are very similar, same language, same customers, same life style, face the same problems, so they can't realize the difference with others either get a strong in-group feeling,in other word, it means in the beginning of tourism development (or before tourism came), indigenes live in similar groups, don't have a clear self-categorization.With tourists flowing, the motivational forces appear.
Besides the obvious appearance difference, the word tourists sometimes is a meaning or a symbol of difference, it means the word self has represented a classification, it reflects the power of different groups (e.g., In Seychelles tourists equid rich).This kind of attribute indicates tourism can promote groups formation.Thus a series phenomenon arrives in turn.Since the history invasion of England, Welsh was marginalized in a long term, through tourism blossomed, the distinguished character was portrayed, the image of Welsh was spreaded out, in other words, tourism offered a chance to the place identity to rebuild and strengthen (Pitchford, 1995).Through the case study of Rollin' Down the River Festival (America), Bres and Davis found that tourism did lead to a positive self-identity for the local group (De Bres & Davis, 2001) and in some cases, tourism even be the root cause of place identity generation (Smith & Smith, 1989).Tourism acts as a catalyst for the re-interpretation of identity among members of the local group who depend on or are otherwise affected by the industry (Jamison, 1999).The research of Cajun of America present tourism turned around the place identity, they used its waning livelihoods of fishing and hunting patterns, unique forms of music and freedom spirit has successfully rebuilt its place identity (Esman, 1984).This is an interesting phenomenon, through it we can find how the tourism, the capital impact destination society.
Method
To find out the link between tourism and place identity and the consequence of place identity awaking, I had interviewed 35 responses randomly in Lijiang, China from 4. Oct. to 16. Oct. in 2015 and interviewed 3 persons in Mallorca, Spain, from June to August in 2015.All of the responses are indigenes.In Lijiang, there are 24 women and 11 men, the age distribution from 17 to 83 years old, 63% are in 30-45 years old group.Through this research, there is no clear difference between different genders.But the age can affect their attitude, older will hold a stranger negative attitude towards out-groups than younger.And in Mallorca, these persons are my friends, because of language barrier, I didn't interview many people, it is a long term to get the original material.
I choose Lijiang, this little town because it was a remote needy and uninformed town even in ten years ago, but now it is a famous tourism city in China.And there is only NAXI (Note 1) living in Lijiang, it is easier to discuss the tourism impacts.According to the identity theory, the minority is easier to be influence by outsides.Of course, the another reason is I had been there in 3 years ago, and it changed too much than I obtained from books or other media, I really want to know why and how it converted from an idyll to a commercial city.
And I choose Mallorca, because I had been to Mallorca more than 8 months, and I found the phenomenon in China also exist on the other side of the world.I think it will be more interesting and persuasive if I put them together.I want to show there is something that is common of whole human society.
Place Identity Awaking
In Lijiang case, capital becomes the biggest motivation to awake local residents place identity.Lijiang is an ancient town which has a history of over 800 years, before 1996, its' name only spread in adventurers cycle.Tourism blossom came from 1996 because of a large earthquake, this earthquake was widely reported by domestic and international media.Lijiang became famous not only the earthquake, but also the special building structure, there were no building collapsed in the center of the old town even it was a 7 magnitude earthquake, thus, all over the world shifted their attention to the little ancient town (Xiao, 2013).Then the old town was scribed on World Culture Heritage list by UNESCO in 1997, this also promoted tourism development.Tourism even converts the remote, agrarian old town to a famous international city, and this industry also became the only pillar industry of Lijiang.In 1995, GDP of the little town was only about 30 million, and this number soared to more than 35 billion in 2014 ("Lijiang 2014 Statistical Report on National Economy and Social Development", n.d.).Since the tourism development, more and more outsiders carrying capital come to Lijiang to find new opportunities.As the respondents told me, because rent fee is pretty high for the backward town, now, almost the houses in Lijiang old town were rented out to the outsiders for business.It also means almost the local residents had to move out to the new city and make room for economy.Capital changed the city morphology, and sent a signal that all the outsiders are rich, smart, and can management a successful business that the local residents can't do.In other words, indigenes found a new world and new life style under the influence of capital; they focus on the differences between themselves and the outsiders, group formed.Indigenes depicted outsider image then strength it in their daily life, it is a kind of out-group bias.
Another motivation pushes place identity awaking is eyes of others.As has increasingly been emphasized in symbolic interactionist theory, people are motivated to verify self-identity in the eyes of others (Jetten et al., 2001).When tourists flow into a marginal destination, they take their eyes which represent power to intrude the local culture.Under the power, something must be changed.Through the eyes of outsiders (such as tourists and "rich" shop owners) with the different appearance, language and culture, which represented the particular and power, in the comparison self-identity of indigenes is strengthened.When we feel the sense of ourselves, the next step is to choose a group to join in.As human beings, we inherited the innate feature genome Need to belong from our ancient ancestors (Marilynn & Gardner, 1996).We don't like uncertain things which are almost accompanied by dangers and "Need to belong" is exactly a fundamental to avoid uncertain situation so that we can protect ourselves from the dangerous world.
Maintain the Link with Land
There is an interesting thing when I interviewed indigenes in Lijiang.All the time they answered my questions, the subject always was We Naxi or Our Lijiang, solo We or Lijiang never show in their words.In other words, they always linked themselves with the minority or the land, that's a kind of place attachment (Manzo, 2006).Correspondingly, indigenes also have a very clear diacritical word to distinguish They Outsider Businessman and You Tourists.This situation also happened in the other side of the word, Palma, which is the capital of Mallorca island in Spain.Mallorca has become a mass tourism destination (Picornell, 2014), and now, half century passed, Palma still contain a strong appealing for tourists, but their self-identity and place identity don't change too much.When I ask "are you Spanish?".
They always told me "Mallorquin" (Note 2), but not Spanish.The word is composed of Mallorca which clearly point out the land link.It means they have the strong sense of belonging to this Mallorca Group.The relationship between the residents and place is so important because their daily life and work are attached to a place, the relation with place is based on experience and activity, and it is from this perspective that their reads the situation, claiming that any other local would share the same understanding.
It is the character of place identity, place becomes a symbol of self (McCabe & Stokoe, 2004), it is more obvious in minorities areas or economy backward places, place identity strength the self-identity, so when they talk about themselves, they always try to emphasize the link with place.
To prove place identity is linked with tourism development process and it can present a different sort in different stage, I compared another old town Baisha which is closed to Lijiang and also inhabited by Naxi, but haven't highly developed, I find indigenes live here don't have a strong feeling of their hometown as people of Lijiang Old Town.It is not so often to use "We Naxi" or this kind of words.They don't praise it neither complain it, but most of them want to the government develop their town quickly so that they can earn more money and live a better life.Conversely, all of the original residents in Lijiang Old Town have a strong feeling of it and like it used to be, always told me how clear the water in the old town, how beautiful of the architectures, and complained the dirty and noisy now.Almost of them described a peaceful and beautiful idyll to me, they are smiling while they are talking about this memory, and bitter face while they are talking about present situation or another groups, Their attitudes are so similar.It is easy to find they are so proud of their own old town and have a hostility attitude towards other groups.
Culture Recover
Previous researches have proved that tourism promote the revival of local culture.It can be seemed as a result of being more in-group.Actually it is a kind of response to tourism intrusion.Tourism is a kind of cultural consumption, the commercialization of tourism will inevitably exist.Commercialization has been unquestionable impact on the local culture, through economic measures, tourism force people to improve their states of knowledge, which enhance the pride of self, and it is the essential part of self-identity.
For instance, through being the guards for tourists, Nepalese Sherpas got the financial rewards and established their own confidence and to reconstitute traditional productive relations in their new economy (Adams, 1992).For the capital, Mayas brushed up their disappeared traditional knowledge which is applied to tourism activities (Medina, 2003).Maori had been assimilated since the Christianity spread in New Zealand and lose their traditional culture, but when they find it was the vital identity attraction for tourists as the original and ethnical inhabitants in New Zealand, they even won the game with local authority (Graburn, 2009).From these researches, we can see no matter what the original intention is, the fact is tourism revived the destination culture and stimulated local economy recover even helped indigenes wining on politics.It protects indigenes benefits.In the comparison with heterogeneous cultures, local residents rediscovered the importance of their own culture and special identity.
We came to Lijiang to do the field research in 2015, and interviewed about 35 persons.In the case study of mine, Dongba culture (a typical culture of Naxi people who are the only minority in Lijiang Old Town) is also revived with local tourism development.
"We have Dongba culture, the most famous one is Dongba Script, and the paper of Dongba.But we can't recognize Dongba Script, it' s only used by Dongba, a special religious group in our minority." Destination wants to attract tourism, they try to excavate the local culture and hope excessively propagandizes itself, under the authority power, advertisements, books, videos, guidebooks about local history and cultural customs become more and deeper inside, it led more people (e.g., local residents, tourists) can get deeply involved in local knowledge and indigenous can feel the recognition of outside thus strengthen the place identity, they are proud of the in-group member, and readily admit the place identity of themselves.
With tourism development, more and more tourists flooded into a little town, they take photos, gaze the local people's life, produce meanings, and invest in many different ways.As I say, minorities are more easily influenced by the tourism development processing, so when they see these things that they have never see, it gives them a chance to find the value of themselves and the original place.So it's not difficult to understand that place identity awakening can promote the indigenes and local government to realize the importance of the local culture and resources and take some action to protect them, it is an action to protect in-group benefits.But from our research, indigenes don't really know how and why the Dongba culture is so import for them, they know that only because government and dealers disseminate the concepts.They know it is useful for the tourism industry.Actually, they don't use Dongba Script either the paper of Dongba in their normal life, they even don't think their children should learn the local language.
"There are too many language in the world, there is no time for him to acquire local language which can't be used in the outside world."--response 3
In-Group Favoritism and Out-Group Discrimination: The Conflicts Prelude
Obviously, tourism plays an important role in the processing of Lijiang.In 1997, this town was inscribed on the World Heritage list, but it was criticized by UNESCO because of the commercialization in 2003 and 2007.And the attitudes of outside world towards Lijiang also changed a lot, criticism become more and more aloud.And the conflicts of different groups in Lijiang become more and more serious.It is an interesting phenomenon for me and I really want to know why it can turn to this situation and how indigenes think about outside criticism.
"how do you think about the criticism towards Lijiang?"
"All the adverse aspects are totally caused by tourists and the shop owners, they ruined our environment, and produce a bad destination image, and it destroy the beautiful image of our Lijiang.We, Naxi people, are loyalty in love, never do something unchaste.You know, many years ago, there were not so many tourists or shops, we, Naxi, live in the old town, life is peaceful.Mountain cover with snow, the stream is drinkable, the grand is clean, and there were no thieves or frauds.But now, you can see, cutpurses rage, water dirty, snow disappeared, all of this caused by you, outsiders, and they, shop owners."--respondense 8 Indigenes think they have good tradition and custom, they have a fabulous environment, they love their homeland and always protect it, so when there is someone tend to criticize their hometown or the local people, they respond a firmer stand.
When people distinguish different groups, they also form a different attitude towards these groups (Pruitt & Kim, 2013).Humans have an innate tendency to favor their own group over others, proclaiming how "each group nourishes its own pride and vanity, boasts itself superior, exists in its own divinities, and looks with contempt on outsiders" (Sumner, 1906).In-group members usually have the same attitude towards outsiders.With the processing of tourism, indigenes tend to protect in-group members and keep distance away from out-group members to hold their in-group benefits.It becomes an invisible barrier to block the communication with different groups, so when the preferences is formed, it will be fortified in the blocked box ( Pruitt & Kim, 2013) and finally become a kind of Out-group derogation and the fuse of different groups conflicts.
Most of time, place identity awakening is with the capitalization and land gentrification, it makes many indigenous people lose their homeland, so place identity awakening can cause many conflicts between different groups especially with local residents.There is an interesting example in Franquesa's research, a dilemma happened between Mique who is a baker in the old center of Palma, and Ingrid, a Swedish woman who bought a house near Miquel's bakery, the conflict between both of them only because the chimney smoking of the bakery in the morning (Franquesa, 2011).In this case, we can see with the development of tourism, original residents will realize that they have the sovereign of this land.It is not only the conflict between two men, it is the conflict between the local resident and new neighborhood, it is about two groups.Local group think they have the right of control of the space, it is a strong attribution.Since the tourism development, they are losing the right.As an in-group member, he or she will want to protect the benefit that they ever had, but individual power is faint, the original residents can do nothing but move away from the land to avoid to be evicted when they face the powerful capital and authority.And also for the power distance, the animosity of in-group members will be strengthened, conflicts is unavoidable.
In Lijiang, the situation is the same.The implication is that the observed group interested relationships should be understood as having been aggravated by tourism.The main conflict is not between tourists and indigenes, as some kind of floating population, tourists come and go, so it' s not so easy to have a sustained conflicts.And indigenes know tourists means revenue exactly.But tourism not only involves tourists, it also brought other outsiders who want to take up a business in destination.In Lijiang, the conflict is in connection with shop owners and indigenes.
"How do you think the outsiders who come here for business?""They are too crafty, they rented our house in a low price, and earn great money with it, they cheated us.And they ruined our Lijiang, but we can do nothing with it."
--response15
Though this words, we can see the main point of contention is land leasing.Indigenes think the rent fee is low and unfair, but when I interviewed the businessman, they told me, "The rent fee is pretty high, indigenes are so cunning and defy the low, willful inflation of prices even we have the contract.It is too hard to management business here, maybe we can't do that anymore.We help them to build this place, without us, here would still be a backward place."
--response21
"How do you think about Lijiang?Do you like it now?""Of course, it's my home, I love it, and all the Lijiang people must love it.But as you see, now here is too aloud, too many shops whose the owners are all outsiders, they destroy our life space, grab the benefits of tourism, they are too crafty, we Naxi, don't like them… About 7 or 8 years ago, our Lijiang is pretty peaceful, but now, all of the houses rented out, all the local people moved away.I lost all my old friends, so I don't have choice, I have to move out.I can't recognize it now, when I walk in the old town, I feel like I'm an outsider walk in a strange place, it's not my hometown, I don't know what it is."
--respondent 12
The respondent 12 is a manager of a local pharmacy, when she said that words, she was so upset and angry.Form her words, we can find something.First, she has a strong feeling of belonging to Lijiang.Second she doesn't like shop owners who came from the outside world, and thinks they grab the benefits of tourism.Third, she feels helpless and indignant when she faces the situation that all her friends move away.
All my respondents, there is no one showing a sense of responsibility of this situation, they never think many years ago, they rented out the houses by their own willing, so they should also rest the responsibility for the consequence.Both indigenes and outsiders think innocent of themselves and claim all the faults caused by other groups.
Tourist contact may result in unforeseen outcomes (Jamison, 1999), in this case, we can see how the tourism impacts place identity and changes the local society.Besides the local landscape, it also changes the local social network.Place identity is closely tied to the land and local network or we can say in-group membership.Place becomes an object detached from them, transforming place into a landscape or a spectacle to be compared, evaluated, and possessed, but not "dwelt within" (Franquesa, 2011).This is why the outsiders come from outside world never think Lijiang is their home no matter how long they live here, only a place to conduct a business.But for indigenes, here is their homeland, is the place they growing, living, and carrying memories.This is also the discrepancy between local groups and outsiders groups.
Identity Boundary Extending
Pedro, a tour guide from the main land of Spain and living Palma more than 25 years told me "when I move to Palma, they told me Mallorquin is the man whose family were born and raised for at least four generations.But now, my children, they think they are Mallorquin definitely, because they born here.But I still don't know my identity, and sometimes I feel confused." Though this example, we can see two interesting phenomena.First, on the aspect of original residents, we can find the place identity boundary is extending, without the extending, Pedro' s children should not claim they are Mallorquin, because it need the social confirm, so it means local residents also have the same opinion.We can find Mallorquin is not limited in born here and raised for four generations, any one born here or even get married here will be think as a Mallorquin.Obviously tourism is also the important reason in this situation.In 2014 the Balearic Islands received 11,363,645 foreign tourists (Instituto de Estudios Turísticos, n.d.-b).In summer, it can be the most popular destination even beyond Cataluna (Instituto de Estudios Turísticos, n.d.-a).With the large number of tourists, there are many romantic stories in this island, many outsiders (Note3) (or the one not Mallorquin) get married with the local residents.When we talk space, we must focus attention on relations, people' s life and experience is rooted in the space, in their culture, in their history (Cohen, Butler, & Hinch, 1996), the special relations will link people and the space, but I must point out although there is the special relation, it still needs another elements, the time and memory to work.There is the reason why Pedro can't make sure his place identity even though he lives here more than 25 years.There is a gap in his memory within the land.This is the second I want to say, tourism can extend identity boundary through culture communication, but group barrier still exist, once someone have the allegiance to a group, his or her thinking and behavior still be bounded up with this group.
Conclusion
The tourism development pattern of old town in China is hyping destination, attracting investment funds through policy measures, for instance, sale or rent out the ancient house in a low price or promise a tax break.This article reveals how tourism changes impacts indigenes life style and concepts then impacts the destination social development processing.And find a paradoxical phenomenon, indigenes want to get more money from tourism and live a better life, but they don't like the business men who truly bring money and promote local economy.Most of time, indigenes think themselves are innocent even only the victim of tourism, they never think they rent out the house totally by their own willing and should be response for the consequences.
First, tourism emphasizes the difference between outsiders and indigenes, forces them to classify themselves and tourists, under the power of the intereaction of identity, indigenes are forced to be a whole one.
Second, Capital poured in, a mass of tourists flowed in, and foreign businessmen came in, all of this promotes the group forming.Indigenes values and life styles were changed by the coaction of I mentioned above.In the eyes of heterogeneous groups, place identity has been awaked and strengthened.They get a strong sense of belonging, be proud of their own group, indigenes find the value of local culture and generate a sense of belonging with the building and the homeland, self-identity was strengthened.Out-group derogation was produced, indigenes strength the difference between themselves and others, and think all the faults were caused by others (shop owners who come from outside, tourists).
Third, once the group has been formed, in-group members will tend to protect in-group benefits and have the out-group derogation that led to the conflicts between indigenes and outsiders.Here, the conflicts are not physical, but people are the bedrock of a place, this kind of favoritism or bias are also very important for the future of destination.When there are groups, group barrier is existent.When group barrier is hard, conflicts are more easily caused.When there are groups, group barrier is existent.When group barrier is hard, conflicts are more easily caused.
Besides, tourism promotes the combination of different cultures, so it can extend the boundary of social identity.Tourism is like a spark to fire the kindle (the identity and capital), and then the steppe is fired.Everything is linked and can't escape from the processing. | 2018-12-27T05:38:16.358Z | 2016-04-15T00:00:00.000 | {
"year": 2016,
"sha1": "36997199ec39294a9c179c797faec85f1513d2b3",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5539/ijps.v8n2p76",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "36997199ec39294a9c179c797faec85f1513d2b3",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
267643341 | pes2o/s2orc | v3-fos-license | Multi-type concept drift detection under a dual-layer variable sliding window in frequent pattern mining with cloud computing
The detection of different types of concept drift has wide applications in the fields of cloud computing and security information detection. Concept drift detection can indeed assist in promptly identifying instances where model performance deteriorates or when there are changes in data distribution. This paper focuses on the problem of concept drift detection in order to conduct frequent pattern mining. To address the limitation of fixed sliding windows in adapting to evolving data streams, we propose a variable sliding window frequent pattern mining algorithm, which dynamically adjusts the window size to adapt to new concept drifts and detect them in a timely manner. Furthermore, considering the challenge of existing concept drift detection algorithms that struggle to adapt to different types of drifting data simultaneously, we introduce an additional dual-layer embedded variable sliding window. This approach helps differentiate types of concept drift and incorporates a decay model for drift adaptation. The proposed algorithm can effectively detect different types of concept drift in data streams, perform targeted drift adaptation, and exhibit efficiency in terms of time complexity and memory consumption. Additionally, the algorithm maintains stable performance, avoiding abrupt changes due to window size variations and ensuring overall robustness.
Introduction
Multi-type concept drift (CD) detection is widely applied in cloud and security applications [1].The mining of CD in frequent patterns (FPs) with latent risks, allows the monitoring of changes in potential-risk patterns and related data models in cloud environments [2][3][4].It is used in cloud computing [5], security information detection [6][7][8][9], blockchain-based [10], healthcare data prediction [11], intelligent data processing in IoT [12][13][14][15], recommendation systems [16,17], and fault container instance seq finding [18].CD mining reduces data leakage risk due to sudden data mutations, and improves privacy, and can be used for the timely identification of model performance deterioration or data distribution changes so that adjustments and optimizations can be applied.The aim of the present study is to address CD in hidden risk patterns and provide decision support for future research.Challenges in this field include data streams' high velocity and variability, which cause issues with computational resource availability.For example, data streams may be stored in a temporal DB in a highly trusted and low-latency industrial network, but there is a risk of data tampering during transmission52, 53.Concept drift [19] is also a prominent issue in decision-making within the e-commerce data stream domain [20].It refers to the phenomenon where the underlying concepts or relationships in the data change over time.In e-commerce data streams [21], CD can occur due to evolving user preferences, market trends, or external factors.To mitigate the impact of CD on decision-making, it is crucial to develop mining models that can effectively adapt to these changes in e-commerce data streams [22].These models should be able to detect and adjust to shifts in data patterns, ensuring that the decision-making process remains accurate and relevant over time.
Riverola et al. demonstrated an improvement to a successful e-mail filtering model to track spam domain CD [23], while Gulla et al. discussed a new approach to detect semantic drift using concept signatures [24].A new Bayesian framework [25] in data stream pattern recognition was presented for feature selection.Ruano-Ordás et al. presented a detailed study of CD in the e-mail domain considering types of CD and message classes (spam and ham) [26].Ding et al. studied entropy-based time domain feature extraction for online CD detection [27].Rabiu et al. aimed to provide a literature review of models to guide researchers and practitioners [20].Two variants of the model [28] have been proposed to minimize negative transfers in high-volume model transfer frameworks.CD-tolerant transfer learning, which adapts the target model and source domain knowledge to changing environments, has not been well explored.A hybrid ensemble approach dealt with this problem when target domain data were generated chunk by chunk from non-stationary environments [21].In 2023, Liu et al. introduced two new CD handling methods, namely error contribution weighting and gradient descent weighting [22], which are based on the principle of continuous adaptive weighting and aim to improve detection and handling of CD, adapting to changes in data streams in constantly evolving environments.There are also other classical active detection algorithms, such as the drift detection method [29] and the early drift detection method [30].
The above mentioned studies were predominantly focused on the detection of CD in data streams.However, there is indeed a research gap when it comes to detecting and identifying multiple types of CD during FP mining processes in complex data streams.To address the challenges encountered in mining FPs on evolving data streams [31][32][33], we propose the dual-layer variable sliding window concept drift detection (DLVSW-CDTD) algorithm, which utilizes a dual-layer variable sliding window whose size is determined dynamically based on the occurrence of multi-type CD.The core idea of DLVSW-CDTD is to adjust the window size based on the stability of the data stream and the utilization of a duallayer sliding window.When the data stream is in a stable state without CD, the window size continuously increases to capture FPs over longer periods.This is done to fully utilize longer periods of data for mining FPs and avoid prematurely discarding useful information [34].
CD indicates significant changes in data distribution or feature relationships, which may render the previously mined FPs inaccurate or less useful.To adapt to such changes, our algorithm automatically adjusts the window size to capture the new FPs that emerge due to CD [35].The VSW-CDD algorithm achieves dynamic window adjustment by considering the evolution of the data stream and the changes in FPs [36].This approach effectively adapts to changes in the data stream and maintains high accuracy and efficiency in the process of FP mining [37].
Dynamic changes in data and CD is an ongoing problem in cloud computing environments.CD is a change in the statistical characteristics of data over time, which can be caused by various factors such as user behavior, system configuration, application updates, etc. [38].Such changes may have a significant impact on data analysis and decision making, therefore, detecting and responding to CD is an important task in data analysis and processing [39].
FP mining is a method of finding frequently occurring patterns or correlations in big data, and it has a wide range of applications in many fields, such as market analysis, social network analysis, and anomaly detection [40].However, in the process of FP mining, CD may negatively affect the mining results, so it is necessary to detect and deal with CD [41].
The double-layer variable sliding window strategy is an effective method for detecting CD by considering both local and global statistical properties [42].However, existing dual-layer variable sliding window strategies have some problems in dealing with multiple types of CDs, such as not being able to effectively deal with CDs at different levels of granularity, or to accurately recognize multiple types of CDs that occur simultaneously [43].
Aiming at the above problems, in this paper a multitype CD detection method is investigated, which can effectively deal with CD in FP mining under a dual-layer variable sliding window strategy [44].The main contribution of this paper is to propose a new multi-type CD detection method, which can consider both local and global statistical properties, can detect CD at different granularity levels, and can accurately identify multiple types of simultaneous CDs [45].The research results in this paper will help to improve the efficiency and accuracy of FP mining, which is of great theoretical significance and important for solving problems in practical applications [46].For example, in recommendation systems in cloud environments, changes in user behavior can be monitored in real time using the proposed multitype CD detection method, so as to adjust the recommendation strategy and improve the accuracy and user satisfaction with the recommendation system [47].In addition, the method can also be applied in the fields of anomaly detection, system monitoring and decision support in cloud environments [48].
The structure of the remaining paper is as follows: The second section provides an introduction to the relevant technical background knowledge.The third section presents the proposed algorithm, where the limitations and issues of fixed sliding window mining for data streams are first discussed, and then a new window dynamically adjusted based on the concept of data stream size is proposed.Subsequently, we propose the DLVSW-CDTD algorithms to effectively detect different types of CD during the data stream mining process.In the fourth section, extensive experiments are conducted using real and synthetic datasets obtained using the open-source data mining library SPMF [25].The results confirm the efficiency and feasibility of the algorithm.In the fifth part, the main work of this paper is summarized, and a brief overview and prospect of future research directions is given.
Overview of the data stream frequent pattern
In cloud computing environments, FP mining faces a series of challenges.First, the volume of the data streams is huge, and traditional FP mining methods cannot handle such a large amount of data effectively.Second, the data in the stream are dynamically changing, i.e., CD occurs, which requires real-time monitoring and updating of FPs.In addition, to ensure the cost-efficient utilization of cloud resources, efficient computation and storage management is necessary to meet the requirements of high concurrency and high throughput.
To address the above challenges, researchers have proposed many effective FP mining methods for data streams.These methods mainly include: sliding windowbased methods, tree-based methods, statistics-based methods, etc.These methods achieve efficient FP mining through the utilization of distributed computing and storage resources.For example, sliding window-based approaches detect FPs by sliding a window on the data stream and statistically learning the data inside the window; tree-based approaches discover FPs by constructing and traversing a tree structure; and statistics-based approaches discover FPs by building a statistical model that describes the distribution and changes in the data stream [40,41].Table 1 shows the symbol overview of FP.
Generally, a set of continuously-arriving data is defined as a data stream, expressed as 2. The data in this example contain 5 transactions.T i represents the sample arriving at time i.Each datum has a unique identity, denoted as a TID.
In the data stream, the FP P is defined as the number of samples containing P , denoted as freq(P) .The support of the FPs P is expressed as support(P) = freq(P)/n , where n is the number of samples included in the data stream.The concept of the FP is defined as follows.Given a data stream DS which contains n samples, we define a minimum support threshold θ , whose the value range is (0, 1] .If the mode P satisfies Eq. 1, it is called the FP.
Introduction of the window model Window model
In FP mining, a dual-layer variable sliding window strategy for multi-type CD detection is an effective way to deal with rapid changes in data streams.Window modeling is a data processing technique that captures changes (1) support(P) = freq(P)/n ≥ θ in data by sliding a window over the data stream.In CD detection, window modeling can facilitate the tracking of changes in data distribution.In multi-type CD detection, the window model needs to handle different types of data, which increases the complexity of processing.However, this complexity can be handled efficiently through the use of a two-layer variable sliding window strategy.Cloud computing is a computing model that allows users to access shared computing resources over the Internet.Cloud computing provides powerful computing power and storage space to handle large-scale data.In multi-type CD detection, cloud computing can provide the required computing resources and storage space to handle large-scale data streams.Cloud processing can be facilitated through data stream slicing.In multi-type CD detection, the combination of window modeling and cloud computing can provide an effective solution.First, the window model can capture changes in the data, while cloud computing can provide the required computational resources and storage space to handle these changes.Second, by using a dual-layer variable sliding window strategy, we can handle different types of data more efficiently.Finally, the distributed computing resources in the cloud can accelerate data processing and improve the efficiency of CD detection.The combination of window modeling and cloud computing plays an important role in multitype CD detection.By integrating these two techniques, we can effectively cope with rapid changes in the data stream and enhance the efficiency of CD detection.
There are three commonly used window models, namely the landmark, the sliding and the damped window models, with the sliding model being the most commonly used.The sliding window model is also divided into two types, namely the fixed-width sliding window, where the number of samples in the window is fixed, and the other is the variable sliding window, where the number of data in the window is variable.Data processing takes place in different ways, as shown in Figs. 1 and 2, respectively; the relevant symbols are shown in Table 3.
As shown in Fig. 1(a) and Fig. 1(b), a fixed-width sliding window can handle the latest data by directly removing the expired data.Figure 2(a) shows the state of the fixed sliding window when there is no new data input.When the latest data T new ′ enters the window, the new ′ − new bars between T new−N +1 and T new ′ −N +1 will be removed from the window.The details are shown in Fig. 1(b).In general, the length of the window N is not very large to avoid the CD of the data within the window.
Assuming that the size of a given window is N , as shown in Fig. 2 The fixed sliding window approach involves determining the window size based on prior or experience.This can be done by referring to previous studies or considering the characteristics of the dataset.Once the window size has been set, it remains constant throughout the mining process.On the other hand, variable sliding window algorithms can adjust the window size in different ways.For instance, Hui Chen et al. [42] proposed a time decay model to differentiate patterns in recent transactions from historical transactions, ensuring that the most recent information in the data stream is given prominence.They utilized the concept of FP change to dynamically determine the appropriate window size [43].
Attenuation model
Usually, data stream changes over time are unpredictable.In such cases, it is undesirable to treat all samples as equally important.In general, the latest data generated is more valuable than historical data.Therefore, the attenuation model is an effective method to deal with this kind of time-sensitive data streams.In essence, attenuation models involve the association of weights of historical data or modes with time, and with the passage of time, these weight coefficients change accordingly to emphasize the importance of recent data.
When setting up the attenuation factors, there are usually three types of settings.Random decay factors are stochastic, and may cause instability of the FPs obtained by the mining algorithm.Fixed values are usually based on previous related studies, and the quality of the effect depends on the knowledge of experts.Dynamically calculated are obtained by combining other parameter values in the algorithm design.After the experimental verification of some studies, this approach was selected for the proposed algorithm.In current mining studies of data streams, decay models are commonly used in combination with window models.
Conceptual drift processing method for cloud computing
In data stream mining, the arriving data may change over time due to the inherent temporal nature of the data stream [45].This phenomenon is generally known as CD.In cloud computing environments, the CD encountered in data preprocessing is mainly due to the diversity of data sources.To address these challenges, the following measures can be applied.In the data cleaning stage, by removing duplicate, invalid, or erroneous data, the data quality can be improved, which mitigates the impact of CD.In the feature selection stage, to counteract the impact of CD on features, representative and stable features can be selected to reduce CD's impact on the model.In the data labeling stage, diverse labeling methods and labelers should be used to enhance the accuracy and reliability of the data.
Currently, CD is generally classified based on the speed of concept change [48].As shown in Fig. 3(a), a solid circle marked with numbers is used to represent each paragraph of data, and the numbers represent the chronological order.It can be seen that the transition between Concept 1 and Concept 2 is fast, and the old Concept 1 is soon replaced by Concept 2 with a completely different data distribution.This type of drift is referred to as mutant CD.On the contrary, as shown in Fig. 3(b), the transition between Concept 1 and Concept 2 is slow; the former is replaced gradually, and the concepts are more or less similar before and after the drift, so this drift is referred to as gradual CD.
Among the many methods that deal with CD, the window-based CD monitoring method is one of the common methods.Larger windows are associated with higher performance accuracy, but they may also contain unnoticed CD, while smaller windows facilitate better detection of CD [19].For example, Husheng et al. [49] proposed a CD-type identification method based on multi-sliding windows.The method consisted of three stages; first, the drift position was detected during the first detection stage by sliding the base window forward.Then, during the growth stage, the growth of the accompanying window was used to detect the drift length and identify the drift categories based on the drift length.Finally, during the tracking stage, the drift subcategories are identified based on the different tracking flow ratio curves generated during the window tracking process.Therefore, this method is able to effectively identify the type of CD, accurately analyze the key information in the online learning process, and improve the efficiency and generalization performance of streaming data analysis and mining.
Therefore, most existing studies adopt a "circuitous" strategy to detect CD.It involves determining whether a data stream has experienced CD by considering the "possible cause of CD" and the "possible consequences after CD" [50].For instance, Lu et al. [51] tackle realtime data and pre-existing CD, focusing on detecting causes through a data-driven approach and comprehensively handling CD in terms of time, content and manner.
The proposed DLVSW-CDTD algorithm
The DLVSW-CDTD algorithm is an algorithm for handling CD in cloud computing environments.In the following, we will focus on the algorithm's relation to cloud computing.
In cloud computing environments, updating must account for the aging and overfitting of the model.To enhance the accuracy and robustness of the model, the following measures can be adopted.By utilizing the realtime data processing capabilities of cloud computing, the performance of the model is monitored and evaluated in real time.This ensures timely detection of aging or overfitting phenomena.In practical applications, occurrences of CD require corresponding adaptation of the data or modification of mining models depending on the CD type.Therefore, the new algorithm proposed in this paper is suitable for detecting multiple types of CD.
To address the issue of CD in the process of FP mining, the DLVSW-CDTD algorithm (Dual-layer Variable Sliding Window-CD Type Detection), which utilizes a dual-layer variable sliding window model to handle CD phenomena in data streams and applies it to FP mining.The model utilizes two algorithms based on CD detection and type detection of the variable sliding window, aiming to handle various types of CD problems and ensure that FPs are based on the latest trends in the data.
The DLVSW-CDTD algorithm framework
In this section, the proposed FP mining algorithm is introduced.The algorithm model framework is primarily divided into five parts, as shown in Fig. 4.After the data pre-processed, they first pass through the first module, which applies the Variable-Size Window Drift Detection algorithm (VSW-DD) to detect CD.Upon detecting CD, the second module is used to adapt or modify the mining model accordingly for Multi-type Concept Detection.This algorithm is suitable for data mining scenarios with multiple CDs.
The DLVSW-CDTD is an algorithmic framework for handling CD in cloud computing environments.In the following, we will focus on the aspects of the framework that are specifically related to cloud computing.
In this algorithm, based on the concept of CD detection, the window size is initially set manually, and is then adjusted to adapt to the data stream according to the FP changes, the potential distribution changes of the data and the type of CD.Then, different types of CD are detected using the length of the embedded dual-layer window, and different attenuation coefficients are applied to reduce the impact of different types of CD, to dig out the latest concepts of the FPs, and to reduce the impact of CD on mining.When the data are constantly updated, at each instance a new data pane is inserted and the presence of mutation or gradient is assessed.
In the first stage of the DLVSW-CDTD framework, involving the algorithm initialization.The second stage is the application of the VSW-CDD algorithm for concept detection with the variable window size, and includes two aspects.The first aspect is the FP set mining using the initialized window, and saving the results in a monitoring prefix tree through the insertion of a In the fourth stage (Step6-Step12), according to the data stream characteristics of the different CD types, the embedded dual-layer window is adapted based on the detected effect to analyze the data within the window.Finally, in the fifth stage, the latest set of FPs is output.
Design of the DLVSW-CDTD algorithm
In the algorithm design of this section, the associated symbolic definitions are shown in Table 4.
Step 1: window size initialization and frequent pattern mining
The window is defined through its initial size and other parameters such as pane size, minimum change threshold and minimum support threshold.The relevant parameters are set according to literature values, and can be adjusted according to the experimental results.After window initialization, the FP-growth algorithm is used to mine the FP set and save the results in a simple compact prefix tree (SCP-Tree), as described in the following.
Step 2: build the prefix tree
The SCP-Tree structure adopted in this paper is a simple and compact tree-like data structure similar to FP-Tree.The FPs are inserted in the SCP-Tree incrementally when each data pane arrives, and the tree is dynamically adjusted through branch sorting.
Let p be a non-root node in the SCP Tree.If p is a regular node, then it contains 5 fields.The structure is shown in Fig. 5, where p.item is the project name; p.count records the support count of the item; p.next points to The change rate of the frequent term set at the t j within the short window relative to the t i LFChange ti (t j ) The change rate of the frequent term set at the t j within the long window relative to the t i the next node of the same project name (represented by a link in the tree; if no relevant branch exists, the link to the node from the header table is inserted); p.parent records the parent node, and p.children is the header pointer of the child node.
If p is a tail node, the structure diagram is shown in Fig. 6.The tail node contains two additional fields: p.precount records the support count before the checkpoint and p.curcount represents the support counts after recording the checkpoint.The initial values of both these fields are 0. The SCP-Tree also contains a Head_Table structure, which records the total support count per item in the tree and ranks the counts in descending order.Starting with an empty tree, each incoming data pane needs to be inserted into the SCP-Tree.
Step 3: insert a new pane and update the prefix tree
After completing the window initialization process, new data are added to the window by updating the support for all the relevant FPs.For new items included in the newly-arriving data, a new node is created and the support count of the items is added to the header table.To truly identify all new sets of frequent items, all individual items in the existing window need to be monitored through a support count update.After inserting the full pane, the prefix tree is scanned and updated.
Step 4: concept drift detection
CD detection in the DLVSW-CDTD Algorithm is divided into two parts, namely the detection of the process variables that cause CD, and the other is the detection of the change of the mining results caused by the CD.
Step 5: detection of variables based on causing conceptual drift
Hypothesis testing is a method to infer the distribution characteristics of the population data based on the characteristics of the distribution of the sample data [23].Its purpose is to judge whether there is a sampling error or essential difference between samples, or between a sample and the population.The common types of test assumptions include the F-test, the T-test and the U-test.In CD detection based on the data distribution, the distance function is commonly adopted to quantify the distribution relationship of the old and new data samples [24].
Step 6: detection of mining differences based on the occurrence of conceptual drift
In data mining, the underlying distribution of the data stream changes due to CD, so that the FP set changes accordingly.To better reflect recent changes, old concepts must be replaced immediately.In the problem of FP mining, the concept of an FP refers to the set of FPs, which is used as the target variable of the model description.Then, the change in FPs determines the difference between the two concepts.The concept of FP change is defined as follows: Let F t 1 and F t 2 represent the set of frequent terms at time points t 1 and t 2 , respectively.Thus, F + t 1 (t 2 ) = F t 2 − F t 1 is the set of new FPs at t 2 relative to t 1 , while F − t 1 (t 2 ) = F t 1 − F t 2 is the set of infrequent terms at t 2 but frequent items at t 1 .The rate of change of the set of frequent terms at t 2 relative to t 1 is defined as shown in Eq. 2.
where F t 1 , is the number of frequent item sets in the set F t 1 , and the rate of change is a value between 0 and 1.
The test defining this rate of change is FChange , and the return value According to the defined threshold of this change value, if the FChange calculated during the mining exceeds the threshold given by the user, the concept is considered to have changed.
Step 7: concept drift test index
Definition 3.1: The index of test CD R is defined and used to determine whether a data sample in the window has CD or not.It consists of two parts of the detection results, namely the detection results of data distribution (2) Specifically, φ is obtained according to the statistical test results of the Euclidean distance between windows, and the coefficient values of three tests are F test , t test and U test , and are used to distinguish the difference form of the distance distribution.When the Euclidean distance distribution between two windows passes the F-test and the t-test, the value of φ is 0; when the F-test but the U-test is passed, the value of φ is 1; when the F-test is passed, the value of φ is 2; when the F-test and the U-test fail, the value of φ is set to 3.
Step 8: window size adjustment
According to the sliding window algorithm design, a new window represents new information in the input data stream.Since our ultimate goal is to mine the set of frequent items on the data stream, after each new insertion pane, the amount of change FChange in the FP set is first determined.To improve the efficiency of the algorithm, we use the two sets F + and F − to represent new FPs and new infrequent patterns and track the changes of the associated FP sets at checkpoint CP.The two sets are updated after each insertion pane.
As shown in Fig. 7, the process of window size change is as follows.Concept changes are detected based on the associated patterns of the inspection node (CP) and are used to determine whether CD has occurred within the window when new data are inserted.The position of the inspection node is not fixed and moves forward accordingly as concept changes are detected.The initial position of the inspection node is marked using the TID identifier of the last data in the initialized window.The CD test index R is calculated after each pane insertion.
Step 9: long and short embedded dual-layer window design and FPM
In this section, the window design for two common CD types in the data stream, mutant drift and gradient drift, is introduced.To address the characteristics of these two drift types, a dual-layer window structure is designed.The window structure is shown in Fig. 8 A long window is divided near its head to create space for a short window.Therefore, each time a data input is detected, the short window is given priority for detection.If no CD is detected in the short window, the data in the long window are examined.As shown in Fig. 8, the dual window has its head on the right and the tail on the left.The head of the long window corresponds to a short window, which is responsible for detecting abrupt CD in the data stream, while the long window is used to detect gradual CD.
The long window size is represented by |LW|, the short window size by |SW|, and the relative relationship between them is determined through λ, calculated as shown in Eq. 6. λ is a preset value that determines the shape of the log membership function used, and remains unchanged across the split window regardless of the window size.
When the data stream begins to enter the window, after initialization in the dual-layer window, the parameter settings include the initial window size, the relative relationship of the long embedded window λ, the minimum support threshold δ, etc., as shown in Table 4.The FP-Growth algorithm is still used to mine the embedded long and short windows.The initial default window data represents the latest data concepts.Furthermore, in the dual-layer window model presented in this section, an attenuation module is designed for drift adaptation, which calculates the relative weighted support of an item by calculating the ratio of the weighted support count to the sum of all item counts.
Step 10: concept drift type detection mechanism
According to the designed dual-layer embedded window, the input data streams are subjected to inspection using long and short windows.Equations 7 and 8 are used to calculate the concept change of FPs in the long and short windows, in order to determine if CD has occurred in the window.
If the FP set of two time nodes t 1 and t 2 are consid- ered and SF t 1 and SF t 2 are used to represent short win- dows at these time points, then SF + t 1 (t 2 ) = SF t 2 − SF t 1 is the set of new sets of FPs with a short window at t 2 rela- tive to t 1 and SF − t 1 (t 2 ) = SF t 1 − SF t 2 is the set of terms where short windows are infrequent at t 2 , but frequent at t 1 .Then, the rate of change in the FP set at the short window t 2 relative to t 1 is SFChange t 1 (t 2 ) .The calcula- tion is shown in Eq. 7.
where SF t 1 is the number of items in the set SF t 1 , and the rate of change is a value between 0 and 1.Similarly, the formula for the rate of change LFChange t 1 (t 2 ) at the long window t 2 versus t 1 is shown in formula 8: When data enter the window, the changes in FPs in the embedded short window are first calculated.If the rate of change of the FP set is greater than or equal to the given threshold, it is considered that the data in the short window have undergone a mutation, thus the presence of CD is considered to have occurred in the entire window.Similarly, if the rate of change of FPs in the long window exceeds the threshold, the data in the window are gradually drifting.The determination of CD types can be seen in Table 5.
Step 11: concept drift adaptation
Based on the design of the CD detection mechanism, when CD is detected in the short window, mutant CD occurs.For such drift, the drift data in the window are eliminated, the detection node CP S of the short window is moved for- ward, and a new window ω is formed to continue mining.
Figure 9 shows the process of eliminating the effects of old data and forming a new window when mutant CD has occurred within the window.Consistent with the previous design, the data pane is inserted from left to right into the current window.Concept changes within a short window are measured against the detection node CPs, and the amount of changes to the FP set SFChange is calculated after each insertion pane.As shown in Fig. 9, after one or more data panes are inserted into the window, if a concept change is detected in the inspection node of a short window, the data information between the current checkpoint and the previous check node in the window is given a weight value of 0, which will eliminate the impact of mutant data.Then, a new window ω is formed and the checkpoint is moved to a new node CP S where concept change is detected.If no concept changes are detected within the short window, the FP changes within the long window.When When detecting CD in an embedded sliding window, the data within the window is considered as gradual CD.Therefore, an appropriate decay function is adopted to reduce the impact of drift on mining results, and then a new window size is formed to continue mining with the decay settings.
In the long and short embedded dual-layer window designed in this section, due to the input characteristics of the data stream, each data arrival to the window occurs at different times.Since a membership function effectively reflects the degree of membership of each element in the set, a logarithmic membership function is used to assign weights to each sample in the window.The calculation method of the weight is shown in Eq. 9. (9) where θ(i) indicates the weight of the data in bar i in the window (the data at the end of the window is considered as the first data); e is the natural constant; λ is a preset value, and λ determines the shape of the membership function used.
Step 12: weighted FP tree formulation
According to the modifications made to the mining model in the drift adaptation part of the algorithm in this section, a Weighted Simple Compact Pattern Tree (WSCP-Tree) is designed and constructed.It is also referred to as the weighted FP tree.A fuzzy membership function is utilized to detect gradient drift in the data, assigning different weights accordingly.The weight of the detected mutation drift data is set to 0.
If the current data do not drift and belong to the latest concept, their weight value is set to 1.The weight calculation for data exhibiting a gradient CD follows Eq. 9.
The structure design of the node and header pointer table for the WSCP-Tree proposed in this section is depicted in Figs. 10 and 11, respectively.The corresponding names are listed in Table 6.
As shown in Fig. 10, each node in the weighted FP tree consists of four regions, namely, the project name, the parent node, the weight value, and the next node of the same project.In Fig. 11, the weighted FP tree header pointer table structure contains the project name, weight, and the first node with the same domain value as the project name.
In the pseudo-code for DLVSW-CDTD algorithm, in lines 1-3 the window is initialized and the relevant parameters are set, and the TID of the last data of the initial window is set to the initial check node of the embedded dual-layer window.Starting from line 4, the process of detecting CD and adaptation drift is iterated; in row 5, new data are inserted in the window; in line 6, the corresponding FP set is updated.Lines 7-14 process the item set that becomes frequent or infrequent after the new pane insertion.CD is detected on line 15, the window is determined and the decay weight is assigned, and then the FP mining results are updated.
Experimental results and analysis
Based on the design of several modules, in this section a dual-layer variable window FP mining algorithm is proposed for CD type detection, the DLVSW-CDTD algorithm.The pseudo-code for the algorithm is shown below.All the steps described above are represented in this pseudo-code as follows, and some of the steps use function calls instead of detailed procedures.The pseudo-code for DLVSW-CDTD algorithm.
Introduction of the experimental environment and data set Experimental environment
A random transactional data generator provided in the open source data mining library SPMF [25] was used to generate a comprehensive transactional data set, as follows.
The meaning of letters in the name of the data set is: D: number of sequences in the data set; T: average number of item sets in each transaction datum; I: the average size of the item set of a potential frequent sequence; K: abbreviation of the number 1000; for example, Article 10,000 is denoted as Article 10 K in this paper.The dataset generated by this section is T20I6D100K.The main software environment is shown in Table 7.
Open source data mining library SPMF
We use a random transactional data generator provided in an open source data mining library SPMF [25] to generate a comprehensive transactional data set.The relevant information of the data set is introduced as follows.I: the average size of the item set in the potential frequent sequence, K: the abbreviation of the number 1000, such as the data in Article 10,000, are represented by the data in Article 10 K in this paper.The generated data set in this section is T20I6D100K.
Performance verification of the DLVSW-CDTD algorithm
In the first set of experiments, the DLVSW-CDTD Algorithm was evaluated for its ability to achieve two desired functions: detecting CD in the data stream and adjusting the window size after detecting the CD.To demonstrate how the DLVSW-CDTD Algorithm adjusted the window size following a CD occurrence, a dataset with a predefined CD at a specific location was used for the experiment.It was chosen to highlight the algorithm's effectiveness in adapting the window size in response to detected CD.
Dataset setting
For the frequent and infrequent items in the T20I6D200K-X dataset generated using the open source data mining library SPMF, the minimum support was set to 2%.Then, 50% of the frequent and infrequent items in the dataset were selected and a new set of items was formulated formulate, which was inserted at the end of the first dataset to synthesize a new data set with a clear CD point.
Related parameter setting and description
As per the DLVSW-CDTD algorithm's design, the first check node (i.e. the data of article 20 K) was located after the window initialization.When the algorithm detects a change in the concept of the data, the checkpoint will move to a new location.According to the dataset T20I6D200K-X's design, in the section 70 K data, there is a CD, so the R-value here must be greater than or equal to 3, and it is obvious from Fig. 12 that the experimental results were as expected.At the 70 K data, the R-value was 3, the algorithm considered the first 20 K data from the initial node to the checkpoint as drift (i.e.expired) data, removed them from the window, and adjusted the window size to 50 K.In addition, the checkpoint was moved to the point where the conceptual change was detected (i.e.Article 70 K data).Similarly, after Article 200 K data, a conceptual change was detected again, and the window was re-sized to 130 K, marked in bold in Fig. 12.
Window size change
To verify whether the value of the initial window and pane sizes will affect the CD detection effect and the process of window size adjustment, different values for the initial window and pane sizes were used to perform the above experiments without changing the minimum support and correlation thresholds.
The initial window and pane size were changed simultaneously to 10 K and 5 K, respectively, and the experimental effect is shown in Fig. 13.Then, only the initial window size was changed to 40 K while the pane size remained unchanged to 10 K.The experimental results are shown in Fig. 14.Finally, the initial window size was set to 20 K, and the pane size was changed to 5 K.The experimental effect is shown in Fig. 15.
As shown in Figs. 13, 14 and 15, for the same dataset, the size of the initial window and the data pane did not affect the CD detection effect.Both conceptual changes were detected in the data in Articles 70 K and 200 K, and the final window size used by the algorithm was the same.
In conclusion, the size of the initial window and pane will not affect the CD detection and window size adjustment of the DLVSW-CDTD algorithm; the algorithm will adaptively adjust the size of the window with the CD.Compared to fixed-size sliding window approaches, the set of FP obtains reflects the latest concept in the data.
Threshold and correlation analysis
For the experiments presented in this section, the BMS-Webview-1 dataset was used.For all experiments in this data set, taking into account relevant literature studies, the pane size, initial window size, minimum support and threshold FChange were set to 5 K, 20 K, 2% and 0.1, respectively, and then different change thresholds were used to show the effect of this parameter on concept change detection and window size.
From Figs. 16, 17, 18 and 20, it can be seen that for four different CD change thresholds, the window size changes as the input data increase.In Figs.16 and 17, the concept change threshold was set to 2, and it can be seen that the data stream was detected at 45 K, 65 K, 65 K and 85 K, and the concept test values exceeded 2. The size of the window was adjusted to new values, i.e. to 25 K, 20 K and 20 K, respectively.In Fig. 18, the CD test threshold was 3, and the CD was detected in the seventh pane and the last pane, and the window size decreased to 35 K and 30 K, respectively.Finally, in Fig. 19, no conceptual change was detected during the data mining process with a test-of-concept change threshold of 4. This is due to the high setting of the concept change threshold relative to the data stream.Thus, as shown in Fig. 20, the window size continuously increases as the pane inserts.
Since the DLVSW-CDTD algorithm performs two functions during CD detection section, namely data distribution analysis and mining result detection, R consists of two parts, namely the FP change index FChange and the test index φ of the data distribution.The value of R determines whether CD has occurred.Therefore, the correlation between the two parameters and R was analyzed.
As can be seen from Fig. 20, compared with the FP change index FChange , the value of φ, i.e. the test index of the data distribution, affects the R-value more, which is also in line with the experimental expectation.When there are more input data and FP tends to saturate, the change of FChange will become smaller.
Dual-layer window concept drift detection
In this experiment, the proposed dual-layer window detection algorithm DLVSW-CDTD is compared to other algorithms which can timely detect mutant and gradient CD.The data in the BMS-Webview-1 were used for the comparison.Because part of the detection steps of the DLVSW-CDTD algorithm directly call the CD detection part of the variable sliding window, the size of the window must change with the detected CD, thus directly verifying whether the proposed DLVSW-CDTD algorithm can detect two types of CD.
As shown in Fig. 21, as the data input size increases, the length of the embedded window of the FP change rate is also changing.The point beyond the threshold is marked in red, and corresponds to the mutant and gradient CD.According to the design of the algorithm, the corresponding window size must change; the effect is shown in Fig. 22.
Execution time
To further analyze the performance of the proposed algorithm, this execution time experiment was designed to test the running time of each function of the proposed DLVSW-CDTD Algorithm.The data volume was fixed to 60 K, or 60,000 data samples.The test content included the proportion of time spent on different types of CD detection and processing of dual-layer windows, the time ratio of frequent update modes, and the proportion of other operation times.The experimental results are shown in Table 8.
In conclusion, in the overall operation of the DLVSW-CDTD algorithm, the time consumption is mainly concentrated in the update of FP modes.The main purpose of the algorithm is to construct the weighted FP tree, while the time consumed in data reading, CD detection and drift adaptation to the modules is small.These processing steps are characterized by an overall fast calculation speed and a high processing efficiency, so the time proportion spent on them was less than 10%.In addition, the absolute operation time of each module was also relatively stable, and it did not produce relatively large fluctuations as the size of the initial window changed.
(a) and Fig. 2(b), variable sliding windowbased methods handle conceptual changes caused by the most recent entry window by expanding and contracting the window's width.As shown in Fig. 2(a), when the recently entered data do not cause a conceptual change, the sliding window scales from size N to N + new ′ − new to handle the latest data.As shown in Fig. 2(b), when the data in the
Fig. 1 aFig. 2 a
Fig. 1 a Handling the transaction T new .b Handling the transaction T new ′
Fig. 3
Fig. 3 Common type of concept drift
1 ,Fig. 5 Fig. 6
Fig. 5 Schematic diagram of the conventional node structure in the SCP Tree
( 6 )Fig. 7
Fig. 7 Schematic diagram of window size change and check node change
Fig. 8
Fig. 8 Schematic diagram of the embedded dual-layer window
Fig. 9
Fig. 9 Schematic diagram of detecting node change and window size change
Fig. 10 Fig. 11
Fig. 10 Schematic diagram of the node construction of the weighted frequent pattern tree
Fig. 24
Fig.24 Comparison of the memory consumption of the algorithm
Table 2
Data Stream DS
Table 3
Symbol description of the window model
Table 4
Symbol descriptionSFChange ti (t j ) SymbolMeaningθ(i)The weight of the data in bar i in the window
Table 5
Concept drift type determination of data in the window
Table 6
Introduction to the weighted frequent pattern tree node name
Table 7
Main software environment and the version
Table 8
Running time proportion of each function of the algorithm under different initial window sizes Comparison of the execution times for the different algorithms | 2024-02-14T16:09:30.930Z | 2024-02-12T00:00:00.000 | {
"year": 2024,
"sha1": "e1211980437b285afa8c90562c511d42cbc784a5",
"oa_license": "CCBY",
"oa_url": "https://journalofcloudcomputing.springeropen.com/counter/pdf/10.1186/s13677-023-00566-9",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e947203a7e272d915c466c85cc587fe803bf989d",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
246701738 | pes2o/s2orc | v3-fos-license | Prognostic value of pulmonary ultrasound score combined with plasma miR‐21‐3p expression in patients with acute lung injury
Abstract Purpose The aim of this study was to explore the value of the combination between lung ultrasound score (LUS) and the expression of plasma miR‐21‐3p in predicting the prognosis of patients with acute lung injury (ALI). Patients and methods A total of 136 ALI patients were divided into survival (n = 86) and death groups (n = 50), or into low/middle‐risk (n = 77) and high‐risk groups (n = 59) according to APACHE II scores. Bioinformatics was used to explore the mechanism of action of miR‐21‐3p in humans. Real‐time fluorescent quantitative PCR was used to detect the expression of miR‐21‐3p in plasma, and LUS was recorded. Receiver operator characteristic (ROC) curve and Pearson correlation were also used. Results The LUS and expression level of plasma miR‐21‐3p in the death and high‐risk groups were significantly higher than those in the survival and low/middle‐risk groups (p < 0.01 and p < 0.05). miR‐21‐3p expression leads to pulmonary fibrosis and promotes the deterioration of ALI patients by regulating fibroblast growth factor and other target genes. ROC curve analysis showed that the best cutoff values for LUS and plasma miR‐21‐3p expression were 18.60 points and 1.75, respectively. LUS score and miR‐21‐3p combined predicted the death of ALI patients with the largest area under the curve (0.907, 95% CI: 0.850–0.964), with sensitivity and specificity of 91.6% and 85.2%, respectively. The expression level of plasma miR‐21‐3p was positively correlated with LUS in the death group (r = 0.827, p < 0.01). Conclusion LUS and expression level of miR‐21‐3p in plasma are correlated with the severity and prognosis of ALI patients, and their combination has a high value for the prognostic assessment of ALI patients.
have received treatment and can cooperate with the researcher. The exclusion criteria were as follows: (1) patients with onset over 24h prior and unclear clinical diagnosis; (2) presence of complications such as severe infection, malignant tumor, diseases of the blood system, autoimmune diseases, viral hepatitis, tuberculosis, bronchial asthma, and other respiratory diseases; and (3) incomplete medical records or inability to cooperate with the researcher.
| Research methods
A retrospective study was conducted to analyze the data of ALI patients at admission, including age, gender, body mass index, heart rate, respiratory rate, and underlying diseases. According to the diagnosis of ALI as the starting point of the study, and the recovery and discharge or death of ALI patients as the endpoint of the study, patients were divided into survival (86 cases) and death groups (50 cases).
The acute physiology and chronic health evaluation II (APACHE II) scoring system software was used to calculate the APACHE II score value on the onset day of all ALI patients. The patients were divided into two groups: the high-risk group had APACHE II score >20 for 59 cases and, in the low/medium-risk group, APACHE II score ≤20 was present in 77 cases.
| Pulmonary ultrasound scoring
A Sono ultrasonic instrument with a phased array convex probe, which was 3.5-10.0 MHz, was used. All patients were in the supine position and examined by the same specially trained ultrasound observer, who focused on 12 lung areas, including bilateral anterior chest wall, lateral chest wall, and upper and lower posterior chest walls. The quantitative assessment standard for ultrasonic imaging provides a point score to the most serious performance evaluation: areas of consolidation of the lung (3 points), severely reduced lung ventilation area (2 points), pulmonary ventilation area (1 point), and normal ventilation area (0 points). The sum total of the score of the 12 lung areas is the LUS (score from 0-36 points); the higher the LUS, the more serious the illness.
The protein interaction network (PPI) based on the obtained target genes was combined with the STRING database (https://strin g-db. org/) to analyze its protein-protein interaction network (PPI network). The CytoHubba plug-in was used to screen 20 HUB genes.
The HUB target genes obtained were enriched and analyzed using the clusterProfiler package of R software, and the ggploT2 package was used to visualize the results.
| Statistical methods
The SPSS19.0 statistical software was used for analysis, measurement data were represented by (x ± S), and group t-tests were used for comparison of two independent samples. The chi-squared test was used to compare groups. The value of LUS score combined with plasma miR-21-3p expression level in predicting the death of ALI patients was analyzed using receiver operating characteristic (ROC) curves. Areas under the curve (AUC) were compared using a Z-test. Pearson correlation was used to analyze the correlation between each index. The difference was statistically significant with p < 0.05.
| Comparison of demographic data
Compared with the survival group, the APACHE II score in the death group was significantly higher, and the difference was statistically significant (p < 0.05). There were no significant differences in gender, age, body mass index, underlying diseases, etiology, or respiratory rate and heart rate between the two groups (p > 0.05) ( Table 1).
| Bioinformatics analysis
From Targetscan, miRDB, and miRWalk, 3664, 12508, and 595 target genes were obtained, respectively. A total of 420 target genes were obtained with high accuracy through intersection ( Figure 1).
The protein-protein interaction of these target genes was analyzed using the STRING database, and the protein interaction network results of the interaction between target genes were obtained ( Figure 2). Cytohubba was used to obtain 20 hub genes by an MCC algorithm ( Figure 3). The enrichment analysis of hub target genes showed that the cell sets of these hub target genes were divided into the spliceosomal complex, catalytic step 2 spliceosome, etc. The molecular functions for these include disordered domain-specific binding, basal transcription machinery binding, etc., important in processes including the fibroblast growth factor-receptor signaling pathway, cellular response to fibroblast growth factor stimulus, etc.
KEGG pathway analysis showed that these target genes might be involved in the regulation of endometrial cancer, breast cancer, etc.
| LUS score and plasma miR-21-3p expression levels
LUS score and plasma miR-21-3p expression levels in the death group were significantly higher than those in the survival group (p < 0.01, Table 2). Compared with the low/medium-risk group, LUS score, plasma miR-21-3p expression level, and mortality were significantly increased in the high-risk group (p < 0.01, Table 3).
| Value of LUS score combined with plasma miR-21-3p expression
The with a statistically significant difference (p < 0.05, Figure 5).
| Correlation analysis
Pearson correlation analysis showed that plasma miR-21-3p expression level was positively correlated with the LUS score in the death group (r = 0.827, p < 0.01). There was no significant correlation be-
| CON CLUS ION
In conclusion, LUS score and plasma miR-21-3p expression level are correlated with the severity and prognosis of ALI patients and are expected to be used as new biomarkers for ALI. The two combined tests have high value for predicting the prognosis of ALI patients, and also provide a new idea for understanding the pathogenesis of ALI, as well as targeted therapy. However, this study is at a preliminary exploratory stage, and more prospective studies are needed to further confirm the value of miR-21-3p for ALI patients.
ACK N OWLED G M ENT
We sincerely thank some students for helping us in data collection and collation.
CO N FLI C T S O F I NTE R E S T
The authors declare no potential conflicts of interest.
AUTH O R CO NTR I B UTI O N S
LR conceived study design. ZG and GF conceived the content concept. GF and ZG performed the data collection, extraction, and analyzed the data. LR and WQ interpreted and reviewed the data and drafts. WQ reviewed the final draft. All authors were involved in literature search, writing the study, and had final approval of the submitted and published versions. All authors contributed to data analysis, drafting, or revising the article, have agreed on the journal to which the article will be submitted, gave final approval of the version to be published, and agree to be accountable for all aspects of the work.
DATA AVA I L A B I L I T Y S TAT E M E N T
All data generated or analyzed during this study are included in this published article. | 2022-02-11T06:24:36.674Z | 2022-02-10T00:00:00.000 | {
"year": 2022,
"sha1": "0c96deffb14be816e4cc3f6f9b1b40be6bcc98ac",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcla.24275",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "22697a669b15c085d9c1ebe5f6c5879382bb8ed2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263767388 | pes2o/s2orc | v3-fos-license | Mixed Gated/Exhaustive Service in a Polling Model with Priorities
In this paper we consider a single-server polling system with switch-over times. We introduce a new service discipline, mixed gated/exhaustive service, that can be used for queues with two types of customers: high and low priority customers. At the beginning of a visit of the server to such a queue, a gate is set behind all customers. High priority customers receive priority in the sense that they are always served before any low priority customers. But high priority customers have a second advantage over low priority customers. Low priority customers are served according to the gated service discipline, i.e. only customers standing in front of the gate are served during this visit. In contrast, high priority customers arriving during the visit period of the queue are allowed to pass the gate and all low priority customers before the gate. We study the cycle time distribution, the waiting time distributions for each customer type, the joint queue length distribution of all priority classes at all queues at polling epochs, and the steady-state marginal queue length distributions for each customer type. Through numerical examples we illustrate that the mixed gated/exhaustive service discipline can significantly decrease waiting times of high priority jobs. In many cases there is a minimal negative impact on the waiting times of low priority customers but, remarkably, it turns out that in polling systems with larger switch-over times there can be even a positive impact on the waiting times of low priority customers.
Introduction
There are three ways in which one can introduce prioritisation into a polling model.The first type of priority is by changing the server routing such that certain queues are visited more frequently than other queues [6,19].This type of prioritisation is quite common in wireless network protocols.A second type of prioritisation is through differentiation of the number of customers that are served during each visit to a queue.This type of prioritisation is inflicted through the usage of different service disciplines.For example, one can serve all customers in a queue before switching to the next queue (exhaustive service), or one can limit the amount of customers that are served to, e.g., only those customers present at the arrival of the server at the queue (gated service).Typically, this will have a negative impact on the waiting times of the customers in queues that are not served exhaustively.The third way of introducing priorities is by changing the order in which customers are served within a queue, which is a popular technique to improve performance of production systems, cf.[2,22].The present paper introduces a new service discipline, referred to as mixed gated/exhaustive service, that combines the last two types of prioritisation.
In the polling model considered in the present paper a single server visits N queues in a fixed, cyclic order.Some, or even all, of the queues contain two types of customers: high and low priority customers.For these queues we introduce a new service discipline, called mixed gated/exhaustive service based on the priority level of the customer.A polling system with high and low priority customers in a queue with purely gated or exhaustive service has been studied in [1,2].The mixed gated/exhaustive service discipline can be considered as a mixture of these two service disciplines where low priority customers receive gated service and high priority customers receive exhaustive service.A more detailed description is given in Section 2. Since the number of customers served during one visit in a queue with gated service is different from the number served during a visit with exhaustive service, the mixed gated/exhaustive service discipline introduced in the present paper combines the second and the third type of prioritisation.A variation of the model under consideration, namely a polling system where low priority customers are served only if there are no high priority customers present in any of the queues, has been studied in [12].
Polling models have been studied for many years and because of their practical relevance many papers on polling systems have been written in a mixture of application areas.The survey of Takagi [21] on polling systems and their applications from 1991 is still very valuable, although the last couple of years interest in polling models has revived, partly triggered by many new applications.The motivation for the present paper is to present a service discipline that combines the benefits of the gated and exhaustive service disciplines for priority polling models.The specific application that attracted our attention is in the field of logistics.Consider a make-to-order production system with a single production capacity for multiple products.In many firms encountering this situation, the products are produced according to a fixed production sequence.The production capacity, where the production orders queue up, can be represented as a polling model by identifying each product with a queue and the demand process of a product with the arrival process at the corresponding queue.For a more detailed description of fixed-sequence strategies in the context of make-to-stock production situations, see [23].In the context of this production setting, the situation with two or more priority levels -as studied in detail in the present paper -is oftentimes encountered in practice, where production departments have to supply both internal and external customers, the latter of which is commonly given a preferential treatment.A different application stems from production scheduling in flexible manufacturing systems where part types are often grouped with other types sharing (almost) similar characteristics, such that no change of machine configuration, i.e. setup time, is required when switching between these part types (see, e.g., [17]).Since no setup time is required to switch between these types, it can be seen as customers of different types being served in the same queue.The introduction of priorities can be useful to efficiently differentiate between different parts grouped within one queue.These two applications make the practical relevance of the inclusion of multiple priority levels in the studied polling model evident.Finally, we should keep in the back of our mind that the results of the present paper are certainly not limited to these production settings, but may be used in many other fields where polling models arise, such as communication, transportation and health care (e.g., surgery procedures where an urgency parameter is assigned to each patient).
The present paper is structured as follows: first we discuss the model in more detail and we determine the generating functions (GFs) of the joint queue length distribution of all customers at visit beginnings and completions of each queue.In Section 4 we determine the Laplace-Stieltjes Transforms (LSTs) of the distributions of the cycle time, visit times and intervisit times.These distributions are used to determine the marginal queue length distributions and waiting time distributions of high and low priority customers in all queues.The LST of the waiting time distribution is used to compute the mean waiting time of each customer type.A pseudo-conservation law for these mean waiting times is presented in Section 7. Furthermore, we introduce some numerical examples to illustrate typical features of a polling model with mixed gated/exhaustive service.Finally we discuss possible extensions and future research on the topic.
Notation and model description
The model considered in the present paper is a polling model which consists of N queues, labelled Q 1 , . . ., Q N .Throughout the whole paper all indices are modulo N , so Q N +1 stands for Q 1 .The queues are visited by one server in a fixed, cyclic order: 1, 2, . . ., N, 1, 2, . . . .The switch-over time of the server from Q i to Q i+1 is denoted by S i with LST σ i (•).We assume that all switch-over times are independent and at least one switch-over time is strictly greater than zero.Each queue contains two customer types: high and low priority customers, although the analysis allows any number (greater than zero) of customer types per queue.High priority customers in Q i are called type iH customers and low priority customers in Q i are called iL customers, i = 1, . . ., N .Type iH customers arrive at Q i according to a Poisson process with intensity λ iH , and type iL customers arrive at Q i according to a Poisson process with intensity λ iL .The service times of type iH and iL customers are denoted by B iH and B iL , with LSTs β iH (•) and β iL (•).All service times are assumed to be independent.We introduce the notation ρ iH = λ iH E(B iH ) and similarly ρ iL = λ iL E(B iL ).The total occupation rate of the system is ρ = N i=1 ρ i , where ρ i = ρ iH + ρ iL is the fraction of time that the server visits Q i .Service of the customers is gated for low priority customers and exhaustive for high priority customers.In more detail: each queue actually contains two lines of waiting customers: one for the low priority customers and one for the high priority customers.At the beginning of a visit to Q i , a gate is set behind the low priority customers to mark them eligible for service.High priority customers are always served exhaustively until no high priority customer is present.When no high priority customers are present in the queue, the low priority customers standing in front of the gate are served in order of arrival, but whenever a high priority customer enters the queue, he is served before any waiting low priority customers.Service is non-preemptive though, implying that service of a type iL customer is not interrupted by an arriving type iH customer.The visit to Q i ends when all type iL customers present at the beginning of this visit are served and no high priority customers are present in the queue.Notice that if the arrival intensity λ iH equals 0, then Q i is served completely according to the gated service discipline.Similarly we can set λ iL = 0 to obtain a purely exhaustively served queue.Both the gated and the exhaustive service discipline fall into the category of branching-type service disciplines.These are service disciplines that satisfy the following property, introduced by Resing [16] and Fuhrmann [10].
Property 2.1 If the server arrives at Q i to find k i customers there, then during the course of the server's visit, each of these k i customers will effectively be replaced in an i.i.d.manner by a random population having probability generating function h i (z 1 , . . ., z N ), which can be any N -dimensional probability generating function.
, where β i (•) denotes the service time LST of an arbitrary customer in Q i , and λ i denotes his arrival rate.For exhaustive service h i (z 1 , . . ., z N ) = π i j =i λ j (1 − z j ) , where π i (•) is the LST of a busy period distribution in an M/G/1 system with only type i customers, so it is the root in (0, 1] of the equation π i (ω) = β i (ω + λ i (1 − π i (ω))), ω ≥ 0 (cf.[7], p. 250).Property 2.1 is not satisfied if Q i receives mixed gated/exhaustive service, because the random population that replaces each of these customers depends on the priority level.In the next section we circumvent this problem by splitting each queue into two virtual queues, each of which has a branching-type service discipline.This equivalent polling system satisfies Property 2.1, so we can still use the methodology described in [16] to find, e.g., the joint queue length distribution at visit beginnings and completions.All other probability distributions that are derived in the present paper can be expressed in terms of (one of) these joint queue length distributions.
Joint queue length distribution at polling epochs
In the present section we analyse a polling system with all queues having two priority levels and receiving mixed gated/exhaustive service, but in fact each queue would be allowed to have any branching-type service discipline.Denote the GF of the joint queue length distribution of type 1H, 1L, . . ., N H, N L customers at the beginning and the completion of a visit to Q i by respectively As discussed in the previous section, the polling model under consideration does not satisfy Property 2.1, which often means that an exact analysis is difficult or even impossible.For this reason we introduce a different polling system that does satisfy Property 2.1 and has the same joint queue length distribution at visit beginnings and endings.The equivalent system contains 2N queues, denoted by The switch-over times S i , i = 1, . . ., N , are incurred when the server switches from Q iL * to Q (i+1)H * ; there are no switch-over times between Q iH * and Q iL * .Customers in this system are so-called "smart customers", introduced in [4], meaning that the arrival rate of each customer type depends on the location of the server.Type iH * customers arrive in Q iH * according to arrival rate λ iH unless the server is serving Q iL * .When the server is serving Q iL * , the arrival rate of type iH * customers is 0. The reason for this is that we incorporate the service times of all type iH customers that would have arrived during the service of a type iL customer, in the original polling model, into the service time of a type iL * customer.In our alternative system, type iL * customers arrive with intensity λ iL and have service requirement B * iL with LST β * iL (•).There is a simple relation between B iL and B * iL , expressed in terms of the LST: B * iL is often called completion time in the literature, cf.[20], with mean the gate of Q iL * being set at the visit beginning of Q iH * .The synchronised gated service discipline is introduced in [15] and does not strictly satisfy Property 2.1.However, it does satisfy a slightly modified version of Property 2.1 that still allows for straightforward analysis; see [3] for more details.During a visit to Q iL * only those type iL * customers are served that were present at the previous visit beginning to Q iH * .The joint queue length distribution at a visit beginning of Q iH * in this system is the same as the joint queue length distribution at a visit beginning of Q i in the original polling system.Similarly, the joint queue length distribution at a visit completion of Q iL * is the same as the joint queue length distribution at a visit completion of Q i in the original polling system.In terms of the GFs: where z is a shorthand notation for the vector (z 1H , z 1L , . . ., z N H , z N L ).The GFs of the joint queue length distributions at a visit beginning and completion of Q iH * are related in the following manner: , we can repeat these steps N times to obtain a recursive expression for V b iH * (•).This recursive expression is sufficient to compute all moments of the joint queue length distribution at a visit beginning to Q iH * by differentiation, but the expression can also be written as an infinite product which converges if and only if ρ < 1.We refer to [16] for more details.
Cycle time, visit time and intervisit time
We define the cycle time C i as the time between two successive visit beginnings to Q i , i = 1, . . ., N .The LST of the distribution of C i , denoted by γ i (•), can be expressed in terms of V b i (•) because the type iL customers that are present at the beginning of a visit to Q i are those type iL customers that have arrived during the previous cycle.It is convenient to introduce the notation , where z iH and z iL are the arguments that correspond respectively to type iH and iL customers.Using this notation we can write: Hence, the LST of the cycle time distribution is: , which does not depend on i.Higher moments of the cycle time distribution do depend on the cycle starting point.
We define the intervisit time I i as the time between a visit completion of Q i and the next visit beginning of Q i .The type iH customers present at the beginning of a visit to Q i are exactly those type iH customers that arrived during the previous intervisit time I i .Hence, , where I(•) is the LST of the distribution of I i .This leads to the following expression for the LST of the intervisit time distribution of Q i : The LSTs of the distributions of the cycle time and intervisit time are needed later in this paper.For the visit time of Q i , V i , we mention the LST here for completeness but it will not be used later: 5 Waiting times and marginal queue lengths
High priority customers
Since high priority customers are served exhaustively, we can use the concept of delay-cycles, sometimes called T -cycles (cf.[21]), introduced by Kella and Yechiali [14] for vacation models to find the waiting time LST of a type iH customer, where waiting time is understood as the time between arrival of a customer into the system and the moment when the customer is taken into service.The waiting time plus service time will be called sojourn time of a customer.When it comes to computing waiting times in a polling system with priorities, one can use delay-cycles for any queue that is served exhaustively, cf.[1,2].A delay-cycle for a type iH customer is a cycle that starts with a certain initial delay at the moment that the last type iH customer in the system has been served.In our model this initial delay is either the service of a type iL customer, B iL , or (if no type iL customer is present) an intervisit period I i .The delay cycle ends at the first moment after the initial delay when no type iH customer is present in the system again.This is the moment that all type iH customers that have arrived during the delay, and all of their type iH descendants, have been served.In [1] delay-cycles have been applied to a polling system with two priority levels in an exhaustively served queue.For a type iH customer in the polling model in the present paper, the same arguments can be used to compute the LST of the waiting time distribution.The fraction of time that the system is in a delay-cycle that starts with the service time B iL of a type iL customer is ρ iL 1−ρ iH , and the fraction of time that the system is in a delay-cycle that starts with an intervisit period We can use the Fuhrmann-Cooper decomposition [11] to obtain the LST of the waiting time distribution of a type iH customer, because from his perspective the system is an M/G/1 queue with server vacations.The vacation is the service time B iL of a type iL customer with probability ρ iL 1−ρ iH , and an intervisit time I i with probability 1−ρ i 1−ρ iH .This leads to the following expression for the LST of the waiting time distribution of a type iH customer: Equation (5.1) is similar to the equation found in [1] for high priority customers in an exhaustive queue.Note that the intervisit time I i is different though, with LST I i (•) as defined in Equation (4.2).
The GF of the marginal queue length distribution of type iH customers can be found by applying the distributional form of Little's Law [13] to the sojourn time distribution: This leads to the following expression: (5.2)
Low priority customers
In this subsection we determine the GF of the marginal queue length distribution of type iL customers, and the LST of the waiting time distribution of type iL customers.In order to obtain these functions, we regard the alternative system with 2N queues as defined in Section 3. The number of type iL customers in the original polling system and their waiting time (excluding the service time) have the same distribution as the number of type iL * customers and their waiting time (again excluding the service time, which is different) in the alternative system.From the viewpoint of a type iL * customer, the system is an ordinary polling system with synchronised gated service in Q iL * .
We apply the Fuhrmann-Cooper decomposition to the alternative polling model with 2N queues and type iL * customers having completion time B * iL .Using arguments similar as in the derivation of Equation (3.7) in [3], we find the general form of the GF of the marginal queue length distribution: where ρ * iL = ρ iL 1−ρ iH and β * iL (•) is given by (3.1).Furthermore, N * iL|I end and N * iL|I begin are the number of type iL * customers at respectively the visit beginning and visit completion of Q iL * .The visit beginning corresponds to the end of the intervisit period I iL , and the visit completion corresponds to the beginning of the intervisit period.Substitution into (5.3)leads to the following expression: where we use that ), because this is the mean number of type iL * customers that arrive during the intervisit time of Q iL * .
Applying the distributional form of Little's Law to (5.4), we obtain the LST of the sojourn time distribution of a type iL customer.Since the sojourn time is W iL + B * iL , the LST of the waiting time distribution immediately follows: . (5.5)
Moments
Differentiation of the waiting time LSTs derived in the previous section leads to the following mean waiting times: where B iH,res denotes a residual service time of a type iH customer, with E(B iH,res ) = . We use a similar notation for the residual service time of a type iL customer, the residual intervisit time, and residual cycle time.Furthermore, X iH and X iL are respectively the number of type iH and type iL customers at the beginning of a visit to Q i , so E(X iH X iL ) is obtained by differentiating V b i (z iH , z iL ) with respect to z iH and z iL (and then setting z iH = z iL = 1).
We now present an alternative, direct way to obtain the mean waiting time for a type iL customer by conditioning on the event that an arrival takes place in a visit period, or in an where S = N i=1 S i , and K i is the number of priority levels in Q i .In this expression Z ii is the amount of work at Q i when the server leaves this queue and depends on the service discipline.It is well-known that for gated service, E(Z ii ) = ρ 2 i E(C) and for exhaustive service, E(Z ii ) = 0.The pseudo-conservation law also holds for polling systems with mixed gated/exhaustive service in some or all of the queues.If Q i receives mixed gated/exhaustive service, we have K i = 2, and E(Z ii ) = ρ iL ρ i E(C).
Example 1
In order to illustrate the effect of using a mixed gated/exhaustive service discipline in a polling system with priorities, we compare it to the commonly used gated and exhaustive service disciplines.In this example we use a polling system which consists of two queues, Q 1 and Q 2 .Customers in Q 1 are divided into high priority customers, arriving with arrival rate λ 1H = 2 10 , and low priority customers, with arrival rate λ 1L = 4 10 .Customers in Q 2 all have the same priority level and arrive with arrival rate λ 2 = 2 10 .All service times are exponentially distributed with mean 1.The switch-over times S 1 and S 2 are also exponentially distributed with mean 1, which results in a mean cycle time of E(C) = 10.The service discipline in Q 2 is gated, the service discipline in Q 1 is varied: gated, exhaustive and mixed gated/exhaustive.Results for a queue with two priority levels and purely gated or exhaustive service are obtained in [1].
Table 1 displays the mean and the variance of the waiting times of the three customer types under the three service disciplines.We conclude that the mixed gated/exhaustive service is a major improvement for the high priority customers in Q 1 , whereas the mean waiting times of the low priority customers in Q 1 and the customers in Q 2 hardly deteriorate.Of course in systems where ρ 1H is quite high, the negative impact can be bigger and one has to decide exactly how far one wants to go in giving extra advantages to customers that already receive high priority.When comparing the mixed gated/exhaustive strategy to a system with purely exhaustive service in Q 1 , we conclude that the improvement is not so much in the mean waiting time for high priority customers, but mostly in the mean and variance of the waiting time for customers in Q 2 .
Gated Exhaustive
It is noteworthy that the mixed gated/exhaustive service discipline does not always have a negative effect on the mean waiting time of low priority customers in Q 1 , E(W 1L ), compared to the gated service discipline.If, for example, the switch-over times are taken to be deterministic with value 10, the mean waiting time for low priority customers is significantly less for the mixed gated/exhaustive service than for gated service, as can be seen in Table 2. Compared to gated service, type 1H customers benefit strongly from the mixed gated/exhaustive service discipline, and even type 1L customers benefit from it.The mean waiting time for customers in Q 2 has increased, but only marginally.
In order to get more understanding of this surprising behaviour of the waiting time of low priority customers as function of the arrival intensities λ 1H and λ 1L , we use a simplified model which leads to more insightful expressions, but displays the same characteristics as the model that was analysed in the previous paragraph.Instead of analysing a polling model, we analyse an M/G/1 queue with multiple server vacations.The queue, denoted by Q 1 to use familiar notation, contains high (type 1H) and low (type 1L) priority customers.Also here high priority customers are served before low priority customers.The service times of both customers types are exponentially distributed with mean 1.This is for notational reasons only, for this example we actually only require that both service times are identically distributed.One server vacation has a fixed length S. If the server does not find any customers waiting upon arrival from a vacation, he takes another vacation of length S and so on.In order to stay consistent with the notation used earlier, we denote the occupation rate of high and low priority customers by respectively ρ 1H and ρ 1L .The total occupation rate is ρ = ρ 1 = ρ 1H + ρ 1L .Note that in this example λ 1H = ρ 1H and λ 1L = ρ 1L .We now compare the mean waiting times of type 1L customers in the system with purely gated service and the system with mixed gated/exhaustive service.For this simplified model, we can write down explicit expressions that have been obtained by differentiating the LSTs and solving the resulting equations.These expressions could also have been obtained by using Mean Value Analysis (MVA) for polling systems [22,24].
Gated service: Mixed G/E service: Now we analyse the behaviour of these waiting times as we vary λ 1H between 0 and ρ, while keeping λ 1H + λ 1L = ρ constant.Substitution of λ 1H = 0 shows that the mean waiting times in the gated and mixed gated/exhaustive system are equal: Letting λ 1H → ρ leads to the following expressions: .
Two interesting things can be concluded from these two equations for the case λ 1H → ρ: • for fixed ρ, E(W 1L ) in a gated system is always less than E(W 1L ) in a mixed gated/exhaustive system, • the difference between E(W 1L ) in a gated system and E(W 1L ) in a mixed gated/exhaustive system does not depend on S.
Focussing on the mean waiting time of type 1L customers only, we conclude that a gated system performs the same as a mixed gated/exhaustive system as ρ 1L = ρ, and that a gated system always performs better when ρ 1L → 0. For 0 < ρ 1L < ρ the vacation time S determines which system performs better.By taking derivatives of (8.1) and (8.2) with respect to ρ 1H and letting ρ 1H → 0, one finds that the mean waiting time of a type 1L customer in a mixed gated/exhaustive system is less than in a purely gated system when ρ 1H → 0, if and only if S > 2ρ 1+ρ .Since a gated system always outperforms a mixed gated/exhaustive system when λ 1H → ρ, for S > 2ρ 1+ρ there must be (at least) one value of λ 1H for which the two systems perform the same.Further inspection of the derivatives gives the insight that in a gated system the relation between E(W 1L ) and λ 1H is a straight line, which can also be seen immediately from Equation (8.1).In a mixed gated/exhaustive system, the relation between E(W 1L ) and λ 1H is not a straight line, both the first and second derivative with respect to λ 1H are strictly positive.This means that for S ≤ 2ρ 1+ρ the gated system always performs better than the mixed gated/exhaustive system for any value of λ 1H > 0, and for S > 2ρ 1+ρ the mixed gated/exhaustive system performs better than the gated system for 0 < λ 1H < λ * 1H .The value of λ * 1H can be determined analytically: From this expression we conclude that lim S→∞ λ * 1H = ρ.Although we have studied only the vacation model, the conclusions are also valid for more general settings, like polling models with non-deterministic switch-over times, but the expressions are by far not as appealing.
We visualise the findings of the present section in Figure 1, where we show three plots of the mean waiting time of type 1L customers against λ 1H .The model considered is the same as in the beginning of the present section (two queues, gated service in Q 2 ) except for the switch-over times S 1 and S 2 , which are now deterministic.We compare gated service in Q 1 to mixed gated/exhaustive service for three different switch-over times (notice that the scales of the three plots in Figure 1 Example 2 In the previous example we showed that the mixed gated/exhaustive service discipline does not necessarily have a negative impact on the mean waiting times of low priority customers.In this example we aim at giving a better comparison of the performance of the gated, exhaustive and mixed gated/exhaustive service disciplines in a polling system with priorities.The polling system considered consists of two queues, each having high and low priority customers.The switch-over times S 1 and S 2 are exponentially distributed with mean 10.Service times of all customer types are exponentially distributed with mean 1.The arrival rates of the various customer types are: λ 1H = λ 1L = 1 10 , and λ 2H = λ 2L = 7 20 .The total occupation rate of this polling system is ρ = 9 10 , and we deliberately choose a system where the occupation rates of the two queues are very different, and the switch-over times are relatively high compared to the service times.The reason is that we envision production systems as the main application for the present paper (see also Section 1).In these applications large setup times are very common (see, e.g., [23]).
Table 3 shows the mean and variance of the waiting times of all customer types of this polling system for all combinations of gated, exhaustive and mixed gated/exhaustive service.We leave it up to the reader to pick his favourite combination of service disciplines, but our preference goes out to the system with exhaustive service in Q 1 and mixed gated/exhaustive service in Q 2 because in our opinion the best combination of low mean waiting times and moderate variances is obtained in this system.
Possible extensions and variations
Many extensions or variations of the model discussed in the present paper can be thought of.In this section we discuss some of them.
A globally gated system.The globally gated service discipline has received quite some attention in polling systems.Instead of setting the gates at the beginning of a visit to a certain queue, the globally gated service discipline states that all gates are set at the beginning of a cycle, which is the start of a visit to an arbitrarily chosen queue.The model under consideration can be analysed using similar techniques if high priority customers are served exhaustively, but low priority customers are served according to the globally gated service discipline.One would first have to build a similar model that contains 2N queues and determine the joint queue length distribution at visit beginnings and endings.The cycle time, starting at the moment that all gates are set, can be expressed in terms of the GF of the number of customers at the beginning of that cycle.Waiting times for high priority customers can be obtained using delay-cycles again, and waiting times for low priority customers can be obtained using the Fuhrmann-Cooper decomposition.The LST of the waiting time distribution of low priority customers gets more complicated as the queue gets served later in the cycle.
More than two priority levels.It is possible to analyse a similar model as the one of Section 2, but with more than two, say K i , priority levels in Q i .These K i priority levels still have to be divided into two categories: high priority levels 1, . . ., k i that receive exhaustive service, and low priority levels k i + 1, . . ., K i that receive gated service.The methodology from Section 5 can be used, combined with the techniques that are used to analyse a polling model with multiple priority levels, cf.[2].
A mixture of gated and exhaustive without priorities.One could think of a system where each queue contains two customer classes having respectively the exhaustive and gated service discipline, but service is First-Come-First-Served (FCFS).The model is similar to the model discussed in this paper, with the exception that no "overtaking" takes place.Customers that are served exhaustively will not be served before any "gated customers" standing in front of this gate, but they are allowed to pass the gate.The joint queue length distributions at polling epochs and the cycle times are the same as for the system considered in the present paper.Since no overtaking takes place, the waiting times can be found without the use of delay cycles.Nevertheless, analysis of the waiting times is quite tedious because a visit of a server to Q i consists of three parts.The third part is the service of exhaustive customers behind the gate, the first part is the service of the gated customers that have arrived during the "previous third part" and the second part is the FCFS service of both gated and exhaustive customers that have arrived during the previous intervisit time of Q i .A combination of this non-priority mixture of gated and exhaustive, and the service discipline discussed in the present paper is discussed by Fiems et al. [8].They introduce, albeit in the different setting of a vacation queue modelled in discrete time, a service discipline where high priority customers in front of the gate are served before low priority customers waiting in front of the gate.The difference with the model discussed in the present paper, is that high priority customers entering the queue while it is being visited can pass the gate, but are not allowed to overtake low priority customers standing in front of the gate.
Table 1 :
Numerical results for Example 1.The switch-over times S 1 and S 2 are exponentially distributed with mean 1.The mixed gated/exhaustive service discipline is compared to gated and exhaustive service.
Table 2 :
Numerical results for Example 1. Switch-over times are deterministic: are different).Figure 1: Mean waiting time of type 1L customers in the polling model discussed in Example 1.For gated and mixed gated/exhaustive service E(W 1L ) is plotted against λ 1H while keeping λ 1L + λ 1H constant.The switch-over times S 1 = S 2 = S/2 are deterministic.
Table 3 :
Queue Service discipline E(W iL ) E(W iH ) Var(W iL ) Var(W iH ) Expectation and variance of the waiting times of the polling model discussed in Section 8, Example 2. | 2014-08-01T03:42:45.000Z | 2009-12-01T00:00:00.000 | {
"year": 2014,
"sha1": "8a525521e9c93cfd75bc5af4dd5059b818efc8ca",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11134-009-9115-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "2f6ebf7637e3a86e08d908a5df7bb1fb7f6119e5",
"s2fieldsofstudy": [
"Computer Science",
"Mathematics"
],
"extfieldsofstudy": [
"Computer Science",
"Chemistry",
"Mathematics"
]
} |
114638454 | pes2o/s2orc | v3-fos-license | Best Value Procurement in Construction and its Evolution in the 21 st Century : A Systematic Review
This research attempts to facilitate client needs by describing the priority indicators that could help in decision making for awarding contracts. The indictors are recognized as key variables that impact the subsequent decisions of contracts award. The authors present a hierarchical review of relevant literature and integrate the factors that help in decision making using the Best Value Approach. This framework is comprised of eight dimensions of Best Value contributing factors – cost, risk, performance, quality control, health and safety, project control, current workload and delay claims. These eight dimensions aid the individual clients and organizations in selecting the most suitable contractor. The authors provide a brief understanding of Best Value contract strategy and the basis for the contract award in terms of business choice, managerial capacity and competency. This paper aims to provide a philosophy as to how Best Value decision making could be influenced by the ranking of contributing factors. This work also help in decision making by providing a hierarchical arrangement of the influential factors and the corresponding criteria for Best Value contract award.
Introduction
In the construction sector, project success is defined in a unique way.Project success is defined by meeting design goals, fulfilling user satisfaction, organizational development and developing technological infrastructure of the country.Projects can be undertaken successfully by achieving the milestones in the designated time, conforming to the quality standards and satisfying the cost impact on the end user (A.M. Liu and Walker, 1998).For many decades, the procurement of most of the construction projects has been carried out under traditional low-bid approach.In the traditional process of contractor selection, most of the projects suffer in terms of time and cost due to the subjective bias in clients' selection mechanisms.According to the user agency, the same level of performance could not be achieved due to subjective bias in contractor`s selection process (D.Kashiwagi and R. Byfield, 2002).The complex and risky decision making in low-bid approach results in misunderstanding, reactive contractor behavior, decreased quality of work, and hostile relationships (J.Kashiwagi et al., 2010).Many owners tend to select based on the lowest price in exchange for quality of work.The actual value of a contractor depends both on cost and project specific criteria (PSC).Supreme value can be measured from contractor's credentials, or 'selection criteria', during prequalification and final tender evaluation stages (Wong et al., 2000).Most research focuses on augmenting the long term performance of projects by evaluating the key factors in selection process (Cheng and Li, 2004).In selection process, the insertion of significant elements that meet the explicit needs of the project, confirms that the selected contractor is the most qualified to build the facility.The Best Value (BV) selection method identifies the most qualified contractor based on verifiable past performance metrics instead of more traditional criteria (Abdelrahman et al., 2008).
Clients and their representatives have to deal with bidding processes which are sometimes very arduous and challenging.The traditional low-bid system of contractor selection is often used because it very easy since it does not involve a lot of effort in evaluation of contractor expertise, personnel and performance, thus making documentation easier (D.Kashiwagi and R. E. Byfield, 2002).There is a level of satisfaction with this process on part of various stakeholders like designers, vendors, suppliers, engineers and project managers (Waara and Bröchner, 2006).This process assumes that the contractors will provide good quality regardless of the price.On the other hand, Best Value ensures that the most qualified contractor is selected regardless of the price.Therefore, understanding the Best Value system can greatly benefit both clients and contractors.
Best Value
Best Value (BV) is an efficient and effective approach that minimizes the detailed wasteful communication and information, and creates a "win-win" scenario for both the client and contractor; the highest possible value at a lowest cost, high vendor profit, and minimal project cost and time deviations (D.Kashiwagi et al., 2012).BV examines various factors that need to be considered in procurement processes to enrich the long term performance and significance of the construction (Chan et al., 2004).BV underlines effectiveness, value for money and performance criteria.It focuses on establishment of best practices for public sector organizations such as formulating verifiable standards and develops sufficient contractual arrangement in delivering services to the public (Akintoye et al., 2003).
Concept of BV
The foundation of BV is based on the concept that by using multiple criteria, vendor competition increases and transparency increases thereby making it more difficult for vendors to mislead clients in their proposals.Undeniably, the quality grounds are not the same for each contractor.Therefore, it is preferable for the procurement party to select a vendor with the optimal quality at an accurate price (Herbsman et al., 1995).All the quality standards could not be implemented on a project at lowest cost.Therefore, it is thoughtful to use a cost-time tradeoff approach (Shen et al., 1999).
Best Value Contract Strategy
The BV contract strategy is implemented in various stages.It consists of a competitive selection phase, a clarification phase and finally the execution phase (J.Kashiwagi et al., 2010).Comprehensive comparison of values and prices is done in competitive selection phase.Since this process caters most of the factors mutually, BV is always the "Best Value proposed for the lowest price" and is relative (D.Kashiwagi et al., 2014).After identification of BV, the contractor should ascertain what they are going to do in clarification phase in which the contractor is encouraged to justify his capability, performance and expertise.The detailed proposal (clarification) is then put into contract along with the contractor's price.The contract is finally signed and the contractor is obligatory to deliver the project in execution phase (Savicky et al., 2014).
Comparison of BV and traditional methods
In the traditional low-bid contract system, the bidders do not have any pricing information of other competitors and the bidder who offers the lowest price wins the contract.Consequently, all the bidders tend to lower their bid-price just to win the contract (Yasamis et al., 2002).This lowbid selection method hinders the quality of the product and services because bidders are not inclined to fully understand the needs of the client (D. T. Luu et al., 2005).As a result of contractor's diminutive performance, the whole project might suffer time and cost overruns which provides a gateway for legal issues like arbitration/ litigation (Assaf and Al-Hejji, 2006).
The BV is different from the traditional method in a sense that it utilizes the expertise of industry professionals by minimizing the management and control of vendors.Experts can think in the best interest of the owner, identify the risks associated to the project and able to prescience the consequences of decision making (D.Kashiwagi and R. Byfield, 2002).Since the owner is not the expert, it is the responsibility of the expert vendor to deliver the project assignments and to compete upon the capabilities to identify and resolve the problems with their accompanying prices.Based upon the expertise, the vendor then clarifies in detail the procedures to be adopted to meet the client's expectations (Chan et al., 2002).
Advantages of BV Procurement
The prime advantage of BV is that it identifies expertise as the only factor that can minimize risk of nonperformance and any attempt to manage and control a vendor is inefficient and costly (D.Kashiwagi and R. E. Byfield, 2002).By using performance information, expert vendors show their high performance on similar projects and address the needs and concerns of client (Abdelrahman et al., 2008).BV encourages the vendor to describe and provide accurate solutions to the problem and methodology that a non-expert vendor can identify expert vendor and utilize expertise to lower cost and risk (Kelly et al., 2009).
Disadvantages of Traditional Procurement
Low-bid practices result in poor wages and working condition and low environmental standards, thus declining the quality and sustainability of products and services (Baloi and Price, 2003).Designers, project managers, politicians, and contractors were comfortable with the existing traditional ''low-bid'' process.This process ''assumes'' that all contractors will provide an ''equal'' quality product but most of the clients find the contractor who offers to undertake the project at the lowest price (Flyvbjerg, 2013).The major reason why the low-bid process continues to be used, despite its subjectivity and bias, is because it is easy to document and explain a low bid (D.Kashiwagi et al., 2014).
BV Contributing Factors
BV is not an isolated concept, it has its origins and contributions within the project performance and team related factors.This study suggests that BV is most effective when it is based on key evaluation criteria for contractors.Based upon the study of previous researches, the criteria contribute the project award are:
Cost
Cost is one of the most significant criteria for measuring project success.It is defined as the basis at which the general conditions that are mentioned in contract stimulate the project completion within the expected budget (Bubshait and Almohawis, 1994).It cannot be suggested as the cost that is only constituted in tender sum, but it covers cost which is being utilized in various stages of project leading form inception, designing, and execution to maintenance.Overheads and profits of contractors are also summed up in cost.It can be measured as unit cost or lump sum.In acquisition, price plays a vital role where the requirements are well defined and risks are negligible.On contrary, where requirements are not well defined, non-price criteria may dominate (Watt et al., 2010).The Best Value Source Selection (BVSS) energizes creativeness and improvement from contractors who intended to fulfill the requirements of public projects and augments the flexibility in selecting best proposal (Zhang, 2006).
Risk
Project risk is an ambiguous event whose occurrence negatively impacts the project outcomes such as cost, quality, schedule and scope (Rose, 2013).In measuring risk, identified risks are further ranked both qualitatively and quantitatively.In this way, the risks are highlighted for further analysis.Project risks and their sources can be classified using various approaches.
From the perspective of contractor, project-related risks can be classified that have an impact on project performance in terms of cost (Baloi and Price, 2003).Incentive-based contracts were introduced to overcome the issues that occur in traditional forms of payment.Both client and contractor share the risks and the reward in incentive-based contracts (Floricel and Miller, 2001).
Performance
Past performance of contractor is evaluated prior to its selection.In this process, various attributes such as human resources, machinery and equipment, skill level of project team, optimized resource utilization and number of key personnel are evaluated.In order to improve the overall performance of contractors, they must focus to complete the project in stipulated time, reduce delays and establish good relationships with sub-contractors (Xiao and Proverbs, 2003).Contractor enactment play a dynamic role in success of project since it is the party who has the duty to deliver the project.Augmented contractor performance definitely enhances the user gratification, contractor repute and their effectiveness in the market.Research shows that there is much room for further investigating the contractor performance (Alarcón and Mourgues, 2002) .The contractors who are able to finish by the deadline of project are more viable to bring out future projects (Chan et al., 2002).Therefore, during selection, those contractors who have excellent past performance record should be given preference in contract award (Khosrowshahi, 1999).
Quality Control
The assessment of quality is subjective.In the construction industry, quality is defined as the totality of features required by the products or services to satisfy a given need; fitness for purpose (Arditi and Gunaydin, 1997).Specification is defined as workmanship guidelines provided to contractors by client at commencement of project execution (Boukamp and Akinci, 2007).Corporate-level quality refers to the quality expected from a construction company in addition to the product and/or service quality.Corporate quality culture promotes quality conscious work environment and corporate-level quality in a construction company.It establishes and promotes quality and continuous improvement through values, traditions and procedures (Arditi and Lee, 2003).Contractors achieve client satisfaction by establishing strong quality culture and delivering higher quality services and facilities.Owners expect that the contractors must deliver the highest quality in each aspect.Therefore, it is of importance to owners to encourage the contractors who follow high quality standards (Cox et al., 2003).
Health and Safety
Health and safety is defined as the extent to which the general conditions are implemented on the project without major injuries and accidents on site (Bubshait and Almohawis, 1994).In a rapidly built environment, general reminders to implement safety are very important to avoid fatalities.Additionally, warning signs must be displayed to develop a safe and healthy environment at workplace.These warning signs keep the workers attentive to follow safety rules, enable them to communicate the hazards, provide them the necessary instructions about using personal protective equipment (Toole, 2002).
Project Control
The project monitoring and controlling process should be initited from planning phase which involes appropriate breakdown into smaller components, using performance metrics and anayltical tools, Earned Value Management (EVM) and performance forecasting (Nepal et al., 2006).The procedure of evaluating project cost and perfomance has been significantly analyzed (Rose, 2013).In order to quantify the progress based on WBS and cost accounts, several models have been developed.The researchers are still an awful long way from achieving the lowest possible level of scope breakdown to evaluate progress without messing with data handling (Chan et al., 2001).
Delay Claims
In the construction process, delay claims are considered to be an area of uncertainty and severance (Wood and Ellis, 2005).The cost of disruptions is production related and often problematic to justify.Several issues may arise such as how to alleviate the risks relating estimation, resource utilization, poor workmanship, plant breakdown, deprived quality or impaired material (Shi et al., 2001).In case of potentially problematic aspects of delay claims in a construction project, study reveals that various aspects like pre-contract negotiation, clarity in project scope, and agreement between contractor, owner and project team are likely to lessen the conflict among parties and increase the certainty in achieving project success (Aibinu and Odeyinka, 2006).
Current Workload
Current workload refers to the number and size of projects that a company is carrying out at the moment.It gives the information that whether the resources will be available for a particular project depending upon the workload during construction (Singh and Tiong, 2006).
A company having undertaken few projects at one point in time, then they would have ample capacity of resources to incorporate on multiple projects.In case the company has undertaken many projects then the resources will be distributed, as hence a limited capacity will be available for the projects (Al-Harbi, 2001).
Methodology
A systematic review has been conducted to develop a typology of existing work.Tranfield et al., (2003) stated that systematic review delivers collective discernments through theoretical interfusion of prevailing studies.The traditional approach for qualitative research encompasses the summarized findings which results the accumulation of knowledge as understood through current literature of different fields of knowledge (Ruediger Kaufmann et al., 2012).In contrast to the qualitative approach, management research is wide-ranging and has diverse logic which requires quantitative study of heterogeneous publication from various journals and conferences (Edmondson and McManus, 2007).The methodology for systematic review is rather more flexible and account for different conceptualizations and reasoning of the reviewed studies (Chai et al., 2013).
Based on the previous research regarding contractor selection procedure, a total of 19 factors have been identified.The sources used for searching the literature included "ASCE," "Science Direct," "Taylor & Francis Online," "Cibw117" and "Emerald Insight" etc. Semantic technique and keywords are used in searching process.A total of 62 research publications from different journals of project management, and construction engineering and management published between the years 2000-2015 have been studied.This particular period is selected to focus on the recent trends and examine the attributes that are presently effective in this area of research.The identified factors have been shown in Table 1.
Overview of Best Value Contributing Factors Typologies
No.
Grouping & Analysis
A total of 19 factors have been identified that affect the decision making in selecting the most suitable contractor as shown in Table 1.Upon further studies and investigation of related literature, these factors are grouped into eight main criteria.These criteria are developed by extracting the factors from the previously carried out relevant research and available literature.As a result, the above mentioned factors are referred to as sub-criteria and their grouping has resulted into formulation of main criteria as shown in Table 2.
Yearly appearance of Factors
In the next step, yearly appearance of these factors has been studied in order to observe the temporal progress in the published literature.An attempt has been made to classify these factors on the basis of year of appearance.For inclusion in the table, a factor has to appear at least once every two year.The yearly appearance has been shown in Table 3.
Appearance and Criticality of Factors
After reviewing 62 papers on this subject, the factors show various differing trends.For the sake of understanding and simplicity, the appearances of factors are calculated in the 62 research papers.This shows the frequency of occurrence of each factor in research papers which have been studied in the selected period of publications.The appearance of each factor has been calculated by taking the ratio of its occurrence in research papers to the total number of studied research papers.This not only provides an insight to the latest trends on procurement strategies for the past 15 years but also in finding the number of appearances and further calculating the relative significance or criticality of the identified factors.Their frequency of appearance and their importance are shown in Table 4.The factor "past performance and expertise of company" possesses highest percentage (56.54%).
It indicates that BV procurement strategy has great emphasis on evaluating the contractors on the basis of their past performance.The competency and seriousness of contractor could only be determined by measuring performance of executed projects.The second most important factor is the "quality control measure".It includes the processes adopted by the contractors to determine quality policies and steps that need to be taken to ensure client satisfaction.The factor "health and safety performance" show a significant contribution as it ensures the proper handling and usage of equipment and to facilitate the worker with adequate personal protective equipment (PPE).The other important factor is "proposed tender price".It enables the client to make comparison between the tenders and cost plan to assess the inherent value within different tenders and allowing values for money.
Considering the above data, the criticality of factors enabled us to determine their relative percentages; some factors such as performance, health and safety, and quality control have greater percentages as discussed above.Although factors like risk, cost and project control are showing less deviation comparatively.Figure 1 below illustrates this comparison and shows that both delay claims and current workload have lowest percentages.
Relative Contribution of Each Criteria
Figure 1: Relative Frequency of Criteria.
Figure 1 presents a clear picture of the components that the researchers have devised through the 21 st century in BV literature.Since the execution phase is the most critical of a project and involves many risks, it has been delegated to the contractor who has the responsibility to complete according to the requirements of the owner.Some attributes are pivotal for contractor selection in which the performance is the most imperative.This includes attributes such as "past performance and expertise of company," "number of key personnel," "optimized resource utilization" and "training and skill level of project team".The first one has higher criticality and the last has lower.As a general rule, individual attributes may have varying importance but if any of them is reported to have very high frequency, the averaging effect will result in an importance boost into the overall criterion.
Classification of Criteria on the basis of Journal
In the next step, the frequency of factor appearance in major journals was categorized.It is deduced based on detailed observations that some journals have evaluated many factors while some have only examined one.It is evident in Figure 2 that "International Journal of Project Management" has included all the factors.So it may be considered as the most comprehensive journal that researchers can seek guidance from.Some journals like "Construction Management and Economics," "Benchmarking: An International Journal" and "Journal for the Advancement of Performance Information & Value" constituted six criteria.Furthermore, "Automation in Construction" and "Building and Environment" included only one factor.This shows that they do not share the same level of comparative focus on the BV literature.The classification on the basis of journals have been shown in Figure 2:
Classification of Identified Factors on the basis of sources
In the final step, the sources of articles covering these factors have been identified.Famous libraries of research publications like "ASCE library" and "Science Direct" constituted all the eight factors and most of the papers regarding this field have been downloaded from these sources."Taylor and Francis Online" is on the second rank."Emerald Insight" and "Cibw117" included six factors each.Factors along with their respective sources are given in Table 5:
Factors Identification Chart
There are several factors that influence the success of project enactment which were identified through an in-depth review of articles as mentioned previously.The contractor and subcontractor perform activities in the construction stage.The elements include contractor performance, site supervision, contractor cash flow, overheads, effective cost control system and onsite communication.An attempt has been made to formulate a new structure that includes the criteria affecting the project success is developed.It can be used as basis for further examination on selection criteria for general construction projects and specific projects like roads, buildings, dams, bridges, etc. Therefore to provide more ease in finding the literature about BV, a more systematic way of project success is established.
The published literature has been limited to 21 st century to make it comprehensive and to identify the latest trends regarding the topic.Initially, some work was carried out using BV in which the researchers had identified some factors that would affect the decision making.Since every research is an ongoing flux, it is not viable to only rely upon the factors that have been initially identified.Efforts have been made to find loopholes that affect the long term decision making process.As time progressed, the conditions that were previously reigned in a particular area did not necessarily remain the same in upcoming decision making process, and hence, an inference can be made about the futuristic change in the process.As a result, the maturation of the phenomenon is necessary to be studied.
Considering the literature on BV, the analysis is graphically represented in Figure 3.It shows the crux of this research by indicating the factors which have been identified by the researchers initially.Some factors have been eliminated and new factors have emerged successively, whereas some of them show no change in their appearance over the period of study.Therefore, all of the aforementioned factors should be considered in contractor selection using BV approach.This distribution also shows that some factors like cost, quality control, project control and performance have appeared continuously which shows that, despite an evolution of new factors, they demonstrate equal strength over the time.Their continuous emergence in each year shows the significance of these criteria in decision making process of contractor selection.
It is important to note that publications in 2000-2001 have considered all the factors, excluding delay claims that arise on construction sites, suggesting that most of the criteria were resolved at the early stages of research in BV procurement process.After that, it can be observed that current workload was also not reported in 2002-2003.Risk is a key criterion that a contractor should be capable of mitigating but the content analysis shows that it has not been contemplated from 2004-2007.Ample research has been carried out in risk management but risk in decision making has not been considered in the mentioned years.In a similar way, some factors have been ignored in successive years while some have been reported.
The objectives of the BV tendering process guarantee its competitiveness, transparency, equity, fairness and efficiency.Contractors should be clustered on the basis of their capability to meet project requirements.BV provides an efficient way of clearing out the incompetent contractors by assessing them on the basis of identified criteria.
Additionally, the past performance, which has been rendered as the most significant criteria, needs to be substantiated in the selection process.The previous records of contractors should be kept in a register which can be effectively reused for the upcoming projects as it provides evidence for improving the contractor's performance and maintaining their business propagation.
The key point is that presently all of the identified factors show some importance.Considering the fact that project execution phase is the most difficult among all phases, it is essential to investigate the contractor ability to meet the execution by examining the aforementioned factors.This shows that the construction industry has evolved in terms of the contractor selection processes.The historical development of Best Value contributing factors is shown in Figure 3.
Conclusions & Recommendations
The Best Value Approach for contractor selection focuses primarily on past performance and the level of quality that the contractor has delivered on previous projects.In traditional methodologies, cost is typically the only selection factor.Despite the fact that the selection process in a traditional low bid system is seemingly simpler, it has a lot of issues regarding project delivery, schedule and quality control.Thus it poses serious questions on the project success.Apart from these attributes, research shows that there are some other factors that need to be addressed.This research focuses on the said factors which have been reported in the past few years and through their evolution over time.
The process of contractor selection considering criteria other than low-bid can strengthen the overall success of the project.The current research has presented some paramount practices in this area and also highlighted a well-regulated approach to contractor selection.The aim is to augment the schedule and quality of construction projects while nurturing satisfying and constructive working atmosphere among the parties involved.Such an environment can only be achieved by targeting factors that are mentioned above in contractor selection process.In order to strike a balance in successful project outcomes, criteria like quality control, performance, health and safety must be considered on priority.
The results provide a significant contribution to the body of knowledge regarding contractor selection.Particularly, this research underlines the prominence of typical criteria that is used in contractor selection.The appearance of each criterion and their criticality guides researchers to develop a weighting system during contractor evaluation.In doing so, a win-win situation can be achieved for both the users and tenderers, particularly with respect to risk, performance and quality control.
In recommendation, currently it is observed that all the identified factors are being considered.Some factors like performance, project control, quality control, cost, health and safety appear most frequently in recent publications.In this study, the factor, current workload, which is placed at bottom position, must be contemplated for future studies.If the contractor has undertaken several projects simultaneously then it is cumbersome to monitor and administer all of them equally.As a result, poor quality and performance hinders the project success.Hence during selection, besides performance and quality control, number and size of projects in hand must also be evaluated.
Based upon the analysis of existing literature, it is authenticated that BV procurement strategy is simple to implement and flexible enough to adjust to the project specific and client preferable requirements.These criteria not only discourse the ultimate performance and overall cost of the work but also subsidize to the efficient execution of the work.It is quite cumbersome for the agencies to completely inspect quality into the work.Therefore, such awarding mechanism is needed that state the Value rated elements for decision making.
The industry needs a more robust and flexible decision making model since every construction project is unique in the sense that each project differs in site conditions, associated risks, human resource etc.In most circumstances, where projects suffer many disputes in terms of cost and schedule, it is difficult to identify what the best solution.This ultimately results in disputes and time deviations focused on solving such issues.If all such factors are catered before awarding the contract, such issue could be eliminated which would definitely save time and money and keep the relationship between parties pacified.
Figure 2 :
Figure 2: Appearance of Factors in various Journals.
Figure 3 :
Figure 3: Historical development of Best Value contributing Factors. | 2019-04-15T13:06:28.918Z | 2016-06-01T00:00:00.000 | {
"year": 2016,
"sha1": "a54e04a3c324cef86f5254e42ecea3c5d7fd67e1",
"oa_license": "CCBY",
"oa_url": "http://journal.cibw117.org/index.php/japiv/article/download/44/43",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a54e04a3c324cef86f5254e42ecea3c5d7fd67e1",
"s2fieldsofstudy": [
"Engineering",
"Business"
],
"extfieldsofstudy": [
"Engineering"
]
} |
263834636 | pes2o/s2orc | v3-fos-license | Causal inference for disruption management in urban metro networks
Urban metro systems can provide highly efficient and effective movements of vast passenger volumes in cities, but they are often affected by disruptions, causing delays, crowding, and ultimately a decline in passenger satisfaction and patronage. To manage and mitigate such adverse consequences, metro operators could benefit greatly from a quantitative understanding of the causal impact of disruptions. Such information would allow them to predict future delays, prepare effective recovery plans, and develop real-time information systems for passengers on trip re-routing options. In this paper, we develop a performance evaluation tool for metro operators that can quantify the causal effects of service disruptions on passenger flows, journey times, travel speeds and crowding densities. Our modelling framework is simple to implement, robust to statistical sources of bias, and can be used with high-frequency large-scale smart card data (over 4.85 million daily trips in our case) and train movement data. We recover disruption effects at the points of disruption (e.g. at disrupted stations) as well as spillover effects that propagate throughout the metro network. This allows us to deliver novel insights on the spatio-temporal propagation of delays in densely used urban public transport networks. We find robust empirical evidence that the causal impacts of disruptions adversely affect service quality throughout the network, in ways that would be hard to predict absent a causal model.
Introduction
Commitment to a carbon neutral future presents enormous challenges for patterns of consumption, production, and energy use. Changes in the transport sector will play a key role in achieving net zero emissions, requiring more sustainable movements including a shift in passenger transport away from private car. Sustainable transport choices are better incentivised when alternatives to private car are efficient, reliable, and resilient. Metro systems form an important component of mass public transport in cities, characterised by large capacity and high-frequency services that can deliver vast volumes of passengers to central locations in small windows of time. However, metros experience disruptions frequently due to infrastructure or rolling stock failure, extreme demand shocks, bad weather or natural disasters, leading to a decline in service quality and ultimately in attractiveness and patronage (Zhang et al., 2016). To manage and respond to disruptions, operators' recovery strategies require detailed information about the number of travellers affected, delay times, and the crowding density inside trains and on platforms across the metro network. This information provides the foundation for prediction and management of future interruptions, and for the provision of real-time information that can mitigate the impact of disruptions on passengers.
In the literature, a large number of simulation studies have been conducted to quantify the impacts of hypothetical disruption scenarios. Over the years, these simulation-based research has evolved from solely using complex network theory to encompassing system-performance analysis (Derrible and Kennedy, 2010;Mattsson and Jenelius, 2015;Sun et al., 2015;Sun and Guan, 2016;Malandri et al., 2018;Yap and Cats, 2022). However, the absence of real disruption data and user responses raises doubts about the validity of the assumptions made regarding virtual interruptions and the corresponding passenger behaviour. In contrast, empirical studies have adopted observational data, such as passenger surveys and smart card data, to analyse the effects of disruptions (Rubin et al., 2005;Silva et al., 2015;Zhu et al., 2017;Yap and Cats, 2020;Zhang et al., 2021). Most existing empirical research typically assumes that metro disruptions occur randomly. However, it is crucial to acknowledge that some internal or external factors may have a significant confounding influence on the likelihood of metro failures, as well as the corresponding impact of disruptions, which needs to be considered in the quantification (Melo et al., 2011;Zhang et al., 2021). Moreover, metro stations are interconnected through an infrastructure network, which implies that disruption impacts can propagate from any point to the entire network. This interference problem is challenging to address even using state-of-art intervention evaluation methods.
To address the above gaps, we propose a novel causal inference framework to quantify both the direct and spillover causal effects of metro disruptions on system performances, which is characterised by passenger demand, average travel speed/journey time, and on-board crowding level. Our synthetic control method is unique in the way that it allows interference among metro stations, and the weighted average of unaffected observations (synthetic control) is used to approximate the counterfactual outcomes of disruptions. Specifically, the multi-day high-frequency smart card data (over 4.85 million trips per weekday) enable us to create a control group for each metro station across the disrupted day using data observed on days when disruptions did not happen in the entire metro network. Thus, the proposed approach estimates the network-wide impact of disruptions by relaxing the non-interference assumption and eliminating any potential confounding bias.
We conducted a case study of four urban lines of the Hong Kong Mass Transit Railway (MTR) and illustrated it with a randomly selected disruption on the Island Line. This application indicates that delays and crowding can spread throughout the entire disrupted line and potentially affect other lines within the network. The propagation of spillover effects takes time and the disrupted station may recover earlier than other parts of the network, with impacts on downstream stations lagging behind that on upstream stations. Additionally, we find that interchange stations, those with connections to more than two metro lines, are more resistant to disturbance from disruptions, especially for journey time, travel speed and crowding density. The above unbiased information offers important insights for metro operators, enabling them to improve the quality of real-time information provision and replacement service planning during disruptions. The success of this case study also demonstrates that our causal framework is easy to use, scalable, and can be extended to other complex transportation systems for intervention assessment and operational decisions.
The rest of the paper is organised as follows. Section 2 reviews the literature on quantifying the effects of metro disruptions, with a focus on empirical research. Section 3 presents the proposed synthetic control framework. This section outlines the fundamentals of Rubin's potential outcome framework and the customised synthetic control design to estimate the disruption impact across a metro network by leveraging high-frequency smart card data. In Section 4, we detail the case study on the Hong Kong MTR with an example of network-level disruption impacts propagation. Finally, results and conclusions are discussed in Section 5 and Section 6.
Literature review
There are various ways in which we can model and quantify the impacts of metro disruptions. The literature is rich in simulation studies of various interruption scenarios. Simulations enable the measurement of the impact of disruptions on network accessibility, vulnerability, and resilience. Traditional simulation-based analyses rely on complex network theory, which converts the metro network into a scale-free graph (Derrible and Kennedy, 2010;Mattsson and Jenelius, 2015). By hypothetically removing nodes or links, measures such as location importance, betweenness centrality and global efficiency are derived to reflect the changes in the topological structure of the metro network under disruption Yang et al., 2015;Sun and Guan, 2016;Sun at al., 2018;. More advanced simulation-based research also incorporates metro operations into impact estimation (Mattsson and Jenelius, 2015). Via passenger-route assignment, disruption impacts are often quantified as the variance in ridership distribution, passenger delays, operational costs, or crowding level under different hypothetical scenarios (Sun at al., 2018;Rodrí guez-Núñez and Garcí a-Palomares, 2014;Cats and Jenelius, 2014;Malandri et al., 2018;Adjetey-Bahun et al., 2016;M'cleod et al., 2017;Yap et al., 2022). The simulation approach can imitate a wider range of situations through the experimental settings, from station closures to network crashes Sun and Guan, 2016;Chopra et al., 2016;Zhang et al., 2018: Ye andKim, 2019). However, in general many assumptions have to be made to infer passengers' responses to virtual disruptions. Without observing passengers' behaviour during real incidents, the veracity of the simulation approach is largely questionable (Zhang et al., 2021).
Due to the abovementioned concerns, empirical studies have gained popularity with the availability of a variety of data sources, from user surveys (Rubin et al., 2005;Zhu et al., 2017) to automated smart card data (Silva et al.,2015;Yap and Cats, 2020;Zhang et al., 2021). The latter has emerged as the mainstream data source in recent years due to their advantages in terms of accuracy, cost-effectiveness, and ease of long-term observation (Sun at al., 2013;Kusakabe and Asakura, 2014). For example, applied smart card data to estimate the disruption impact as the difference in passenger assignment outcomes under real incidents and normal conditions. Silva et al. (2015) estimated the loss of demand at entry and exit stations under disruptions from smart card data and the topology of the metro network.
The above-discussed empirical literature typically assumes that metro disruptions occur randomly. However, factors such as the underlying travel patterns, the type of signalling, passenger behaviour, the condition and age of the infrastructure and rolling stock, and weather conditions may have a significant confounding influence on the likelihood of metro failures, as well as the corresponding impact of disruptions, hindering our ability to truly measure cause and effect patterns of disruption (Melo et al., 2011;Wan et al., 2015;Brazil et al., 2017). Zhang et al. (2021) relaxed the random disruption assumption and proposed a propensity score matching method to quantify average causal disruption effects that addresses potential bias from confounding factors. The design of their causal inference method still suffers from the limitation that it cannot capture the spillover effects of disruptions on nearby metro stations (also known as the "interference" phenomenon). Metro stations are connected by an infrastructure network and served sequentially by trains, meaning that disruption impacts can propagate from any point to the entire network. This shortcoming is challenging to address using traditional causal inference methods.
This research contributes to the literature by leveraging large-scale smart card data and historical disruption data to develop a novel statistical method which advances the field in two ways. First, adoption of a causal perspective renders our empirical approach free from confounding bias which would otherwise arise in naï ve statistical analyses due to the non-random nature of disruption occurrence. Second, our causal inference framework enables operators to measure network spillover effects, following the pattern of delay propagation throughout the entire system. To the best of our knowledge, this is the first study that provide empirical evidence of the spillover disruption impacts and their spatio-temporal propagation.
Methodology
To measure the impact of disruptions on a metro system, we use Rubin's potential outcome framework to establish causality (Rubin, 1974). We define metro disruptions as 'treatments' (or interventions) and the objective of our analysis is to quantify the direct and indirect causal effect of treatments on 'outcomes' related to the quality of service provision. Specifically, we are interested in estimating station-level impacts on (i) travel demand, (ii) journey times, (iii) travel speed of passengers, and (iv) crowding density on board. The detailed definition of each outcome measure is provided in the Appendix.
Synthetic control framework
In this research, we define the study unit as the status of a metro station = 1, … , on a given day = 1, … , , during interval = 1, … , . We consider 15-minute-long intervals. The station is classed as treated if it encounters a service interruption of at least five minutes in the 15-minute interval. The treatment assignment variable, denoted by ∈ {0,1}, records whether station has been exposed to disruptions during interval on day . Under the assumption that there are no hidden versions of the treatment (consistency assumption), see (Imbens and Rubin;, we use ( ) to denote the potential outcomes of metro service provision, namely the total inflow and outflow of passengers, the average journey time, average travel speed, and the density of crowding. More specifically, (1) are counterfactual potential outcomes, only one of which is observed.
Causal inference studies commonly make the stable unit treatment value assumption (SUTVA), which requires that the outcome for each unit should be independent of the treatment status of other units (Graham et al., 2014). However, due to interference between stations in the metro network, SUTVA is unlikely to hold. We customize the synthetic control method to estimate causal effects of disruptions in the absence of SUTVA.
To create the synthetic counterfactual outcome, we create a donor pool from data observed on days when disruptions did not happen in the entire metro network: is a set of such undisrupted days with cardinality . This design of the donor pool benefits from the fact that high-frequency smart card data contain observations for all time intervals from multiple days. To quantify the impact of a disruption that starts at station on day at time and ends at time , we construct a vector of outcomes = { , , … , }, where is the two-dimensional vector of outcomes for station during time intervals = , … , on the disrupted day and undisrupted days (i.e., + 1 days). We assume that this disruption has no effect on outcomes before the treatment period . Conversely, after , all stations in the network can be affected by this disruption. Since we stack the data of the treated day followed by undisrupted days, = ( ) for = 1 and = ( ) for = 2, … , + 1, ∈ . Note that = 1 if ≥ and = 0 otherwise.
For a specific time interval of a treated/affected station , the counterfactual outcome is defined as a weighted average of the outcomes in the donor pool, where = ( 2 , … , +1 ) ′ is a × 1 vector of non-negative weights that sum to one (Abadie, 2021). See the next subsection for the way we determine these weights. The synthetic control estimator of the counterfactual outcomes is: while the causal effect of the treatment is estimated by With the definitions above, during and after a given disruption, the direct causal effects on a treated station is derived as where denotes the observed outcome of the treated unit on the disrupted day in interval .
Furthermore, denotes the weight of the ℎ day in the corresponding donor pool for station , and (0) denotes the observed outcomes for the same station-interval pair on the ℎ day.
Similarly, the indirect causal spillover effects of a disruption at station on the performance of other station ( ∈ 1, … , ∖ ) is derived as where (1) denotes the observed outcomes for the affected units of other (non-disrupted) stations during and after a given disruption; and (0) denote the weight and outcomes of the ℎ day in the corresponding donor pool for station . Figure 1 illustrates the design of the synthetic control framework for metro disruptions.
The choice of weights
A simple way of constructing synthetic counterfactuals is to assign equal weights = 1/ to each unit in the donor pool. The estimator for is then where the synthetic control is the unweighted average of observed historic outcomes in the donor pool.
In this research, we apply the method proposed by Abadie and Gardeazabal (2003) and Abadie et al. (2010) Predictors are selected such that they are unaffected by the treatment (service interruption), but they do influence the outcomes, which may include pre-interruption values of .
Weights are optimised to ensure that the resulting synthetic control units best resemble all relevant characteristics (predictors) of the treated unit before the disruption. That is, given a set of non-negative constants = ( 1 , … , ) , the optimal synthetic control weight vector * = ( 2 * , … , +1 * ) ′ is obtained from the following minimisation problem: where the positive constants 1 , … , reflect the relative importance of the predictors on the outcomes. Each potential choice of produces a corresponding set of synthetic control weights ( ) = ( 2 ( ), … , +1 ( )) ′ . We choose , such that ( ) minimises the mean squared prediction error (MSPE) of this synthetic control with respect to outcome before the disruption: where the synthetic control weights 2 ( ), … , +1 ( ) are functions of , for a pre-intervention To determine the optimal values of and , we follow Abadie (2021) and the steps below.
i). Divide the pre-intervention periods (before disruption occurs) into an initial training period ( = 1, … , 0 ) and a subsequent validation period ( = 0 + 1, … , − 1 ). The lengths of the training and validation periods can be application specific. ii). With training period data on the predictors, compute the synthetic control weights under given constants ̃( ) by solving the optimisation problem in Equation [7]. iii). Using data from the validation period, minimise the MSPE in Equation [8] with respect to . iv). With the validation period data on the predictors, use the resulting * to calculate optimal weights ̃ * =̃( * ), according to Equation [7].
The Case study
Our case study application is based on large-scale automated data from four urban lines of Hong Kong MTR, the Island Line, Tsuen Wan Line, Kwun Tong Line and Tseung Kwan O Line, with 49 stations in total. A map of the partial network that we study is provided in Figure 2. The daily service hours of the four lines start at 6:00 and end at 24:00, which is then divided into 72 intervals of 15 minutes. The following data are used to estimate the direct and spillover causal effects of disruptions.
Pseudonymised smart card data: The Hong Kong MTR provided smart card data from 01/01/2019 to 31/03/2019. The dataset contains information on the time and location of tap-in and tap-out transactions throughout the system, recording individual trips. Based on the data, we compute aggregate passenger flows at station entries and exits, passenger's average journey time, the average travel speed (Zhang et al., 2021), and crowding density for each target station. The resolution of time stamps exacts to one second.
Automated vehicle location (AVL) data and incidents logs:
The MTR provided AVL data and incident information data during the same study period, which are used to generate historical disruption logs . The AVL data contain information on train ID, service ID, the timestamp of train movements (including precise departure and arrival times), and the location of train movements (including station, line and directions). The resolution of time stamps exacts to one second. Incident logs are manual inspection record of incidents, including information on the time and location, cause and duration of disruptions. Readers are referred to Appendix for more details on our disruption data.
Weather data: We collect data on outside temperature, wind speed and precipitation status from the web portal Weather Underground of Hong Kong. Based on hourly historical observations, we estimate weather conditions for all selected stations at 15-minute intervals.
Mega events in Hong Kong: From 01/2019 to 03/2019, we collect information, including the location and time, on three types of mega-events held in Hong Kong: concerts, sports matches and exhibitions. Data sources include official news and government records. 1 Figure 2. The map of four urban lines that we study in the MTR network (highlighted in colour).
Results
Our study period runs from 1/1/2019 to 31/3/2019, and our analysis of data from Hong Kong MTR covers 54 weekdays, excluding holidays and a day when data from few lines are missing. The results of the case study are presented through a randomly selected disruption which occurred during the evening peak hours at Chai Wan station, the Eastern terminus of the Island Line, and lasted for 27 minutes.
Synthetic control design
The time of a service day is divided into 72 intervals of 15 minutes each, and the metro station in each 15-minute interval (station-interval) is our study unit. On Monday 11/3/2019, the selected disruption occurred at 17:41 and ended at 18:08. Thus, Chai Wan station was interrupted (treated) during this period (time interval: 47 to 48), 2 while the other 48 stations on the four urban lines were still functioning normally. Note that within the entire network, no other disruption occurred on the same day. 3 Under the proposed framework, a treated station-interval is compared to a synthetic control unit that we generate from the "donor pool", that is, historic observations from the same station in the same time interval, but on different days. In this study, 13 weekdays with no disruption are used to construct the donor pool.
We use the untreated and unaffected units from the donor pool to construct a "synthetic" control unit, of which the characteristics approximate that of the treated unit. The counterfactual outcome of the treatment is estimated by the untreated outcome of the synthetic control unit. We create a synthetic control unit by weighting historic observations of the same station-interval pair from undisrupted days. The weights are set to maximise the synthetic control's ability to replicate observed exogenous characteristics (predictors) and metro service outcomes in the immediate pre-intervention time intervals at the treated station.
To account for the non-randomness of disruption occurrence, we consider partial confounding factors of metro disruptions when selecting predictors. These are summarised in Table 1. We divide 46 preintervention intervals into a training period (first 23 intervals) and a subsequent validation period (last 23 intervals), and construct the synthetic control in a three-stage iterative process (Abadie and Gardeazabal, 2003;Abadie et al., 2010). First, we optimise the set weights in the donor pool (as functions of given positive constants) to predict a combination of the observed predictors in the training period. Second, we optimise the choice of the given constants to minimise the mean squared prediction error of metro service outcomes in the validation period. Finally, with the validation period data we calculate the optimal synthetic control weights based on the constants obtained in the previous stage. Both optimisation problems are formally specified in section Methodology.
For the disrupted station, --0.022 0.006 -* "-" represents the weight is less than 1e-04. Figure 3 benchmarks the predictive power of (i) our synthetic control design against two naive approaches: (ii) taking the unweighted average of the historic observations in the donor pool and (iii) using the time-invariant average of pre-disruption observations (i.e., before-after comparison). We compare all three estimates to the observed data of the disrupted station. This figure shows that our weighted synthetic control can closely approximate the temporal pattern of each transport service outcome before the disruption occurrence, while the unweighted average sometimes fails. The naive before-and-after comparison cannot capture the changes in the pre-intervention time series of the outcome variables, demonstrating the need for causal inference methods to identify the true spatiotemporal effect of disruption. By comparing the post-disruption patterns of the outcomes in the data to their synthetic counterfactuals, we observe the direct causal effect of the disruption at Chai Wan station. Panel (b) of Figure 3 shows that the treatment reduced the passenger flow leaving the station by around 50% during the peak of the service interruption. There is only a small impact on the entry ridership, with a decrease of just 5% during the disruption; see panel (a). This suggests that passengers at Chai Wan station had few alternative routes to avoid the delay. With regards to the average journey time, this disruption delayed the average passenger by over 11 minutes (S.E. 0.008) per trip. Therefore, the mean travel speed also experienced a significant drop by up to 9 km/h (S.E. 0.004). The density of standing passengers inside trains grew from 0 to 0.635 person per square metre due to the accumulation of travellers while train movements were interrupted. Finally, with the resumption of train services, the impacts on exit ridership, average journey time, average speed and crowding density reached a turning point and gradually converged to the undisrupted counterfactual curve.
Synthetic control performance
For the disrupted station, Table 3 reports the mean values of the average travel speed predictors before the disruption. Its columns represent (i) the data, that is, ̅ observed on 11/3/2019, (ii) synthetic control counterfactuals ̅ * derived with the method above, (iii) the unweighted average ̅ of observations in the donor pool, and (iv) a single unit ̅ from the donor pool (observed on 11/2/2019). The results in Table 3 illustrate that the weighted synthetic control in column 2 provides a rather accurate approximation of the values of the predictors in column 1. By contrast, the unweighted average and the single control unit both lose accuracy in the reproduction of predictors prior to the disruption. We also validate the prediction of the pre-intervention outcomes as shown in Table 4. For all metro performance measures, the weighted synthetic control outperforms the other two methods, which is in line with prior expectation based on the literature.
Spillover disruption effects and propagation
In the same manner as implementing the proposed framework for the disrupted station, we obtain the weights and synthetic control estimations for other non-disrupted stations in the metro network. Adopting the same causal approach for links that did not directly receive the disruption, allows us to recover the pattern of treatment interference in the network. We use average travel speed and crowding density as examples to illustrate how the impacts of this disruption spread to the other 48 stations spatially and temporally. Figure 4(a) shows that the second station has been severely affected, and the impacts spread from the third to the seventh station. Then, during the second 15-minute interval, as shown in Figure 4(b), the disruption impacts continue to propagate along the Island Line until the tenth station, with the first four stations all in severe delay. More interestingly, stations along the connecting Tseung Kwan O Line in the Northeast are also affected, so we find evidence of a disruption spillover between lines. The original disruption at the terminus of the Island Line disruption ends at 18:15 and train services are restored. As a consequence, in Figure 4(c) we find that disruption effects weaken at the first four stations to a moderate level. However, the remaining downstream stations of the entire Island Line, as well as the Southern part of the Tsuen Wan line, remain affected. In Figure 4(d), another 30 minutes later, the average travel speed recovers entirely around the original location of the disruption. But the delays remain very serious along the central and Western sections of the Island Line. Finally, in Figure 4(e), one hour after the disruption, the average travel speed returns to normal at most stations. See the temporal evolution of average speed plotted separately for each station in Appendix Figure A1. Figure 5 shows how the impact on crowding spreads over time and space at each station along the Island Line. During evening peak hours, the crowding density of most stations increases right after the disruption occurred and then gradually decreases to zero within 3 hours, with considerable fluctuations. For some busy inner-city stations, the standing density reaches over 6 passengers per square metre due to the disruption, causing significant discomfort for passengers (Haywood et al., 2017;Bansal et al., 2022), even though the original disruption occurred at a remote part of the network.
Conclusions
Urban metros play an important role in the shift towards more sustainable transport and net-zero carbon emissions. However, service disruptions pose various challenges for metro systems, including delays, crowding, and a decline in passenger satisfaction and patronage. Objective measurement of disruption impacts is necessary for metro operators to understand the severity of disruptions, assess the performance of their services, and manage and mitigate future metro disruptions. Such accurate and unbiased information about the spatiotemporal effect of disruptions across the metro network is a key resource in the provision of efficient, reliable, and resilient metro services.
We propose a causal inference framework to quantify the direct and indirect (spillover) effects of disruptions on passenger demand, average journey time, average travel speed and crowding density on board. Our approach is novel in relation to the existing literature for two reasons. First, we identify the causal impact of disruptions net of confounding influences that may affect both the occurrence and impact of disruptions. Second, we extend our causal analysis to the entire network, thus exploring the propagation of disruption impacts beyond the station where the failure occurs. The proposed synthetic control framework directly addresses interference effects, i.e., that transport service outcomes at station A affect the operating status of station B, with the aid of multi-day high-frequency smart card data. Thus, disruption impacts that spread throughout the entire network can be captured. To the best of our knowledge, this is the first study that estimates indirect disruption impacts and analyses the propagation of such effects, creating unbiased network-level empirical evidence.
The proposed method is applied in a case study of four urban lines in the Hong Kong MTR and an arbitrarily selected disruption on the Island Line. Based on the comparison of observed undisrupted metro service outcomes and counterfactual controls constructed from historic observations of a donor pool, convincing evidence has been found to support the prediction accuracy and validity of the synthetic control design.
An illustrative application of the method revealed practical insights on the process of disruption propagation in the metro network. We showed that delays may spread along the entire metro line, and even connect to other lines within the network. The propagation of spillover effects takes time, with impacts on downstream stations lagging behind those on upstream stations. Service levels may recover earlier at the disrupted station than elsewhere in the network. The unbiased measurement of this spatial and temporal lag provides important information for metro operators and could help them improve the quality of information provision and replacement service planning during disruptions.
Another interesting finding is that interchange stations with more than two metro lines are more resistant to disturbance from disruptions, especially for journey time, travel speed and crowding density. A possible explanation is that passengers at interchange stations have access to alternative routes to continue their trips, thus reducing the probability of delays and trip cancellation. Our practical lesson is that metro operators should devote increased attention to disruption mitigation at less connected stations.
In a practical setting, the research developed in this paper can be used to improve disruption management in urban mass transit systems, hopefully rendering them more resilient to unpredictable events and thus more attractive as a sustainable mode of travel for passengers. | 2023-10-12T09:10:47.101Z | 2023-10-11T00:00:00.000 | {
"year": 2023,
"sha1": "c6cff2740eceb3271899ca006c307edccbaa91c5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "c6cff2740eceb3271899ca006c307edccbaa91c5",
"s2fieldsofstudy": [
"Engineering",
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
119286326 | pes2o/s2orc | v3-fos-license | Discovery of Gas Bulk Motion in the Galaxy Cluster Abell 2256 with Suzaku
The results from Suzaku observations of the galaxy cluster Abell2256 are presented. This cluster is a prototypical and well-studied merging system, exhibiting substructures both in the X-ray surface brightness and in the radial velocity distribution of member galaxies. There are main and sub components separating by 3'.5 in the sky and by about 2000 km s$^{-1}$ in radial velocity peaks of member galaxies. In order to measure Doppler shifts of iron K-shell lines from the two gas components by the Suzaku XIS, the energy scale of the instrument was evaluated carefully and found to be calibrated well. A significant shift of the radial velocity of the sub component gas with respect to that of the main cluster was detected. All three XIS sensors show the shift independently and consistently among the three. The difference is found to be 1500 $\pm 300$ (statistical) $\pm 300$ (systematic) km s$^{-1}$. The X-ray determined absolute redshifts of and hence the difference between the main and sub components are consistent with those of member galaxies in optical. The observation indicates robustly that the X-ray emitting gas is moving together with galaxies as a substructure within the cluster. These results along with other X-ray observations of gas bulk motions in merging clusters are discussed.
Introduction
Galaxy clusters are the largest and youngest gravitationally bound system among the hierarchical structures in the universe. Dynamical studies of cluster galaxies have revealed that some systems are still forming and unrelaxed. X-ray observation of the intracluster medium (ICM) provided further evidence for mergers through spatial variations of the gas properties. Remarkably, sharp X-ray images obtained by Chandra have revealed shocks (e.g. Markevitch et al. 2002) and density discontinuities (or "cold fronts"; e.g. Vikhlinin et al. 2001). These are interpreted as various stages of on-going or advanced mergers and suggest supersonic (for the shock) or transonic (cold front) gas motions. Cluster mergers involve a large amount of energies and hence influence numerous kinds of observations. In particular, possible effects of gas bulk motions on the X-ray mass estimates have been investigated extensively mostly based on numerical simulations (e.g. Evrard et al. 1996, Nagai et al. 2007). This is mainly because that the cluster mass distribution is one of the most powerful tools for the precise cosmology. Furthermore, cluster mergers heat the gas, develop gas turbulence, and accelerate particles, which in turn generate diffuse radio and X-ray halos.
To understand physics of cluster mergers, gas dynamics in the system should be studied. The gas bulk motion can be measured most directly using the Doppler shift of X-ray line emission. These measurements are still challenging because of the limited energy resolutions of current X-ray instruments. Dupke, Bregman and their colleagues searched for bulk motions using ASCA in nearby bright clusters. They claimed detections of large velocity gradients, such as that consistent with a circular velocity of 4100 +2200 −3100 km s −1 (90% confidence) in the Perseus cluster (Dupke & Bregman 2001a) and that of 1600 ± 320 km s −1 in the Centaurus cluster 1 (Dupke & Bregman 2001b). These rotations imply a large amount of kinetic energy comparable to the ICM thermal one. Note that they used the ASCA instruments (GIS and SIS) which have gain accuracies of about 1% (or 3000 km s −1 ). Dupke & Bregman (2006) used also Chandra data and claimed a confirmation of the motion in the Centaurus cluster. These important results, however, have not yet been confirmed by other groups. For example, Ezawa et al. (2001) used the same GIS data of the Perseus cluster and concluded no significant velocity gradient. In addition, Ota et al. (2007) found that the Suzaku results of the Centaurus cluster are difficult to reconcile with claims in Dupke & Bregman (2001b) and Dupke & Bregman (2006). In short, previous results by Dupke and Bregman suggest bulk motions in some clusters but with large uncertainties.
Currently the Suzaku XIS (Koyama et al. 2007a) would be the best X-ray spectrometer for the bulk motion search, because of its good sensitivity and calibration (Ozawa et al. 2009). In fact, Suzaku XIS data were already used for this search in representative clusters. Tight upper limits on velocity variations are reported from the Centaurus cluster (1400 km s −1 ; 90% confidence; Ota et al. 2007) and A 2319 (2000 km s −1 ; 90% confidence; Sugawara et al. 2009) among others.
In order to improve the accuracy of the velocity determination and to search for gas bulk motions we analyzed Suzaku XIS spectra of the Abell 2256 cluster of galaxies (A 2256, redshift of 0.058). This X-ray bright cluster is one of the first systems showing substructures not only in the X-ray surface brightness but also in the galaxy velocity distribution (Briel et al. 1991). In the cluster central region, there are two systems separating by 3. ′ 5 in the sky. Motivated by this double-peaked structure in their ROSAT image, Briel et al. (1991) integrated the velocity distribution of galaxies from Fabricant et al. (1989) over the cluster, fitted it to two Gaussians, and found the two separated peaks in 1 The error confidence range is not explicitly mentioned in the reference.
2 the velocity distribution. The two structures are separated by ∼ 2000 km s −1 in radial velocity peaks of member galaxies, as given by table 1. Berrington et al. (2002) added new velocity data to the Fabricant et al. (1989) sample, used 277 member galaxies in total, and confirmed the two systems along with an additional third component (table 1). This unique finding motivated subsequent observations in multiple wavelengths. For example, radio observations revealed a diffuse halo, relics, and tailed radio emission from member galaxies (e.g. Rottgering et al. 1994). The Chandra observation by Sun et al. (2002) revealed detailed gas structures in and around the main and second peaks. Furthermore, there are some attempts to reproduce merger history of A 2256 using numerical simulations (e.g. Roettiger et al. 1995). Thus, A 2256 is a prototypical and well-studied merging system and hence suitable to study the gas dynamics. We have carefully evaluated instrumental energy scales of the Suzaku XIS, used iron K-shell line emission, and found a radial velocity shift of the second gas component with respect to the main cluster. The X-ray determined redshifts are consistent with those of galaxy components. This is the most robust detection of a gas bulk motion in a cluster.
Observations
Suzaku observations of A 2256 were performed on 2006 November 10-13 (PI: K. Hayashida). The XIS was in the normal window and the spaced-row charge injection off modes. The observation log is shown in table 2. Figure 1 shows an X-ray image of the cluster. Detailed descriptions of the Suzaku observatory, the XIS instrument, and the X-ray telescope are found in , Koyama et al. (2007a), Serlemitsos et al. (2007) To verify the XIS gain calibration, we have also used data from the Perseus cluster observed in 2006 with the same XIS modes as those for A 2256 (table 2). These data were already used for the XIS calibration (e.g. Ozawa et al. 2009) and scientific analyses (Tamura et al. 2009;Nishino et al. 2010).
Data Reduction
We used version 2.1 processing data along with the HEASOFT version 6.9. In addition to the XIS standard event selection criteria, we have screened data by adding the following conditions; geomagnetic cut-off-rigidity > 6 GV, the elevation angle above the sunlit earth >20 • and the dark earth >5 • . We used the latest calibration file as of July 2010. Using these files, we corrected the XIS energy scale. The data from three CCDs (XIS 0, XIS 1, and XIS 3) are used.
We examined the light curve excluding the central bright region events (R < 6 ′ ) for stablebackground periods. There was no flaring event in the data. The instrumental (non-X-ray) background was estimated using the night earth observation database and the software xisnxbgen (Tawa et al. 2008).
We prepared X-ray telescope and CCD response functions for each spectrum using software xissimarfgen (Ishisaki et al. 2007) and xisrmfgen (Ishisaki et al. 2007), respectively. The energy bin size is 2 eV.
To describe the thermal emission from a collisional ionization equilibrium (CIE) plasma, we use the APEC model (Smith & Brickhouse 2001) with the solar metal abundances taken from Anders & Grevesse (1989).
Energy Scale Calibration
The main purpose of this paper is to measure Doppler shifts of K-shell iron lines in X-ray. In these analyses, the energy-scale calibration is crucial. In § 3.2.1, we summarise the calibration status. In § 3.2.2, we attempt to confirm the calibration using calibration-source data collected during the A 2256 observation. The primal goal here is to find differences of line energies observed in different positions within the field of view. Therefore the positional dependence of the energy scale is most important, which is evaluated in § 3.2.3. Here we focus on the data obtained in spaced-row charge injection off mode and around the K-shell iron lines. Considering all available information given here as summarized in table 3, we assume that the systematic uncertainty of the energy scale around the iron lines is most likely 0.1% and 0.2% at most over the central 14 ′ .7 × 14 ′ .7 region or among the three CCDs. Koyama et al. (2007b) estimated the systematic uncertainty of the absolute energy in the iron band to be within +0.1, −0.05%, based on the observed lines from the Galactic center along with Mn Kα and Kβ lines (at 5895 eV and 6490 eV respectively) from the built-in calibration source ( 55 Fe). Independently, Ota et al. (2007) investigated the XIS data of two bright and extended sources (the A 1060 and Perseus clusters) and evaluated the positional energy scale calibration in detail. They estimated the systematic error of the spatial gain non-uniformity to be ±0.13%. Furthermore, Ozawa et al. (2009) examined systematically the XIS data obtained from the start of operation in July 2005 until December 2006. They reported that position dependence of the energy scale are well corrected for the charge-transfer inefficiency and that the time-averaged uncertainty of the absolute energy is ± 0.1%. In addition, the gradual change of the energy resolution is also calibrated; the typical uncertainty of resolutions is 10-20 eV in full width half maximum.
Absolute Scale
We extracted spectra of calibration sources which illuminate two corners of each CCD (segment A and D). These spectra in the energy range of 5.3-7.0 keV are fitted with two Gaussian lines for the Mn Kα and Kβ along with a bremsstrahlung continuum component. Here we fixed the energy ratio between the two lines to the expected one. Thus obtained energy centroids of the Mn Kα line from the two corners of three CCDs give an average of 5904 eV (as compared with the expected value of 5895 eV) and a standard deviation (scatter among the six centroids) of 6 eV. The statistical errors of the line center were about 1-2 eV. This confirms that the absolute energy scale averaged over all CCDs and the relative gain among CCD segments are within ±0.15% and ±0.10%, respectively. Simultaneously we found that the data can be fitted with no intrinsic line width for the Gaussian components, meaning that the energy resolution is also well calibrated.
Spatial Variation
We used the XIS data of the Perseus cluster, which would provide the highest-quality XIS line spectra over the whole CCD field of view among all the Suzaku observations. Here we assume no line shift intrinsic to the cluster within the observed region. There were two exposures of the Perseus in the normal window and the spaced-row charge injection off modes (table 2) in periods close to the A 2256 observation. Note that Ota et al. (2007) used the same data, but with early calibration (i.e. version 0.7 data). Here we re-examine the accuracy with latest and improved calibration.
Firstly, following Ota et al. (2007), we divided the XIS field of view into 8 × 8 cells of size 2 ′ .1 × 2 ′ .1. Each spectra in the 6.2-7.0 keV band is fitted with two Gaussian lines for He-like Kα (∼ 6700 eV) and H-like Kα (∼ 6966 eV) and a bremsstrahlung continuum model. Here we fixed the ratio of the central energies between the two lines to a model value of 1.040 and let the energy of the first line be a fitting parameter. Because of the low statistics data in CCD peripheral regions we focus on the central 7 × 7 cells (i.e., 14 ′ .7 × 14 ′ .7). The typical statistical errors of the line energy are from ±4 eV to ±25 eV. Thus derived central energies from 7 × 7 cells for each CCD are used to derived an average and a standard deviation. The average values are 6575 eV, 6575 eV, and 6569 eV, for XIS 0, XIS 1 and XIS 3, respectively, which are consistent with the cluster redshift of 0.017-0.018. The standard deviations among 7 × 7 cells are 7 eV, 13 eV, and 10 eV for the three XISs, respectively. There is no cell having a significant deviation from the average value at more than 2σ level. These deviations include not only systematics due to the instrumental gain uncertainty but also statistics and systematics intrinsic to the Perseus emission. Therefore the energy scale uncertainty should be smaller than this range of deviations (0.1-0.2%).
Secondly, we focus on the CCD regions specific in our analysis below. We have extracted the Perseus spectra from the same detector regions (main and sub) as used in the A 2256 analysis. The definition of the regions are given in figure 2 and § 3.4. These spectra were fitted with the same model as above and used to derive the redshift (from the line centroid). We found no difference in redshift between the two regions. The redshift differences (between the two regions from the three sensors) range from −9 × 10 −4 to +6 × 10 −4 with an average of 1.5 × 10 −5 . Accordingly we estimate the possible instrumental gain shift to be within ±0.1% with no systematic bias between the two regions.
Energy sorted X-ray Images
In order to examine the spectral variation over the A 2256 central region, we extracted two images in different energy bands including He-like and H-like iron line emission as shown in figure 2. There appear at least two emission components which corresponds to the main cluster at east and the sub component at west discovered in Briel et al. (1991). We noticed a clear difference in the distribution between the two images. In the He-like iron image, the sub component exhibits brightness comparable to that of the main component. On the other hand, in the H-like iron image, the sub has lower brightness. This clearly indicates that the sub has cooler emission compared with the main. Based on these contrasting spatial distributions, we define centers of the two components to be (256. • 1208, 78. • 6431) and (255. • 7958, 78. • 6611), in equatorial J2000.0 coordinates (RA, Dec), respectively, as shown in figure 2.
Separate Spectral Fitting
Our goal is to constrain the velocity shift of the sub component with respect to the main cluster. Then, we extracted sets of spectra from the two components and fitted with different redshifts. More specifically, for the main component emission, we integrated the data within 4 ′ in radius from the main center but excluding a sub component region with a radius of 2 ′ (figure2). For the sub region, data within 1 ′ .5 in radius from the sub center are extracted.
We use the energy range of the 5.5-7.3 keV around the iron line complex. In this energy band the cosmic background fluxes are below a few % of the source count in the two regions. Therefore we ignore this background contribution. We use a CIE component to model the spectra. Free parameters are an iron line redshift, a temperature, an iron abundance, and normalization. Models for the three XISs are assumed to have different redshifts and normalization to compensate inter-CCD calibration uncertainties. Other parameters, temperature and abundance, are fixed to be common among the CCDs. As shown in figure 3 and table 4, these models describe the data well (FIT-1). The fits give different redshifts between the two regions from all three XISs as shown in figure 4(a). The statistical errors of the absolute redshift are less than ±0.002. The redshift differences are 0.0048, 0.0041, and 0.0076 for the three XIS. These are equivalent to shifts of 30-50 eV in energy or 1200-2300 km s −1 in radial velocity.
The spectral shape of the two regions are different as shown in figure 3. Therefore, the obtained redshift difference may depend on the spectral modeling. To check this possibility, instead of the CIE model, we use two Gaussian lines and a bremsstrahlung continuum component (FIT-2). Here we assume the He-like iron resonance line centered on 6682 eV and the H-like iron Lyα centered on 6965 eV for the two lines and determine the redshift common to these two lines. The Gaussian components are set to have no intrinsic width. As given in table 4, these models give slightly better fits to the data in general than the first model . For each XIS, we obtained redshifts and therefore a redshift difference as same as those from the first fit within the statistical uncertainty. Therefore the redshift depends insignificantly on the spectral modeling.
Strictly speaking line centroids of the iron line transitions could change depending on the emission temperature. In the case of the observed region of A 2256, where the temperature varies within 4-8 keV (e.g. Sun et al. 2002), the strong iron line structure is dominated by the He-like triplet. Within this temperature range, the emission centroid of the triplet stays within 6682-6684 eV based on the APEC model. This possible shift (< 2 eV) is well below the obtained redshift difference (30-50 eV) and should not be the main origin of the difference.
Gain-corrected Spectral Fitting
The obtained redshifts are systematically different among the three XISs [ figure 4(a)]. We attempt to correct this inter-CCD gain difference based on the calibration source data. Following Fujita et al. (2008), we estimate a gain correction factor, f gain , by dividing the obtained energy of the Mn K line from the calibration source by the expected one. In the A 2256 data as given in section 3.2, f gain are 1.0018 (XIS 0), 1.0000 (XIS 1), and 1.0016 (XIS 3). This factor, the redshift obtained from the fit (z fit ), and the corrected redshift (z cor ) have a relation We use this correction and fit the spectra with a single redshift common to the three XISs to the two-Gaussian model (FIT-3). These models give slightly poorer fits compared with the previous ones, because of a decrease of the degree of freedom (table 4). Nevertheless, the fits are still acceptable. The difference in the redshift between the two region is 0.005 ± 0.0008 (or 1500 ± 240 km s −1 ). In figure 4(b) we show the statistical χ 2 distribution as a function of the redshift. This indicates clearly that the data cannot be described by the same redshift for the main and sub regions. We found that redshifts determined here by X-ray are consistent with those of member galaxies in optical (table 1) as explained below. The X-ray redshift of the main component, z=0.0059 or 17700 km s −1 , is the same as the galaxy redshift within the statistical error. That of the sub component, z=0.00540 or 16200 km s −1 , is larger than the galaxy value of 15730 km s −1 . Yet the difference, 470 km s −1 , is within the combined errors from X-ray statistical (150 km s −1 ), systematic (300km s −1 ), and optical fitting (160 km s −1 ). Besides this, the obtained specta of the sub component could be contaminated from the emission of the main component due to the projection and the telescope point spread function (half power diameter of about 2 ′ ). This contamination could make the obtained redshift of the sub larger than the true one (or closer to that of the main).
To visualize the redshift shift in the sub region more directly, we show spectral fittings by combining the two front-illuminated XIS (0 and 3) data in figure 5. We fitted the spectra with the two Gaussian model as given above. In this figure we compare the fitting results between the bestfit model (z = 0.051) and one with the redshift fixed to the main cluster value (z = 0.058). This comparison shows that we can reject that the sub region has the same redshift as the main cluster.
Summary of the Result
We found gas bulk motion of the second component in A 2256. The difference in the redshifts and hence radial velocities between the main and sub systems is found to be 1500 ±300 ( Energy (keV) Fig. 3. Cluster spectra along with the best-fit CIE model in histogram (FIT-1). Plots (a) and (b) are from the main and sub regions, respectively. The XIS 0, XIS 1, and XIS 3 data are shown by black, red, and green colors, respectively. In lower panels fit residuals in terms of the data to model ratio are shown. ±300 (systematic) km s −1 ( § 3.4). This observed shift corresponds to only 0.5% in the energy, but is well beyond the accuracy of the energy scale reported by the instrument team (Koyama et al. 2007b;Ozawa et al. 2009) and that by Ota et al. (2007). Focusing on the present analysis of A 2256, we also have examined the calibration systematics independently and confirmed the accuracy given above. The obtained redshifts and hence the difference between the two X-ray emitting gas components are consistent with those of the radial velocity distribution in member galaxies. This consistency uniquely strengthens the reliability of our X-ray measurement. This is the fist detection of gas bulk motion not only in A 2256 but also from Suzaku , which presumably has the best X-ray spectrometer in operation for the velocity measurement. As given in § 1, Dupke and Bregman (2001a, 2001b previously claimed detections of bulk motions in the Perseus and Centaurus clusters. Compared with these and other attempts, our measurement is more accurate and robust. This improvement is not only due to the better sensitivity and calibration of the XIS but also due to the well-separated and X-ray bright nature of the structure in A 2256. Radial velocity distributions of cluster galaxies were used to evaluate dynamics of cluster mergers. These studies, however, have limited capability. First, largely because of the finite number of galaxies and the projection effect, a galaxy sub group within a main cluster is not straightforward to identify. Second, the major baryon in the cluster is not galaxies but the gas in most systems. Therefore dynamical energy in the hot gas cannot be ignored. Third, galaxies and gas are not necessary to move together especially during merging phases, because of the different nature of the two (collisionless galaxies and collisional gas). In practice, some clusters under the violent merging phase such as 1E 0657-558 (Clowe et al. 2006) and A 754 (Markevitch et al. 2003) show a spatial separation between the galaxy and gas distributions. Therefore, it is important to measure dynamics of galaxies and gas simultaneously. The present result is one of the first such attempts.
Dynamics of A 2256
The determined radial velocity, v r ∼ 1500 km s −1 , gives an lower limit of a three dimensional true velocity, v = v r / sin α, where α denotes the angle between the motion and the plane of the sky. Given a gas temperature of 5 keV ( § 3.4) and an equivalent sound speed of 1100 km s −1 around the sub component, this velocity corresponds to a Mach number M > 1.4. Therefore, at least around the sub component, the gas kinematic pressure (or energy) can be (1.4) 2 ∼ 2 times larger than the thermal one. In this environment, the gas departs from hydrostatic equilibrium. Then, does this motion affect the estimation of the mass of the primary cluster ? As argued by Markevitch & Vikhlinin (1997), it depends on the physical separation between the two components. In the case of A 2256, the two are not likely too closely connected to disturb the hydrostatic condition around the primary, as estimated below. However, to weight the total mass within the larger volume including the sub component we should consider not only the mass of the sub itself but also the dynamical pressure associated with the relative motion. This kind of a departure from hydrostatic equilibrium was predicted generally at cluster outer regions in numerical simulations (e.g. Evrard et al. 1996).
We will compare our measurement with other studies on the ICM dynamics. Markevitch et al. (2002) discovered a bow shock in the Chandra image of 1E 0657-56 (the bullet cluster) and estimated its M to be 3.0 ± 0.4 (4700 km s −1 ) based on the observed density jump under the shock front condition. Using a similar analysis, Markevitch et al. (2005) derived M of 2.1 ± 0.4 (2300 km s −1 ) in A 520. Based on a more specific configuration in the "cold front" in A 3367, Vikhlinin et al. (2001) estimated its M to be 1.0 ± 0.2 (1400 km s −1 ). In these cases, velocity directions are assumed to be in the plane of the sky. See Markevitch & Vikhlinin (2007) for a detailed discussion and other examples. These measurements are unique but require certain configurations of the collision to apply to observations. In contrast, X-ray Doppler shift measurements as given in the present paper are more direct and commonly applicable to merging systems. This method is sensitive to the motion parallel to the line of sight. These two measurements are complementary and could provide a direct measurement of three dimensional motion if applied to a merging system simultaneously.
We measured the gas velocity parallel to the line of sight (v r ). How about the velocity in the plane of the sky (v sky ) and the true velocity v? The ROSAT X-ray image of A 2256 shows a steeper brightness gradient between the main and second components (Briel et al. 1991). Furthermore, from the Chandra spectroscopic data Sun et al. (2002) found a hint of a temperature jump across the two component. They argued a similarity of this feature to the "cold front" discovered in other clusters. The temperature and hence pressure jump in A 2256 (from about 4.5 keV to 8.5 keV) are similar to those found in A 3367, in which the jump indicates a gas motion with v sky of about 1400 km s −1 . Therefore we expect a similar v sky in A 2256, which is comparable to v r . Further assuming that the second component directs to the main cluster center, v is estimated as v 2 r + v 2 sky ∼ √ 2v r ∼ 2000 km s −1 . Instead of assuming v sky , by considering the mean sin α factor, 2/π, v becomes 2400 km s −1 on average. Based on a simple assumption that the two system started to collide from rest, we can estimate the velocity as v ∼ ( 2GM R ) 1/2 , where G, M and R are the gravitational constant, the total mass of the system, and the separation, respectively. The total mass of A 2256 of 8 × 10 14 M ⊙ (Markevitch & Vikhlinin 1997) and an assumed final separation R of 1 Mpc give v ∼ 2800 km s −1 , which is comparable but larger than the estimated current velocity. Putting v = 2000 − 2400 km s −1 to the relation, we obtain a current separation R of 1.4 − 2 Mpc. The time to the final collision can be estimated to be about 0.2-0.4 Gyr [(R − 1) Mpc divided by v]. Here we assume that the system is going to the final collision. This assumption is consistent with a lack of evidence for strong disturbances in the X-ray structure, as argued previously for A 2256 in general.
Future Prospect
Our observations provided the first detection of gas bulk motion in a cluster. To understand in general the cluster formation which is dominated by non-linear processes, systematic measurements in a sample of clusters are required. For example, some clusters such as 1E 0657-558 (Clowe et al. 2006) at violent merging stages show segregation of the gas from the galaxy and possibility from dark matter components. In these systems, we expect different situations in the gas and galaxy dynamics compared with that found in A 2256. Given capabilities of current X-ray instruments such as the Suzaku XIS, A 2256 is presumably an unique target with X-ray flux high and the velocity separation clear enough to resolve the structure. Accordingly, the systematic study requires new instruments with higher spectral resolutions and enough sensitivities. In fact, this kind of assessment is one of the primary goals for planned X-ray instruments such as SXS (Mitsuda et al. 2010) onboard ASTRO-H (Takahashi et al. 2010). Using the SXS with an energy resolution better than 7 eV we could measure gas bulk motions in a fair number of X-ray bright clusters. Furthermore, we may find line broadening originated from the gas turbulence, as a result of mergers, related shocks, or some activities in the massive black hole at cluster centers. In addition, the SXS potentially will constrain for the first time the line broadening from the thermal motion of ions (and hence the ion temperature). The present result proves that A 2256 is one of the prime targets for ASTRO-H . The expected spectra of the two components in A 2256 with the SXS are shown in Fig.11 of Takahashi et al. (2010). | 2011-04-14T05:30:22.000Z | 2011-04-14T00:00:00.000 | {
"year": 2011,
"sha1": "96895013877493428e74f0873393a7d24f0570fb",
"oa_license": null,
"oa_url": "https://academic.oup.com/pasj/article-pdf/63/sp3/S1009/17444817/pasj63-S1009.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "96895013877493428e74f0873393a7d24f0570fb",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
233676155 | pes2o/s2orc | v3-fos-license | Estimation of the Distribution of Duration of Breastfeeding from Cross-Sectional Data: Some Methodological Issue
Background: Duration of breastfeeding is an important health indicator of mother and child. There are various indirect epidemiological methods available to estimate the duration of breastfeeding from cross sectional data.
Objective: To estimate the distribution of duration of breastfeeding at national level cross sectional data and compare various available technique. The impact of the sampling frame (ascertain of the individual understudy) is also evaluated.
Method: National Family Health Survey (NFHS-IV) data is used. Duration of breastfeeding of only those children who were born before 60 months from survey date were included in the study. The technique of Current Status Data, Life Table Analysis, and Kaplan Meier (KM) estimator is applied to assess the distribution of duration of breastfeeding.
Result: The mean estimate is 32.84, 33.14 and 33.64 months by Kaplan Maier Estimator, Current Status Data and Life Table Analysis respectively. The Current Status and Life Table method are better than Kaplan Meier Estimator as it is doesn’t based on recall data and heaping present in the data.
Conclusion: One must be very cautions while estimating the various epidemiological parameters from cross section data set. The assumptions of the methodology as per data available should be evaluate. If such data is not available, the available methodology may be modified. Regression analysis based on Current Status data technique may be used to assess the impact of various clinical and epidemiological factors (such as nutrition of mother, health status of mother etc.) on duration of breastfeeding
Introduction
Distribution of duration of breastfeeding is an important indicator of health of mother and child. Breastfeeding is one of the most effective ways to ensure child health and survival (1,2) in first year breastfeeding, and up to one-third of the second year of life of the child. Breastfed children have better intelligence tests, are less overweight or obesity and less prone to diabetes later in life. The mother who breastfeed also have a reduced risk of breast and ovarian cancers. Mothers worldwide are recommended to exclusively breastfeed infants for the child's first six months to achieve optimal growth, development, and health of child. Generally, for estimating the distribution of duration of breastfeeding, a cohort of birth say of size N is followed, till all have completed breastfeeding. The data thus obtain may provide us the distribution of duration of breastfeeding of the cohort. Practically such data is unavailable and also difficult to obtain. On the other hand, cross-sectional data on the duration of breastfeeding is available in different national level health survey. Various literature explained different techniques for estimation of duration of breastfeeding (3)(4)(5). Generally, Life Table, Kaplan Meier, and Current Status Estimator are very commonly used methods. In cross-section data, the duration of breastfeeding is not appropriately reported due to recall lapse of women. Hence confirmed age heaping at 6,12,36 months etc are common (6)(7)(8)(9)(10) . Generally studies rely on reported duration/recollection of duration, in first approach, duration of breastfeeding is considered to refer to the age of the children at the time of complete termination, regardless of the time when consumption of other foods began (6; 7). Other approach sees the use of current status or interval-censored data, whereby just the current breastfeeding status along with the age of the child at the time of survey interview is considered in building a picture of termination age (8; 9). The major advantages of retrospectively reported breastfeeding data are the ease of data collection, researchers often opting for crosssectional approaches in order to save time and cost, as well as the ability to capture relatively larger sample sizes (14). Drawback of recalled data is associated with age heaping, with participants tending to round up or down (15) the exact age of the child when the breastfeeding termination took place. This limits the ability to draw valid inferences (16). With current status data, the likelihood of age heaping is comparatively lower, except for the case of heaping in the reported age of children. This remains a problem due to misreporting of age. In general, more errors occur the greater the time-lag between an event and its recall. Some previous studies, concerning the distribution of breastfeeding termination times, have observed that the current-status measures lead to unbiased estimates of the survival function for a sample of births that occur during a fixed period (14). While, the present approach promises more reliable measures, several studies on breastfeeding have been conducted using this approach (13,(16)(17)(18) due to computational complexity (8). Apart from the types of estimation techniques, one another important factor affecting the distribution of duration of breastfeeding is the sampling frame. Sampling frame is defined as the method by which an individual is ascertained or identified as a member sample population and the time's reference for the duration variable to be measured (14,15). The distribution of duration of breastfeeding also depends much upon the sampling frame as shown in table 6. As the sampling frame changes the distribution also change. For estimating the duration of breastfeeding there are two sampling frames and consequently, the distribution will also change (21). For example, the duration of breastfeeding would be different if considering child as unit or mother as a unit. On the basis of these two sampling frame the distribution will be change. Kumar A et al. Vol 6 No 4 (2020) Estimation of the Distribution of Duration of Breastfeeding from Cross-Sectional Data Further, the sampling frame is also decided on the feasibility and considering the nonsampling error. In all these cases, the distribution is likely will vary in such a situation. Hence, certainly, a careful evaluation of the sampling frame is needed for analyzing the observed data and drawing inferences about population characteristics The three techniques (Current Status, Kaplan Meier and Life Table Techniques) for estimating the distribution of duration of breastfeeding from cross-sectional data is compared. The effect of the sampling frame is also evaluated. Consequently, the appropriate technique and feasible sampling frame in crosssectional data is examined. Methodology
Data
Birth record data of National Family Health Survey (NFHS-IV), collected during year 2015-16 is used. It is cross sectional data. From this dataset, variables such as child is alive, index of birth history, date of the interview, date of birth of the child, currently breastfeeding ('yes', 'no'), months of breastfeeding are used. Only those children included in study, whose age was in-between 0-60 months at the time of survey. Among this group of children or duration of breastfeeding many children have completed the breastfeeding and many are still continuing breastfeeding. A total 176335 number of children found whose age were less than 60 months. Figure 1 shows the extraction of dataset from Birth record data of National Family Health Survey (NFHS-IV) data.
Following three techniques explained below were used to estimate the duration of breastfeeding on available data (NFHS-IV).
Current status technique
In observation, ( ) is the duration of breastfeeding which is restricted to knowledge of whether or not ( ) exceeds the date of survey. This structure is known as current status data and sometimes referred to as referred case I interval censored data. In cross sectional study, the age of the child along with his current status of breastfeeding is noted. For example, there will n0 of number of child of 0-1 month, n1 number of child of 1-2 month, nt Kumar A et al. Vol 6 No 4 (2020) Estimation of the Distribution of Duration of Breastfeeding from Cross-Sectional Data is the number of child of age (t,t+1) months (t=1,2,….T) and among that nt , yt number of children are still breastfeeding. So, proportion of child still breastfeeding denote the proportion of children has breastfeeding more than t. The value of can be obtained for different (t) from the data and the duration of breastfeeding can be obtained by spline smoothing, the plot between t and .
is obtained by the equation (1). is smoothed by spline, and after smoothing the cumulative distribution and distribution is obtained (see table 2).
Kaplan Meier estimator
The retrospectively reported durations of breastfeeding for weaned children, along with censored durations of breastfeeding for children still being breast-fed at the time of the survey. Analysis of this type of data is done by Kaplan Maier estimator. Let > 0 be the duration of breastfeeding if a child stops breastfeeding then it is events. of interest takes place. As indicated above, the goal is to estimate the survival function underlying.
St= Prob (t > T),where t = 0,1,...is the time The estimator of the survival function (the probability that life is longer than ) is given by: With a duration of breastfeeding when at least one event happened, the number of events (e.g., number of the child who weaned) that happened at time and the individuals known to have survived (have not yet had an event or been censored) up to time as shown in table 3. For applying the Kaplan Meier estimator, in National Family Health Survey (NFHS-IV) the months of duration of breastfeeding is given and the event has been occurred or not is given by "whether baby is current breastfeeding or not" is given by these two variable the Kaplan Meier estimator is estimated.
Life Table Technique
The simplest analysis of the data of duration of breastfeeding irrespective of how they are ascertained in subject to serious limitation when the observation is truncated at some point before the age or due to survey date. To minimize the bias resulting in incomplete observation of duration of breastfeeding. A brief description of the method and notation are given in below table1: Table 1 Table Technique Notation Description The number of an eligible child in the study Completed number of months since birth. Number of child having breastfeeding more than ℎ month:
: Summary of Notations and their Descriptions used in Life
= 0,1,2, … = ( − ) + In fact, represents the expected duration of breastfeeding after months. It is pertinent to mention that is almost the same λ i.e. λ is assumed to be constant over time while may vary for different values of as shown in table 4. For the life table analysis, the duration of breastfeeding and the specific question is asked whether a child was "still breastfeeding" at the time of the survey, is used for the interval of one month for the life table analysis and in the last, take interval of three months because in interval of one month the number of child who still breastfeeding is very small due to which estimated survival is not estimated. A spline(13,16,23) ( ) is a smooth piecewise defined function whose "pieces" are lowdegree polynomials defined on separate intervals of the range of . The pieces are joined together in a suitably smooth fashion at joint points called knots. It is represented by a limited number of parameters and are smoothens the function that are extremely flexible in shape. It bridges the gap between parametric and nonparametric methods in statistics. A large number literature presents an algorithm for calculating splines of various degrees. Cubic splines (splines of degree 3)(24,25) are often used in practice, since they are reasonably flexible in shape and reliable algorithms are available for their calculation. A simple method for calculating cubic splines, which involves rescaling the time axis to the unit interval, is given by (18) and is used in this paper. "The parameters, which represent the cumulative survival function up to various time points, were considered as a function of the age of the child at the time of the survey and were represented by a cubic spline". Three knots at .25, .50, and .75 with 3 degrees were found to be sufficient for the model fitting.
Result and Discussion
The figure (2) shows the overall pattern of duration of breastfeeding by the three methods. The survival curve represents the probability of mothers who continued to breastfeed at any given time. From figure (2) it is observed that the survival curve of Kaplan Meier Estimator is fluctuated with time and these are mainly occurring in multiple of six months although the survival curve by other two methods does not fluctuate timely. Kumar A et al. Vol 6 No 4 (2020) Estimation of the Distribution of Duration of Breastfeeding from Cross-Sectional Data
As
is obtained by all the three methods as shown in table (2,3,4) after that it is smoothen by spline for 60 months of duration of breastfeeding. Then the cumulative distribution ( ) and distribution function ( )is obtained for all the method. For 60 months of duration of breastfeeding mean estimate by Kaplan Maier Estimator is 32.84, by Current Status Data is 33.14 and by Life Table Analysis is 33.64 then for mean duration of breastfeeding for 60 months because after 60 months there are only 13.5(%) of proportion of child still doing breastfeeding, so we assume that, is zero after the 60 months of duration of breastfeeding. Quartiles are obtained by the graph. Similarly, for 36 months of duration of breastfeeding is obtained. As shown in table 5 for 60 months the mean duration of breastfeeding is approximately equal but the median and quartiles are different by all the methods. This may occurred due to possible reasons, Kaplan Maier Estimator the information about duration of breastfeeding is depend on recall bases, the survival time is estimated at the point at which event occur, and censored event are uniformly distributed over the time however in current status data the information about duration of breastfeeding is find out by subtracting 60 months from date of interview (in CMC) and the survival time is estimated at the point at which event occur, but the roll of censored event is in interval of duration of breastfeeding is not uniformly distributed over time although in life table the information about duration of breastfeeding is also taken on recall bases but the roll of censored event is in interval of duration of breastfeeding. The way of taking sampling frame is also very important aspect of determining the distribution of estimation of duration of Kumar A et al. Vol 6 No 4 (2020) Estimation of the Distribution of Duration of Breastfeeding from Cross-Sectional Data breastfeeding. If we are taking child information data from National Family Health Survey (NFHS-IV) then we collect the information all the child which was taken birth in last the five years, then determining the estimation of distribution of duration of breastfeeding. But if we take mother information data from National Family Health Survey (NFHS-IV) then we collect the information of all the women who are currently breastfeeding in the last five years, then determining the estimation of the distribution of duration of breastfeeding.
Because an inverse relationship exists between birth rates and the time between the ways you are taking the data its affects the distribution. The distribution in the form of 1-( ) (Survival function) is obtained for the duration of breastfeeding by current status, life table, and Kaplan Meier techniques. Further, the value of mean and quartiles is also obtained. The distribution is quite close in each other in life table and current status data, whereas in Kaplan Meier estimator the distribution is different (obtain by Kolmogorov smirnov test). It was found that difference in current status and life table K-S test is 0.016, for current status and Kaplan Meier estimator K-S test is 0.027 and for life table method and Kaplan Meier estimator K-S test is 0.025. This may be because there is age heaping due to recall lapse. Life table techniques method adjusts the effect of recall bias up to some extent. But the distribution obtained based on current status review very less effort to collect the data. The chance of recall bias is almost zero. Hence the distribution obtained from this technique is most feasible and appropriate in the contrast of cross sectional data. One must take care of the sampling frame while estimating such distribution as shown in table 6. Ideally, it would be appropriate to involve only those children whose age is at least 60 months. But such data will suffer from recall bias. So current status data of children whose age is 0 to 60 months should be used to evaluate duration of breastfeeding. Conclusion All the above mentioned three techniques are of a non-parametric approach. Although a parametric approach may also be used to evaluate the distribution under some suitable assumptions. The current status and life table method are better than Kaplan Meier estimator as it is doesn't based on recall data and heaping present in the data. One must be very cautions while estimating the various epidemiological parameters from available data set. The assumptions of the methodology as per data available should be evaluate. If such data is not available, the available methodology may be modified. Regression analysis based on current status data technique should be used to assume the impact of various clinical and epidemiological factors (such as nutrition of mother, health status of mother etc.) on duration of breastfeeding.
Conflict of Interest
The author declares that there is not conflict of interest. | 2021-04-24T17:54:55.813Z | 2021-03-09T00:00:00.000 | {
"year": 2021,
"sha1": "40258a5bfc8266d81d33d6d00df7ca82d1aaef49",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.18502/jbe.v6i4.5686",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "40258a5bfc8266d81d33d6d00df7ca82d1aaef49",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
270115211 | pes2o/s2orc | v3-fos-license | Using 5TE Sensors for Monitoring Moisture Conditions in Green Parks
The ground surface and subsurface of green parks in arid and desert areas may be subjected to desiccation as a result of weather and hot temperatures. It is not wise to wait until plants are turning pale and yellow before watering is resumed. Given the scarcity of water in typical desert zones, we recommend full control of irrigation water. This study presents a method of recycling irrigation water using 5TE sensors, employing time-domain reflectometry (TDR) technology. A trial test section was constructed along the coast of the eastern province of Saudi Arabia. Water recycling involves using clay–sand liners placed below the top agricultural soils to intercept excess water and direct it towards a collection tank, and then it is pumped out to a major water supply tank. The main properties of soils and clay–sand liners normally taken into account include moisture content, density, and hydraulic conductivity. An assessment of geotechnical properties of clay–sand mixtures containing 20% clay content was conducted. The profiles of moisture and temperature changes were monitored using 5TE sensors and data loggers. The 5TE sensors provided continuous measurements at varying temperatures and watering cycles. Twenty-nine watering cycles were conducted over a six-month period. An additional section was considered with a liner consisting of the same clay but enhanced with bentonite as one-third of the clay content. The volumetric water content was found to vary from 0.150 to 0.565 following changing weather and direct watering cycles. The results indicated that the use of a TDR instrumentation is a cost-effective and time-saving technique to construct a system for saving irrigation water.
Introduction
The measurement of soil moisture content is of great importance to determine many geotechnical parameters and the water balance in environmental and agricultural projects.So many approaches can be followed to obtain the moisture content.Oven techniques, hot plates, microwaves, and chemical methods were all among the methods used in practice.These methods require time and attendance to perform.The need for immediate and reliable results suggested the use of sensors that can obtain the result in a very short time and can be digitally traced and reported.The most common sensors include a contact-based technique known as time-domain reflectometry (TDR) [1,2].These sensors were found to be highly accurate and were tested by many investigators for various environmental exposures and soil types [3][4][5].
This study is designed to investigate the use of 5TE moisture sensors in the management of irrigation, wetting, and drying assessments of green parks.This type of sensor measures the volumetric moisture content, temperature, and electrical conductivity.The volumetric moisture content is obtained using the dielectric constant of the media based on capacitance/frequency domain technology.It normally measures the apparent dielectric permittivity to an accuracy of 1 εa for the soil range , resulting in +/− 3% using the Topp equation.
The dielectric permittivity is computed by measuring the delay in time between the incident and reflected electromagnetic pulses [6,7].Topp et al. (1980) [5] presented an empirical relationship between the dielectric constant (Ka) and the volumetric moisture content (θ) for soils of variable mineralogy.where: θ = volumetric water content Ka = dielectric constant.
Wang et al. [8] investigated the use of the Topp equation for two soil materials and presented an assessment on the accuracy based on the laboratory-measured gravimetric moisture content.
Numerous studies have carried out field calibration, in which the sensor output is compared to the volumetric water content of the field soil, which is determined using the gravimetric approach [9].However, some researchers [10,11] suggested field calibration for greater accuracy in a laboratory soil calibration conducted by the manufacturer for specified types of soil can be satisfactory.The manuals provided by Meter (2018) provide detailed methods of application and use [12,13].
There are three phases in unsaturated soils: solid, air, and water.According to Shmulik [14], the aqueous solution is the only conducting phase for apparent electrical conductivity.This opened up the possibility of using it for the assessment of volumetric water contents (θ).Nonetheless, a wide range of variables, such as temperature, cation composition, particle shape and orientation, soil density, and porosity, might affect the measurement of electric conductivity.A rise in temperature has the potential to reduce the electrical resistivity of soil.
Studies that examined the effect of temperature include the works of Seyfried et al. and Banon et al. [15,16].The impact of temperature changes on the moisture content can be clearly observed when the temperature is recorded at 40 • C or higher [17], and this effect is also noticed in the current study extending over six months, where the profile of moisture is different during the hot period from 32 • C to 40 • C. The permittivity measurement is also affected by the salinity of soil, but this effect is practically negligible when the salinity is <1 dS•m −1 .
Understanding the temporal variations in soil temperature is equally important for plants or the cover of vegetation.The ambient temperature and the surrounding environment are significant factors to many plant processes [18].The following is how Bergman et al. [19] applied Newton's equation of cooling, which connects the heat transfer coefficient λ to the soil temperature Tsoil: The clay of Al-Qatif is extensively studied by researchers and was found to have high plasticity and be suitable for constructing liners for environmental and waste-control purposes [20].In this study, the clay-sand liner is intended for use as a water barrier to intercept excess water penetrating towards deep sand deposits.This local clay is suggested as an alternative to bentonite material.The governing factor for using typical clay is its hydraulic conductivity.It is sometimes likely to fall short of the specified level of hydraulic conductivity but can be enhanced with a small amount of bentonite.
The objective of this paper is to study the use of 5TE sensors linked to a proposed drainage system of liners.This is intended to save water in green parks and monitor moisture variations.Installing this sensor type is practical and can provide continuous data and information that enable quick decisions needed for watering grass and the collection of surplus water using automatic or manually controlled pumps.This research paper is structured to introduce the concept of utilizing the 5TE sensors and data loggers in volumetric water content observations and also to study the impact of temperature, considering wetting and drying cycles.The methodology is given in Section 3. Results and discussions are presented in Section 4. The conclusion of the work and suggestions for future studies are given in Section 5.
Related Studies
The introduction of sensor technology and data loggers encouraged automation and use of sensors for water management.Most of the recent works are focused on agricultural applications [21,22].The current study is mainly targeting the influence of environmental and geoenvironmental influence on the subsurface soils.The parameters investigated can also help in water management and plant demand.Balatsouras et al. [23] claimed that frameworks similar to WiChord+ are promising systems due to their compatibility with modern devices and accessories, providing an echosystem for smart agricultural approaches.A significant part of water is lost due to evapotranspiration (ET).Payero [24] suggested a simple weighing method to determine the evapotranspiration, which includes both plant transpiration and water evaporation from the soil surface.It is a vital part of the water cycle and affects the ecosystem's overall water balance.
Cariou et al. [25] highlighted the importance of buried sensor nodes and discussed its potential agricultural applications.Bertocco et al. [26] introduced a comparison between three approaches to assess the volumetric water content using ML algorithms.They concluded that an augmented VWC sensing method relying on a received signal strength indicator (RSSI) and soil-moisture sensor reading gave better results.The factors influencing the measurements of the volumetric moisture content are numerous, and there is a lot to investigate in this regard.This study observed the measurements of the volumetric water content at different temperature and watering patterns.
The field work related to this research considered sections of different types of claysand liners, and the sensors cover only the sections of concern.However, for vast and extended green areas, multiple sensors can be provided.A wireless sensor network (WSN) is an approach to collect data for extended and remote areas.Zhang et al. [27] claimed that in situ soil moisture observation can be reported in a radius exceeding 20 km.This is mainly dependent on careful selection of the required range.For field irrigation systems, this may not be essential.Few stations within the area may be sufficient to serve the purpose.
The work presented in this study is a further development that can serve irrigation systems, environmental assessment, and water management.This will certainly advance the current state-of-the-art of using underground sensors.The primary component of the clay liners is sand.The sand source for this study is a fine-to medium-grained material that is abundant in the nearby desert regions.The unified soil classification system, or ASTM D 2487 [28], is used to classify the sand as SP "poorly graded sand".The material was found to be outside of the uniformly graded sand range after looking at the coefficients of curvature (1.078) and uniformity (1.737).The sand used has a specific gravity of 2.65 to 2.66.The sand's particle size distribution is seen in Figure 1.In this study, the sand was used to compose layers for use in liners and layers overlying the liners for green parks.It is placed right under the agricultural soil and on top of the proposed liners to help in the drainage of water towards a collection point.
Natural Clay
The clay used in this study was obtained from Al-Qatif town, located in the easter province of Saudi Arabia.The specific gravity, soil classification, and index parameters o Al-Qatif clay are shown in Table 1.Al-Qatif clay, which is categorized as CH in the un fied soil classification system, or ASTM D 2487 [28], is a highly plastic soil that is we known for its strong expansion and shrinkage capabilities [29,30].The composition of Al-Qatif clay is shown in Table 2.The ASTM D 698 [31] standar was followed in examining the moisture-density relationship of the clay, which wa found in the range of 11.5 to 12 kN/km 3 , but when testing the clay-sand mixtures wit 20% clay by weight, the optimum moisture content level was 15 to 17%, and the max mum dry density was 17.5 to 18.00 kN/m 3 .Sand-clay mixtures were prepared using a clay content of 20%.The compactio properties of these mixtures were measured.
Natural Clay
The clay used in this study was obtained from Al-Qatif town, located in the eastern province of Saudi Arabia.The specific gravity, soil classification, and index parameters of Al-Qatif clay are shown in Table 1.Al-Qatif clay, which is categorized as CH in the unified soil classification system, or ASTM D 2487 [28], is a highly plastic soil that is well known for its strong expansion and shrinkage capabilities [29,30].The composition of Al-Qatif clay is shown in Table 2.The ASTM D 698 [31] standard was followed in examining the moisture-density relationship of the clay, which was found in the range of 11.5 to 12 kN/km 3 , but when testing the clay-sand mixtures with 20% clay by weight, the optimum moisture content level was 15 to 17%, and the maximum dry density was 17.5 to 18.00 kN/m 3 .CaO, % 0.9 0.9 Sand-clay mixtures were prepared using a clay content of 20%.The compaction properties of these mixtures were measured.
Bentonite Clay
For this study, the commercial bentonite HY Oil Companies Material Association (OCMA) was chosen.The liquid limit was reported as 480, and the plastic limit was measured as 50, resulting in a plasticity index of 430.The specific gravity was found in the range of 2.6 to 2.7.The chemical components of the HY OCMA bentonite are shown in Table 2.
Field Sections: Installation and Procedure
The main idea of this work was to develop a system that can save irrigation water and monitor environmental conditions during the lifetime of a green vegetation cover.All soils were prepared at the maximum dry density and the optimum moisture content, as relevant to the required clay content and the likely set-up.The main section addressed here consisted of 20% Al-Qatif clay liner.This was chosen based on previous studies conducted by the author.It was found that the more clay content, the lower the hydraulic conductivity of the liner.The natural clay of Al-Qatif cannot perform as the commercial bentonite with regard to the water-retention capacity.This is basically attributed to the lower plasticity.One other section was constructed in a similar way, but the natural Al-Qatif clay was enhanced with one-third bentonite (33.3% of the clay).The excavations for each section were 2 m long, 1 m wide, and 0.6 to 0.7 m deep.The material was placed and compacted in layers using small compacting machines to achieve the required thickness and the densities, as relevant to the standard proctor test.
A weather station was installed to monitor the temperature, rainfall, wind, and other parameters.Figure 2 presents the weather station constructed on site.Figure 3 shows a typical field section.
Bentonite Clay
For this study, the commercial bentonite HY Oil Companies Material Association (OCMA) was chosen.The liquid limit was reported as 480, and the plastic limit wa measured as 50, resulting in a plasticity index of 430.The specific gravity was found i the range of 2.6 to 2.7.The chemical components of the HY OCMA bentonite are show in Table 2.
Field Sections: Installation and Procedure
The main idea of this work was to develop a system that can save irrigation wate and monitor environmental conditions during the lifetime of a green vegetation cover All soils were prepared at the maximum dry density and the optimum moisture conten as relevant to the required clay content and the likely set-up.The main section addressed here consisted of 20% Al-Qatif clay liner.This was chosen based on previous studie conducted by the author.It was found that the more clay content, the lower the hydrauli conductivity of the liner.The natural clay of Al-Qatif cannot perform as the commercia bentonite with regard to the water-retention capacity.This is basically attributed to th lower plasticity.One other section was constructed in a similar way, but the natura Al-Qatif clay was enhanced with one-third bentonite (33.3% of the clay).The excavation for each section were 2 m long, 1 m wide, and 0.6 to 0.7 m deep.The material was placed and compacted in layers using small compacting machines to achieve the required thickness and the densities, as relevant to the standard proctor test.
A weather station was installed to monitor the temperature, rainfall, wind, and othe parameters.Figure 2 presents the weather station constructed on site.Figure 3 shows typical field section.Figure 4 shows a sketch of these layers, which are composed of top agricultural soil, mainly silty sand, followed by a sand layer and then the clay-sand liner.The thickness of the liner is 20 cm, and it is also underlain by 10 cm of free draining sand.
Variations in volumetric water content, temperature, and electrical conductivity were recorded using 5TE sensors (Figure 1) connected to an Em50 data logger.In addition to the sensors placed within the clay-sand liner marked as A and B, one more sensor was placed to record the ambient temperature and moisture.Table 3 presents the 5TE sensor specifications.Figure 4 shows a sketch of these layers, which are composed of top agricultural soil, mainly silty sand, followed by a sand layer and then the clay-sand liner.The thickness of the liner is 20 cm, and it is also underlain by 10 cm of free draining sand.Variations in volumetric water content, temperature, and electrical conductivity were recorded using 5TE sensors (Figure 1) connected to an Em50 data logger.In addition to the sensors placed within the clay-sand liner marked as A and B, one more sensor was placed to record the ambient temperature and moisture.Table 3 presents the 5TE sensor specifications.The sensors were set to take records at one-hour intervals.The 5TE sensors were attached to the mid-depth of the clay-sand layer.One more sensor was installed to record the ambient temperature.Figure 5 presents photographs from the site and methods of extracting clay in the Al-Qatif region.Figure 6 presents the maximum dry density versus moisture content for the clay-sand mixture with 20% clay The sensors were set to take records at one-hour intervals.The 5TE sensors were attached to the mid-depth of the clay-sand layer.One more sensor was installed to record the ambient temperature.Figure 5 presents photographs from the site and methods of extracting clay in the Al-Qatif region.Figure 6
Results and Discussion
A system of irrigation control proposed for this research is aimed at collecting excess water resulting from excessive irrigation.This is achieved by intercepting seeping water
Results and Discussion
A system of irrigation control proposed for this research is aimed at collecting excess water resulting from excessive irrigation.This is achieved by intercepting seeping water with the sand-clay liner.The excess water is directed towards an underground water tank.An automatic pump supplied to the underground water tank lifts water to a large on-ground water tank, acting as a main supply to the field sections.In practice, this tank can be eliminated and replaced by slopes of the ground made so that excess water is collected in a large sump or underground tank at one corner or edge of the park.From there, water can be recirculated for irrigation again.After a number of cycles, the water needs to be checked for salinity and suitability for the types of plants or grass used in the park.
The knowledge of the moisture content profile over time is very useful to assess the condition and make the decision to intervene if watering is required.The program of the study included performing 29 direct watering tests applied to a section, with a 20 cm thick clay-sand liner made up of Al-Qatif clay and sand (the clay is 20% by weight of the mixture).The second section considered for this study was for a bentonite-enhanced natural Al-Qatif clay.In this section, one-third of the clay was replaced by commercial bentonite.Similar monitoring was conducted.The hydraulic conductivity of a range of clay-sand mixtures was presented by Dafalla et al. [32].It can be seen that the hydraulic conductivity is reduced by increasing the clay content.The consideration of bentonite is to achieve a better hydraulic conductivity suitable to retain excess water.
Table 4 provides a summary of 29 direct watering tests conducted in the field with specified dates spanning a six-month period.The flowmeter readings for water entering the distribution pipes show the amount of water supplied each time.Records associated with each watering test are stored in the data loggers.
Figure 7 presents the ambient temperature profile for the site for the whole period of six months, from March to November.The range for minimum and maximum daily temperatures was reported in the range of 16 with the sand-clay liner.The excess water is directed towards an underground water tank.An automatic pump supplied to the underground water tank lifts water to a large on-ground water tank, acting as a main supply to the field sections.In practice, this tank can be eliminated and replaced by slopes of the ground made so that excess water is collected in a large sump or underground tank at one corner or edge of the park.From there, water can be recirculated for irrigation again.After a number of cycles, the water needs to be checked for salinity and suitability for the types of plants or grass used in the park.The knowledge of the moisture content profile over time is very useful to assess the condition and make the decision to intervene if watering is required.The program of the study included performing 29 direct watering tests applied to a section, with a 20 cm thick clay-sand liner made up of Al-Qatif clay and sand (the clay is 20% by weight of the mixture).The second section considered for this study was for a bentonite-enhanced natural Al-Qatif clay.In this section, one-third of the clay was replaced by commercial bentonite.Similar monitoring was conducted.The hydraulic conductivity of a range of clay-sand mixtures was presented by Dafalla et al. [32].It can be seen that the hydraulic conductivity is reduced by increasing the clay content.The consideration of bentonite is to achieve a better hydraulic conductivity suitable to retain excess water.
Table 4 provides a summary of 29 direct watering tests conducted in the field with specified dates spanning a six-month period.
The flowmeter readings for water entering the distribution pipes show the amount of water supplied each time.Records associated with each watering test are stored in the data loggers.
Figure 7 presents the ambient temperature profile for the site for the whole period of six months, from March to November.The range for minimum and maximum daily temperatures was reported in the range of 16 °C to 40 °C.Rainfall was traces, and no records above zero were reported.It is of interest to see what the temperature profile looks like for the ambient temperature and the temperature within the mid-section of the liner over a 24 h period.A comparison was made for two weather conditions prevailing in March and October (Figures 8 and 9).It is of interest to see what the temperature profile looks like for the ambient temperature and the temperature within the mid-section of the liner over a 24 h period.A comparison was made for two weather conditions prevailing in March and October (Figures 8 and 9).When the ambient temperature is building up in the morning, a difference of 4 °C can be observed in the mid-liner level.When a maximum temperature is reached and the weather starts to cool, the temperature in the liner continues to increase, but at a lower rate.After three hours from the maximum hot point, the temperature in the clay-sand liner became similar to the ambient temperature.In further hours, the subsurface temperature is much hotter than the ambient temperature.This is due to the fact that clay absorbs heat slowly and releases heat slowly.This property can be utilized to determine the most convenient time to use the park during hot weather conditions.Both March and October introduced the same temperature trends.As the temperature remained within 40 °C, the impact on the permittivity was small, and the calibration of the moisture measurements was not significantly affected.Knowledge of the temperature fluctuation of the near-surface soil will help with the convenience of the facility.When the ambient temperature is building up in the morning, a difference of 4 °C can be observed in the mid-liner level.When a maximum temperature is reached and the weather starts to cool, the temperature in the liner continues to increase, but at a lower rate.After three hours from the maximum hot point, the temperature in the clay-sand liner became similar to the ambient temperature.In further hours, the subsurface temperature is much hotter than the ambient temperature.This is due to the fact that clay absorbs heat slowly and releases heat slowly.This property can be utilized to determine the most convenient time to use the park during hot weather conditions.Both March and October introduced the same temperature trends.As the temperature remained within 40 °C, the impact on the permittivity was small, and the calibration of the moisture measurements was not significantly affected.Knowledge of the temperature fluctuation of the near-surface soil will help with the convenience of the facility.When the ambient temperature is building up in the morning, a difference of 4 • C can be observed in the mid-liner level.When a maximum temperature is reached and the weather starts to cool, the temperature in the liner continues to increase, but at a lower rate.After three hours from the maximum hot point, the temperature in the clay-sand liner became similar to the ambient temperature.In further hours, the subsurface temperature is much hotter than the ambient temperature.This is due to the fact that clay absorbs heat slowly and releases heat slowly.This property can be utilized to determine the most convenient time to use the park during hot weather conditions.Both March and October introduced the same temperature trends.As the temperature remained within 40 • C, the impact on the permittivity was small, and the calibration of the moisture measurements was not significantly affected.Knowledge of the temperature fluctuation of the near-surface soil will help with the convenience of the facility.
The response to direct watering was studied for each test.The amount of water supplied through pipes and measured in a flowmeter is given as an initial inlet reading and a final inlet reading, and the water amount in liters is computed.The date of watering and the time during which water is supplied to the grass are also recorded.The impact of watering as read in the sensor is measured as a starting vmc (volumetric water content) and the maximum vmc.The difference in vmc is computed.The time needed to reach the maximum vmc is also examined.
Table 4 summarizes 29 direct watering tests, vmc, and quantities of water supplied.Figure 10 presents the overall water supplied to the Al-Qatif clay liner section over a period of six months.Figure 11 presents the starting and maximum volumetric moisture content and the general trend of moisture variation over six months.The section was let to dry up on two occasions, and the volumetric water content was as low as 0.150.It is worth mentioning that when watering an initially dry clay liner, the gain of moisture is quick, and the maximum vmc is reached in a short time.This is attributed to the high suction created within the clay-sand mixture.The average peak of the vmc is estimated at 0.52 for the period from April to October and 0.470 in the month of March.Figures 12-14 show the volumetric moisture content on selected dry and wet The section was let to dry up on two occasions, and the volumetric water content was as low as 0.150.It is worth mentioning that when watering an initially dry clay liner, the gain of moisture is quick, and the maximum vmc is reached in a short time.This is attributed to the high suction created within the clay-sand mixture.The average peak of the vmc is estimated at 0.52 for the period from April to October and 0.470 in the month of March.Figures 12-14 show the volumetric moisture content on selected dry and wet The section was let to dry up on two occasions, and the volumetric water content was as low as 0.150.It is worth mentioning that when watering an initially dry clay liner, the gain of moisture is quick, and the maximum vmc is reached in a short time.This is attributed to the high suction created within the clay-sand mixture.The average peak of the vmc is estimated at 0.52 for the period from April to October and 0.470 in the month of March.show the volumetric moisture content on selected dry and wet days.
tonite-enhanced section.It was found to take 80 h compared to 60 h needed for the natural clay of Al-Qatif.The advantage of adding clay is the decrease in the hydraulic conductivity.The study of hydraulic conductivity is not set as an objective of this research.The composition of the two sections is different, and the bentonite is found to have different electrical conductivity, which is likely to influence the measurement of the volumetric moisture content.The variation of the electrical conductivity was presented in previous research by Dafalla et al. [33] and is shown in Figure 16.tonite-enhanced section.It was found to take 80 h compared to 60 h needed for the natural clay of Al-Qatif.The advantage of adding clay is the decrease in the hydraulic conductivity.The study of hydraulic conductivity is not set as an objective of this research.The composition of the two sections is different, and the bentonite is found to have different electrical conductivity, which is likely to influence the measurement of the volumetric moisture content.The variation of the electrical conductivity was presented in previous research by Dafalla et al. [33] and is shown in Figure 16.For comparison purposes, another similar section consisting of 20% Al-Qatif clay enhanced by replacing one-third of the local clay with bentonite was constructed in close vicinity.This section was watered on the same dates as the main section presented in this study.Figure 15 presents the response to watering in both sections.The trend is almost similar, but the time required to dry out to the original moisture is longer for the bentonite-enhanced section.It was found to take 80 h compared to 60 h needed for the natural clay of Al-Qatif.The advantage of adding clay is the decrease in the hydraulic conductivity.The study of hydraulic conductivity is not set as an objective of this research.The composition of the two sections is different, and the bentonite is found to have different electrical conductivity, which is likely to influence the measurement of the volumetric moisture content.The variation of the electrical conductivity was presented in previous research by Dafalla et al. [33] and is shown in Figure 16.Although soil moisture interaction is a very complex phenomenon, it has recently become easier to study and has aided researchers, geotechnical engineers, and the landscape industry in conducting more insightful studies.This is due to the advent of digital sensors that can measure electrical conductivity and moisture content, soil behavior prediction, and weather-related temperature and moisture monitoring at all times.When there is insufficient water for plants or when the soil is drying out, automatic irrigation pumps may be turned on.When used in clay-sand liners, the data obtained from these sensors can provide useful information to environmental engineers.The trends and variations in the temperature profile of clay-sand mixtures can be readily obtained.It is worth mentioning here that the output of 5TE sensors is calibrated by the manufacturer.The manufacturers conduct calibrations to make sure that the system is valid for a wide range of soils.Assessment was conducted by other researchers to validate the use of these sensors [8,34].It is common to measure the gravimetric moisture content rather than the volumetric moisture content.According to reference [34], these sensors functioned correctly at salinities of 2.42 dS m −1 and temperatures of 25 °C, respectively.When the temperature and salinity of the soil were between 16 and 30 °C and 1.9 and 2.75 dS m −1 , respectively, this sensor generally produced satisfactory findings (using data processed by manufacturer programming).
Conclusions
The use of 5TE sensors linked with a subsurface liner was found to be a promising system for saving irrigation water and monitoring the subsurface moisture content.The 5TE sensor is a TDR type, which is found to be an effective, time-saving, and early-warning tool for watering or dewatering excess water from underneath green parks.
The knowledge of the temperature fluctuation of the near-surface soil will aid in the convenient use of the facility.
The information obtained from the field log can help in determining long, relaxing periods.During the winter, green spaces with lots of clay will retain heat for a few hours and offer comparatively mild evenings.During the summer, the mixture will stay cool for a portion of the day.
Repeated watering tests confirmed a stable measuring system and reliable performance.The set-up suggested for saving irrigation water includes drainage and water Although soil moisture interaction is a very complex phenomenon, it has recently become easier to study and has aided researchers, geotechnical engineers, and the landscape industry in conducting more insightful studies.This is due to the advent of digital sensors that can measure electrical conductivity and moisture content, soil behavior prediction, and weather-related temperature and moisture monitoring at all times.When there is insufficient water for plants or when the soil is drying out, automatic irrigation pumps may be turned on.When used in clay-sand liners, the data obtained from these sensors can provide useful information to environmental engineers.The trends and variations in the temperature profile of clay-sand mixtures can be readily obtained.It is worth mentioning here that the output of 5TE sensors is calibrated by the manufacturer.The manufacturers conduct calibrations to make sure that the system is valid for a wide range of soils.Assessment was conducted by other researchers to validate the use of these sensors [8,34].It is common to measure the gravimetric moisture content rather than the volumetric moisture content.According to reference [34], these sensors functioned correctly at salinities of 2.42 dS m −1 and temperatures of 25 • C, respectively.When the temperature and salinity of the soil were between 16 and 30 • C and 1.9 and 2.75 dS m −1 , respectively, this sensor generally produced satisfactory findings (using data processed by manufacturer programming).
Conclusions
The use of 5TE sensors linked with a subsurface liner was found to be a promising system for saving irrigation water and monitoring the subsurface moisture content.The 5TE sensor is a TDR type, which is found to be an effective, time-saving, and early-warning tool for watering or dewatering excess water from underneath green parks.
The knowledge of the temperature fluctuation of the near-surface soil will aid in the convenient use of the facility.
The information obtained from the field log can help in determining long, relaxing periods.During the winter, green spaces with lots of clay will retain heat for a few hours and offer comparatively mild evenings.During the summer, the mixture will stay cool for a portion of the day.
Repeated watering tests confirmed a stable measuring system and reliable performance.The set-up suggested for saving irrigation water includes drainage and water collection, which needs to be addressed in future studies to select appropriate slopes and water-collection vessels.Also, a system for triggering automatic pumps needs to be investigated based on the plant's needs and watering periods.The clay-sand liner volumetric water content can reach a value of 0.15 when no watering or wetting is conducted over a period of a few days.The electrical conductivity of the bentonite-enhanced clay-sand liner can reach a value of 3.7 mS/cm, while the Al-Qatif clay reported a maximum of 2.3 mS/cm.
Installing this sensor type is practical and can provide continuous data and information that enable quick decisions needed for watering grass and the collection of surplus water using automatic or manually controlled pumps.Future works can explore the effect of humidity on the clay-sand liner and also the hydraulic conductivity using this technology.Introduction of a control panel system may also be an option to enable automatic pumps.The data presented in this study are limited to the type of clay used but can be used for estimating the behavior of other clays with a similar mineralogy or properties.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Figure 2 .
Figure 2. A weather station constructed on site.
Figure 2 .
Figure 2. A weather station constructed on site.
Figure 2 .
Figure 2. A weather station constructed on site.
Figure 4 .
Figure 4. Typical layers of the field section.
Figure 4 .
Figure 4. Typical layers of the field section.
Figure 5 .
Figure 5. Photographs showing the site and clay material excavation works.
Figure 6 .
Figure 6.Maximum dry density versus moisture content for the clay-sand mixture with 20% clay and for Al-Qatif clay.
Figure 5 .
Figure 5. Photographs showing the site and clay material excavation works.
Figure 6 .
Figure 6.Maximum dry density versus moisture content for the clay-sand mixture with 20% clay and for Al-Qatif clay.
Figure 7 .
Figure 7. Maximum and minimum daily temperature over 6-month period.
Figure 7 .
Figure 7. Maximum and minimum daily temperature over 6-month period.
Figure 8 .
Figure 8. Temperature profile of 24 h of moderate weather in March.
Figure 9 .
Figure 9. Temperature profile of 24 h of hot weather in October.
Figure 8 .
Figure 8. Temperature profile of 24 h of moderate weather in March.
Figure 8 .
Figure 8. Temperature profile of 24 h of moderate weather in March.
Figure 9 .
Figure 9. Temperature profile of 24 h of hot weather in October.
Figure 9 .
Figure 9. Temperature profile of 24 h of hot weather in October.
Sensors 2024 , 16 Figure 10 .
Figure 10.The overall water supplied to the Al-Qatif clay liner section over six months.
Figure 11 .
Figure 11.Trend of moisture variation over six months.
Figure 10 . 16 Figure 10 .
Figure 10.The overall water supplied to the Al-Qatif clay liner section over six months.
Figure 11 .
Figure 11.Trend of moisture variation over six months.
Figure 11 .
Figure 11.Trend of moisture variation over six months.
Figure 15 .
Figure 15.Volumetric moisture content compared with the Al-Qatif clay liner and a bentonite-enhanced Al-Qatif clay liner.
Figure 15 .Figure 15 .
Figure 15.Volumetric moisture content compared with the Al-Qatif clay liner and a bentonite-enhanced Al-Qatif clay liner.
Figure 16 .
Figure 16.Differences in electrical conductivity between the Al-Qatif clay and the bentonite clay, as measured for a 24 h period.
Figure 16 .
Figure 16.Differences in electrical conductivity between the Al-Qatif clay and the bentonite clay, as measured for a 24 h period.
Table 2 .
Chemical composition of Al-Qatif clay and bentonite clay.
Table 2 .
Chemical composition of Al-Qatif clay and bentonite clay.
Table 4 .
Summary of 29 direct watering tests, vmc, and quantities of water supplied.
• C to 40 • C. Rainfall was traces, and no records above zero were reported. | 2024-05-30T15:27:20.038Z | 2024-05-28T00:00:00.000 | {
"year": 2024,
"sha1": "b6ad2b68482da4ab585b52b2d9df096ff8415d67",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/24/11/3479/pdf?version=1716893001",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1dd50a0120bd8513bdd30b5d798bb03f904ff2fe",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": []
} |
16799 | pes2o/s2orc | v3-fos-license | Topographical Anisotropy and Wetting of Ground Stainless Steel Surfaces
Microscopic and physico-chemical methods were used for a comprehensive surface characterization of different mechanically modified stainless steel surfaces. The surfaces were analyzed using high-resolution confocal microscopy, resulting in detailed information about the topographic properties. In addition, static water contact angle measurements were carried out to characterize the surface heterogeneity of the samples. The effect of morphological anisotropy on water contact angle anisotropy was investigated. The correlation between topography and wetting was studied by means of a model of wetting proposed in the present work, that allows quantifying the air volume of the interface water drop-stainless steel surface.
Introduction
The quantitative description of the microstructure and the surface topography is a research field, which can provide a better understanding of the relation between surface topography, microstructure, and mechanical and physical-chemical properties. Surfaces of materials contain information about the mechanism of their formation as well as the factors that have influence on this mechanism. Besides, the surface morphology of a material can essentially influence its functional character. In many cases, OPEN ACCESS a systematic surface characterization is necessary to set up quantitative correlations between production conditions and physico-chemical properties of engineering surfaces to compare the resulting surface with standards and to model surface behavior.
During the last 30 years, the possibilities for surface topography quantification have been broadened by the availability of new methods [1]. For the evaluation of topographical data, several mathematical operations such as calculation algorithms and standard parameters can be applied today [2]. For this reason, the selection of the correct methodology while evaluating the data measured and the optimal use of the topographical information obtained is especially relevant.
For any type of modification of a technical surface, the interplay between topography and surface chemistry determines the surface properties. Therefore, topographic qualitative description (morphology) and its quantitative description (topometry) are of great importance. Every modification can produce changes on the surface in a special way. Additional to the nature of the process (mechanical, optical, electrical, magnetic, chemical and biological), the duration of its effect and external mechanical/environmental influences must be considered in general [3]. The resulting topography correlates to nanoscopic, microscopic and macroscopic properties, which in combination define the final surface properties.
This paper is focuses on investigating the effect of morphological anisotropy on water contact angle. The correlation between topography and wetting of the grounded surfaces of stainless steel was investigated by means of a mathematical model, which is proposed in the present work to describe the wetting properties. In addition, this new model allows the calculation of the enclosed air volume in the interface between water and stainless steel. Recently, Ishino et al. [4] proposed a model to describe the transition states between metastable contacts and to quantify the energy barriers between them. With the use of phase diagrams in the two dimensional space of texture parameters they postulate transitional stages between the different wetting regimes. Further, Kioshi et al. [5] presented a simulation evidence of coexisting Wenzel/Cassie [6,7] state for water droplets on a pillared hydrophobic surface. According to their results, a critical pillar height exists beyond which water droplets on pillared hydrophobic surfaces can be in the bistable Wenzel/Cassie state, depending on the initial condition of the droplets. To reach these results, Kioshi et al. computed the free-energy barrier separating the Wenzel and Cassie states on the molecular level, based on a statistical mechanics method.
The model presented in this paper is a validation of that presented by Kioshi et al for the molecular scale, adapted to the microscopic scale.
Materials
An untreated stainless steel sample and six ground samples with different roughness values were used in this study ( Table 1). The material examined was an austenitic stainless steel ANSI 316L 2B, which is notable for its face-centered cubic structure that improves the ductility and high corrosion resistance compared to plain carbon steels. Grinding damages the crystallite structures and dramatically changes the topography by giving the surface a defined anisotropy.
Topographic Characterization
The topographic characterization was realized by means of high-resolution scandisk confocal microscopy (SDCM). This is an optical imaging technique used to increase micrograph contrast and/or to reconstruct three-dimensional images by using a spatial pinhole to eliminate out-of-focus light or flare in specimens that are thicker than the focal plane (United States Patent 6824056) [8]. This method allows a fast 3D measurement of topography, structure and roughness with excellent height resolution and depth of field. In this study, a µSurf (Nanofocus AG, Germany) device was used. To characterize the stainless steel samples, a cut-off length of L m = 260 µm, a lateral resolution of x = 0.3 µm and a vertical resolution of z = 2-6 nm was used. Four measurements on different positions were done on each sample.
Characterization of Wettability
Wettability was characterized by means of static contact angle measurements. To determine the static contact angle , a measuring device OCA 40 Micro (Data Physics, Germany) was used. Prior to the characterization of wetting, the effect of gravity on the water drop volume was investigated. For this purpose, different drop volumes were applied on Sample 1. The contact angles were observed in the sanding direction S of the stainless steel and perpendicular T to this direction ( Figure 1). According to results shown in Figure 2, there was no significant effect of gravity for volumes equal or less than 30 µL. For this reason, 30 µL deionized water drops were used for the characterization of the wettability. The drops were placed on the surface with the help of a microliter syringe. Thereafter, the needle of the syringe was withdrawn from the drops and the contact angles were determined with the help of the software SCA 20th. Five water drops were applied to each specimen to determine the static contact angles observed in direction S and five measurements were made perpendicular T to this direction.
Results and Discussion
Unidirectional machine grinding of untreated samples with different grains (Table 1) resulted in samples of six different topographies. These differences can be appreciated using the surface arithmetic mean roughness S a (DIN EN ISO 25178) [9], as shown in Figure 3. As will be shown later, for a more complete study of the topography, two additional parameters were used: the surface area ratio-also known as Wenzel factor-and the reduced roughness.
The dependence of the static contact angle on the direction of the measurement has previously been studied by Shuttleworth and Bailey [10], Chen et al. [11] and by Neuhaus et al. [12]. In the present study, all ground surfaces showed important differences between contact angles measured in T and S directions. This contact angle anisotropy is proportional to the surface arithmetic mean roughness S a , as shown in Figure 3. Rougher samples show larger wetting anisotropy than the smoother ones. The untreated sample, which is almost isotropic, shows the smallest difference between T and S . Contact angles observed in T direction are, except in the case of Sample 1, smaller than those observed in S direction because grooves drive the water by capillary force. In both directions, minima are observed at the roughness value corresponding to Sample 4 ( Figure 4). The microscopic grooves of stainless steel surface formed during grinding determine the geometry of the water droplet boundary. This effect of the topographic anisotropy can be seen by means of the confocal microscope images ( Figure 5 and 6). Water is detained in the places where it flows perpendicular to the groves (Figures 5a,b and 6f), but tends to flow along the groves by capillary force (Figure 6h). On the intermediate contact lines, neither perpendicular nor parallel to the groves (Figure 6d,g), the water boundary is far more regular. Although the ground/polished Sample 1 is the least rough, its boundaries are not at all regular. Unlike the rest, water tends to spread in the direction perpendicular to the grooves (Figure 6c) resulting in a static contact angle of θ T > θ S , as shown in Figure 4.
To better understand the effect of topography and surface anisotropy on wetting, we can decompose the geometry of the droplet into two orthogonal components T and S, according to Figure 7. Using this model, it is possible to describe the effect of the topographical anisotropy on wetting by quantifying the topographical anisotropy by means of the reduced parameter R* and the reduced contact angle anisotropy θ* (Figure 8): where R aT and R aS are the 2D-arithmetic roughness (DIN 4768, ASME B46.1) of the surface profiles measured in direction T and S, respectively. The use of R* and θ* is intended to measure the anisotropy by quantifying the differences between the parameters measured in both directions and relating these differences to the direction with the smoother topography, used as local reference. According to Figure 8 there is a significant relationship between the topographic anisotropy and wettability in both directions. Once we have proved this relationship, it is possible to use this S-T components model to investigate the wetting regime on these modified steel surfaces.
Considering that wetting is complete on the whole surface, Young's equation could be used to calculate the contact angle of a rough, chemically homogeneous surface by using the roughness factor r introduced by Wenzel in 1936 [6] and defined as the ratio of the actual area of a rough surface to the geometric projected area on the horizontal plane: whereθ w is the apparent-measured and equilibrium and θ is the real-Young-contact angle. If the roughness of a surface is completely isotropic, then R aS = R aT and in consequence the apparent angles have the same value: cos θ wT = cos θ wS . But for an anisotropic surface in a complete wetting regime, it should be valid that only the real contact angels are the same: cos θ T = cos θ S , and hence: Where r S and r T are the two-dimensional Wenzel factors-the length of the profile perimeters-along the S and T direction, respectively. For anisotropic surfaces it holds that r T 1 and r S is the Wenzel factor of the whole surface, r. Thus, However, by applying Equation (5) to the available data no correlation was found, indicating that, although the angles are slightly lower than 90 degrees, complete wetting cannot be considered in all the interfacial areas.
The apparent contact angles θ wT measured in T direction at the position marked in Figure 9a are the smallest because, in these regions of the drop boundaries, the liquid is in complete contact with the surface. For this reason we can assume a complete wetting around the contact points where the angles were measured in T direction. On the contrary, the apparent contact angles measured in S direction correspond to a partial wetting regime. As demonstrated above, if we consider partial wetting in the S direction but complete wetting on the drop boundaries in the T direction, it is possible to apply the Cassie and Baxter model [7,13] to the S direction. This model considers the wettability of a composite surface composed of two types of homogeneous patches that have different solid-fluid interfacial tensions. The apparent contact angle is then given by: where f i and i represent the surface area fraction and the contact angle of patch i, respectively. For porous or corrugated surfaces, the roughness is mainly filled with air. The openings of the pores can be regarded as nonwetting patches with 2 = 180°. Since f 2 = 1 -f 1 , Equation (6) is: where f is the quotient of the contact area surface and the projected area on the horizontal plane.
Equations (3) and (7) can be combined to obtain an expression to the solid fraction that considers the partial wetting along the S direction and complete wetting near to the boundaries of the T direction, as showed in Figure 9. Thus, The solid fractions can now be calculated using the static contact angles. Indeed, Figure 10 shows that the surface anisotropy (R*) controls the partial wetting for Samples 2 to 6. Sample 1 presents, according to this model, almost complete wetting (f = 1). At the opposite end, Sample 6 is in the minimum of solid fraction, with only 67% of its surface being in contact with water. Figure 11 can help to better understand the morphological differences between the surfaces of the samples by comparing their profiles in S direction. According to Figure 11, Samples 3 and 4 have similar profiles, therefore their Wenzel factors-i.e., their perimeter profile lengths-shown in Figure 9 are almost the same. The profile of Sample 6 has higher peaks than that of Sample 5, but the distances between the peaks of Sample 6 are clearly larger and therefore its profile length-Wenzel factor-is relatively lower than that of Sample 5. Nevertheless Sample 6 has a higher topographical anisotropy (R*)-difference between R aT and R aS -than Sample 5 (see Figure 10).
With the solid fraction values obtained using Equation (8), it is possible to estimate the size of the air volumes trapped in the interface of the surface and the water droplet. But for this purpose it is necessary to construct the curves "solid fraction vs. height". The topographical data can be used as input to calculate the solid fraction f at different height levels using the software FRT Mark III (v.3.8.10). Using this procedure, we constructed the curves h f, where h is the height level with respect to the mean height ( Figure 12). A very good function to correlate the points obtained is the sigmoidal "DoseResp" curve (Origin software v.8.61) that provides R 2 coefficients from 0.999 to 1: where a, b, c and p are correlation constants. Results are listed in Table 2. By interpolating the solid fraction sigmoidals of Figure 12 with the solid fractions reported in Figure 10 it is possible to obtain the height of the interfacial air of each sample. Using again the software FRT Mark III (v. 3.8.10) it was possible to calculate the air volume trapped in the interfaces (Figure 13), i.e., the void volume between an imaginary plane at height h and the surface of the sample. Finally, using the topographical data, it is possible to represent the profile and the air interface height of the surfaces investigated as shown Figure 14 for Sample 5. Table 2).
Conclusions
The effect of topographical anisotropy of a stainless steel surface on water contact angle was investigated. The correlation between topography and wetting can be described by the T-S Model proposed in the present work. This model, which can be summarized by Equation 8, links the contact angle information measured in both perpendicular directions and the Wenzel roughness as well as the solid fraction. Using this model it was possible, independent of the relative hydrophilic surface of the steel (static contact angles <90°), to show that the wetting on the steel surface was not complete due to the air present in the deep cavities of the surface.
The apparent contact angle T measured in T direction (parallel to grooves direction) is the smallest possible for the ground samples because in these regions of the drop boundaries, the liquid is in complete contact with the bottom surface of the grooves. For this reason we can conclude that wetting is complete around the contact points where the angles were measured in T direction (green regions in Figure 15). On the contrary, the apparent contact angles measured in S direction (perpendicular to grooves direction) correspond to a partial wetting regime (dark regions in Figure 15). Figure 15. Complete wetting around the contact points where the angles were measured in T direction (green regions). The apparent contact angles measured in S direction correspond to a partial wetting regime (dark regions).
According to the above, our model is a validation of that presented by Kioshi et al [5] for the molecular scale to the microscopic scale (see Section 1). These authors suggested the existence of transition conditions between Wenzel and Cassie-Baxter regimes, which was experimentally confirmed in this study. Finally, a further application of the model presented in this article is to quantify the air present between solid and liquid phases using the topographic information and the measurements of contact angle anisotropy. | 2018-05-30T22:08:51.940Z | 2012-12-01T00:00:00.000 | {
"year": 2012,
"sha1": "54b5dd7ac22f118c3fcec1bfb6da1c2ea513e941",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/5/12/2773/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "54b5dd7ac22f118c3fcec1bfb6da1c2ea513e941",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
245390011 | pes2o/s2orc | v3-fos-license | The Impact of COVID-19 on Sleep Quality in People Living With Disabilities
Background: Research exploring the impact of the COVID-19 pandemic on sleep in people with disabilities has been scarce. This study provides a preliminary assessment of sleep in people with disabilities, across two timepoints during the pandemic, with a focus on those with visual impairment (VI). Methods: Two online surveys were conducted between April 2020 and March 2021 to explore sleep quality using the Pittsburgh Sleep Quality Index (PSQI). A convenience sample of 602 participants completed the first survey and 160 completed the follow-up survey. Results: Across both timepoints, participants with disabilities reported significantly poorer global sleep quality and higher levels of sleep disturbance, use of sleep medication and daytime dysfunction than those with no disabilities. Participants with VI reported significantly higher levels of sleep disturbance and use of sleep medication at both timepoints, poorer global sleep quality, sleep duration and latency at time 1, and daytime dysfunction at time 2, than those with no disabilities. Global sleep quality, sleep duration, sleep efficiency, and self-rated sleep quality deteriorated significantly in participants with no disabilities, but daytime dysfunction increased in all three groups. Disability and state anxiety were significant predictors of sleep quality across both surveys. Conclusion: While sleep was consistently poorer in people with disabilities such as VI, it appears that the COVID-19 pandemic has had a greater impact on sleep in people with no disabilities. State anxiety and, to a lesser extent, disability, were significant predictors of sleep across both surveys, suggesting the need to address anxiety in interventions targeted toward improving sleep.
INTRODUCTION
The COVID-19 pandemic has impacted people's lifestyles and routines worldwide. In the initial absence of a vaccine, governments across the globe introduced measures such as mask-wearing, social distancing, shielding, self-isolation, and quarantining to reduce the spread of the coronavirus. A number of countries, including the United Kingdom, introduced government-mandated "lockdowns, " which limited movement and social contact. Unsurprisingly, there has been a growing focus on the mental and physical health impacts of these restrictions, including those relating to sleep (Groarke et al., 2020;Pérez-Carbonell et al., 2020;Killgore et al., 2021). Poor sleep has been associated with poorer health-related quality of life (Lo and Lee, 2012), diminished cognitive functioning, and poor mental health, including increased incidence of anxiety and depression (Benitez and Gunstad, 2012;Gadie et al., 2017;Ingram et al., 2020). Changes in sleep duration have also been linked to increased alcohol consumption (Neill et al., 2020) and long-term health effects, including the incidence of physical health conditions such as hypertension, activation of the sympathetic nervous system, impaired glucose control, and increased inflammation (Alvarez and Ayas, 2004).
Research suggests that self-reported sleep patterns, sleep duration, and sleep quality have all worsened under lockdown (Gupta et al., 2020;Pérez-Carbonell et al., 2020). A study conducted in the United Kingdom in May 2020 found that 50% of participants reported that their sleep had been disrupted more than usual, 39% reported that they had been sleeping fewer hours per night compared to before the lockdown, and 29% reported sleeping longer hours but feeling less rested (King's College London and IPSOS Mori, 2020). One proposed reason for the observed impact of the pandemic on sleep is disruption to circadian rhythms, as a result of increased time spent indoors and, consequently, less daylight exposure (Cardinali et al., 2020;Morin et al., 2020). Lifestyle and lifestyle changes prompted by the pandemic, such as disruption to daily physical activity, but not low levels of physical activity (Diniz et al., 2020;Gupta et al., 2020), and alcohol consumption (Romero-Blanco et al., 2020;Robillard et al., 2021), but not changes in alcohol consumption (Ingram et al., 2020), have been found to impact sleep during the pandemic. Anxiety, depression, and stress brought on by the pandemic may have further contributed to irregular sleep patterns (Altena et al., 2020;Evans et al., 2021;Robillard et al., 2021;Villadsen et al., 2021). Research from China found that stress and anxiety were associated with poorer sleep quality in people with lower levels of social capital (i.e., a sense of trust, belonging, and participation within society) who were self-isolating for 14 days at the beginning of the pandemic (Xiao et al., 2020). In addition, chronic illnesses such as hypertension, diabetes, and arthritis have also been linked to sleep difficulties during the pandemic (Robillard et al., 2021). In the United Kingdom, those most at risk due to underlying health conditions were instructed to "shield, " which meant no social contact for long periods of time. Shielding has been associated with poor sleep, with more people in the shielding group than expected experiencing poorer sleep (Ingram et al., 2020). One factor at play may be loneliness, which has previously been found to have a reciprocal effect on sleep; higher levels of loneliness correlate with higher levels of disturbed sleep (Griffin et al., 2020;Groarke et al., 2020). Indeed, loneliness has been identified as a contributing factor in clinical insomnia during the pandemic (Kokou-Kpolou et al., 2020).
Visual impairment (VI) typically refers to reduced vision, which is not correctable with glasses, contact lenses, or surgery, and can range from mild to severe vision loss or blindness. Blind people with no light perception may be at particular risk of poor sleep quality and short sleep duration (Leger et al., 1999;Peltzer and Phaswana-Mafuya, 2017;Hartley et al., 2018). This may be a result of disruption to circadian rhythms, responsible for the sleep-wake cycles, due to a reduction or complete lack of light information being relayed from the eye to the master circadian clock in the hypothalamus (Lockley et al., 2007). Research conducted prior to the pandemic found a higher incidence of disrupted sleep, greater sleep latency, shorter sleep duration, greater daytime disruption, irregular sleep patterns, and difficulty maintaining sleep in people living with VI, particularly those with no light perception, compared to controls (Tabandeh et al., 1998;Tamura et al., 2016). Poor sleep has also been found in people living with other types of disability such as intellectual and developmental disabilities (Richdale and Baker, 2014), hearing difficulties (Test et al., 2011), and traumatic brain injuries (Castriotta et al., 2007;Ouellet et al., 2015).
Very little research has considered the impact of the pandemic on sleep in people with disabilities such as VI, although the existing evidence indicates that disability may have impacted on sleep at this time. One study reported that people living with a disability were more likely to report shorter sleep duration (<6 h) than those without a disability (Pérez-Carbonell et al., 2020), and another reported a high prevalence of insomnia (71%) in those with disabilities, including VI (Necho et al., 2020). Given the impact of sleep on mental and physical health, it is important to understand how the pandemic has affected the sleep quality of those living with disabilities such as VI. The present paper sets out to explore sleep quality in people with disabilities, with a focus on those living with VI, as the pandemic progressed.
MATERIALS AND METHODS
Longitudinal data were collected in two online surveys conducted first between March and April 2020 (T1) and a follow-up conducted in March 2021 (T2). Additional details of methods and findings relating to the sample used in this study are reported in Heinze et al. (2021) and are available to supplement the findings in this article.
Materials
The online survey was developed in Microsoft Forms (Microsoft Corporation, Redmond, WA)
Frontiers in Psychology | www.frontiersin.org team at Blind Veterans UK, a charity supporting British veterans with sight loss, in collaboration with the University of Oxford. The survey platform was selected due to its accessibility features for participants with VI including compatibility with screen readers, color contrast, and high contrast settings. The survey was further made accessible by splitting grid questions across individual pages, so that participants were shown only one question per page to ensure ease of reading.
In addition to participant information, consent, and demographics, the questionnaire consisted of four sections: current life circumstances (e.g., employment and self-isolation status); health and health behaviors (e.g., disability, alcohol consumption); sleep quality; and social well-being (loneliness, anxiety). The questionnaire was amended for T2 to improve data quality (examples given below) and reduce participant burden.
Disability Status
At T1, disability status was assessed by first asking participants if they had a disability followed by a question which instructed them to select all types of disability that applied to them from a list of 16 conditions including "VI or blindness" (see Table 1). At T2, participants were asked if they considered themselves to have a disability followed by a grid question which required them to select "Yes, " "No, " or "Prefer not to say" for each of the 16 conditions. As a result, the mean number of conditions reported increased from 2 at T1 to 3 at T2. As many as six additional participants reported having conditions such as disability affecting mobility or mental health conditions at T2. However, one person indicated having limb loss at T1 but not T2.
Self-Isolation
Self-isolation status was assessed with a single question which asked participants to indicate how long they had been selfisolating from a list of response options which included "I am a keyworker/not able to self-isolate" and "I do not selfisolate" and ranged from "0-2 weeks" to "Over 12 weeks" at T1 and from "0-2 weeks" to "Over 6 months" at T2.
Alcohol Consumption
At T1, alcohol consumption was assessed by two questions asking participants if they drank alcohol, followed by how often they had been drinking alcohol, with response options ranging from "Once a week" to "Every day." At T2, the questions were combined into a single question which asked how often participants had been drinking alcohol over the last 3 weeks and which included the response option "I do not drink alcohol."
State Anxiety
State anxiety was assessed using the 20-item state anxiety subscale (STAI-S) of the State Trait Anxiety Index (Spielberger, 1970(Spielberger, , 1983. Only state anxiety, as opposed to trait anxiety, was measured in this study. In addition to ensuring brevity of the survey, this was to determine current feelings of anxiety at different timepoints during the pandemic, instead of an individual's proclivity to experience anxiety. The STAI-S consists of 10 positively and 10 negatively worded statements. Respondents are instructed to indicate how they are feeling "right now" on a scale of 1 (Not at all) to 4 (Very much). Positively worded items are reverse-scored, and all scale responses are summed to derive a subscale score ranging from 20 to 80, with higher scores indicative of greater state anxiety. Spielberger et al. (1983) reported excellent internal validity of the STAI-S with a median alpha coefficient of α = 0.93 (ranging from α = 0.86 to 0.95) for samples of working-age adults, college students, high school students, and military recruits and relatively poor test-retest reliability with a median correlation of r = 0.33 (ranging from r = 0.16 to r = 0.62) for samples of college and high school students ascribed to the temporary nature of state anxiety.
Loneliness
Loneliness was assessed using version 3 of the UCLA Loneliness scale (Russell, 1996). The scale consists of 20 items that measure self-reported feelings of loneliness and social isolation. Scale responses are summed to generate a loneliness score ranging from 20 to 80, with higher scores indicative of higher levels of loneliness. Russell (1996) reported high internal validity for different sample populations ranging from α = 0.89 (in samples of elderly and teachers) to α = 0.94 (in a sample of nurses) and a test-retest reliability over 1 year of r = 0.73 for a sample of elderly.
Sleep Quality
Sleep quality over the last month was assessed using the Pittsburgh Sleep Quality Index (PSQI; Buysse et al., 1989). The PSQI is a self-report measure consisting of 19 items, which are used to derive seven component scores (self-reported sleep quality, sleep latency, sleep duration, sleep efficiency, sleep disturbance, use of sleep medication, and daytime dysfunction). The component scores are summed to derive a global PSQI score ranging from 0 to 21, with higher scores indicating worse sleep quality. Buysse et al. (1989) reported an internal consistency of α = 0.83 for the PSQI and test-retest reliability of r = 0.87 for the global PSQI score. Respondents with a global PSQI score of >5 are categorized as poor sleepers (sleep outcome). Sleep outcome has a diagnostic sensitivity of 89.6% and specificity of 86.5% for distinguishing between good and poor sleepers (Buysse et al., 1989).
Recruitment
Data collection for T1 took place between April 1, 2020, and May 15, 2020. A convenience sample was recruited through the researchers' personal and professional networks, social media, and professional forums. Participants who had consented to being recontacted and had provided a valid email address were invited to take part in T2. Data collection for T2 took place between March 8, 2021 and March 28, 2021. A small number of participants (n < 9) across both timepoints were unable to complete the questionnaire by themselves and instead completed it over the telephone with a researcher reading out the questions and entering the responses given.
Procedure
The Medical Sciences Interdivisional Research Ethics Committee at the University of Oxford advised that ethical approval was not required for this study. Participants accessed the survey via a clickable link embedded in the study invitation. At the start of both surveys, participants were provided with detailed information about the study objective and their rights as research participants. Participants were then asked to provide informed consent to take part in the research by agreeing or disagreeing to a list of consent statements. Participants were able to select if they wanted to answer or skip each of the four main sections, and "Prefer not to say" options were given at most questions. At the end of T1, participants were asked to provide contact details if they consented to being re-contacted for follow-up research.
Analysis
Duplicates and records without responses were removed from the dataset before analysis. Responses were treated as missing and excluded from the analysis where participants had chosen to skip a section, selected "Prefer not to say," had not responded to a survey item, or had responded "Other" to questions on sleep disturbance and time taken to fall asleep (this option was included to account for non-normative experiences such as those associated with being bedridden). No global PSQI score was calculated for participants with a missing component score. Proportions were calculated based on the total number of participants giving a valid response at a question excluding those who selected "Prefer not to say" or skipped the question. Due to a typographical error, the STAI-S scale item Q4 was presented with an incorrect adjective at T1. This was corrected for T2. As a result of this error, a revised anxiety score was calculated for both surveys, which excluded the incorrect item Q4, Cronbach's α = 0.96 for T1 and T2, respectively. The revised scores were used for descriptive statistics for T1 and T2 and regression analyses.
The aim of this study was to assess sleep quality in individuals with disabilities in general, with a focus on those living with VI. Subgroup analysis therefore initially compared participants who reported having one or more types of disability (including those who reported having VI) to participants who reported having no disabilities, and then participants who reported having "VI or blindness" to those who reported no disabilities. Due to small sample sizes in T2 (nine participants reported VI only), it was not possible to control for other disabilities in the VI group. Thus, the group reporting any type of disability included participants with VI and without VI, and the VI group contained participants with comorbid disabilities.
Global PSQI sleep scores were not normally distributed for the three subgroups, as assessed by Shapiro-Wilk's test (p < 0.05). As a result, nonparametric tests were used to assess betweenand within-group differences, and medians and interquartile ranges (IQR) are reported in addition to means and standard deviations (SD).
Analysis sought to address three questions: 1. If and how participants with any type of disability and those with VI differed from participants with no disabilities at the two timepoints. To address this, descriptive statistics including mean and SD as well as median and IQR are reported for participants with one or more disabilities, participants with VI and participants with no disabilities; Chi-square tests were used to assess sleep outcome in participants with one or more disabilities vs. participants with no disabilities and participants with VI vs. participants with no disabilities. Mann-Whitney U tests were used to assess between-group differences in global PSQI scores and PSQI component scores between participants with one or more disabilities vs. participants with no disabilities and participants with VI vs. participants with no disabilities. 2. If and how sleep quality changed between the two surveys within each subgroup. Wilcoxon signed-rank or sign tests were used to explore within-group differences between T1 and T2 global PSQI scores and PSQI component scores in participants with one or more disabilities, participants with VI and participants with no disabilities, respectively. 3. What factors predicted sleep quality at both timepoints, and, in particular, whether disability predicted sleep quality when controlling for other factors. A hierarchical linear regression was conducted at T1 and repeated at T2 to identify consistent factors. Table 2 provides an overview of participant characteristics in both surveys. After removing duplicates and surveys which yielded no responses, a total of 602 participants completed T1. The majority of these were white, male, aged 46-55, and in paid employment. Participants resided in 22 different countries, predominantly the United Kingdom. The majority of participants had been self-isolating for 2-4 weeks and were not drinking alcohol. Mean loneliness was 42.54 (SD = 13.91), and mean state anxiety using the revised score was 40.52 (SD = 13.87).
Participant Characteristics
Full results for loneliness have been reported elsewhere (Heinze et al., 2021), and manuscripts reporting results for health behaviors (including alcohol consumption and self-isolation) and state anxiety have been submitted for publication. In total, 329 T1 participants were invited to take part in T2, 163 yielded responses (49.5% response rate). After removing cases who did not wish to take part in the research (n = 2) and duplicates (n = 1), a total of 160 individuals completed T2. There were no statistically significant differences between T1 participants who were invited to but did not complete T2 and those who completed T2 in terms of sex, age group, ethnicity, continent of residence (which was compared due the small numbers resident in countries outside of the United Kingdom), and employment status. The majority of T2 participants were white, female, aged 46-55, and in paid employment. One participant had lost their job during COVID-19 and was currently looking for work. Participants resided in nine different countries, the majority in the UK. The smaller sample likely resulted in reduced global distribution of participants compared to T1. The majority of participants were not self-isolating and did not drink alcohol. Mean loneliness was 42.18 (SD = 14.54), and mean state anxiety using the revised STAI score was 38.08 (SD = 14.27).
Disability and VI
Around two-thirds of participants in both surveys reported no disabilities and around a third reported having one or more types of disability (Table 1) with a maximum of eight distinct types of disability being reported by one participant. The most common disability at both timepoints was VI, followed by mental health issues, hearing impairment, and disability affecting mobility at T1, and disability affecting mobility, mental health issues, and medical conditions such as asthma, diabetes or epilepsy at T2. It should be noted that the prevalence of VI in both surveys is unsurprising considering the survey was sent to members of Blind Veterans UK and contacts within the sight loss sector. Among participants with VI, comorbidity was high at both timepoints, the most commonly reported comorbid conditions being hearing impairment (36%) and medical conditions (24%) at T1 and disability affecting mobility (48.6%) and hearing impairment (43.2%) at T2.
Group Differences in Sleep Quality
Overall, sleep quality over the past month was poor at both timepoints, particularly among those with disabilities. Participants with disabilities scored significantly poorer on median global Poor sleep was also more prevalent among participants with VI compared to those with no disabilities, but this was not statistically significant at either timepoint; 63.2% of those with VI were categorized as having poor sleep at T1, χ 2 (1, N = 485) = 1.78, p = 0.182, and 63.6% at T2, χ 2 (1, N = 136) = 0.42, p = 0.519. While mean global sleep quality was also worse in those with VI at both timepoints, median sleep quality was significantly worse at T1 only ( Table 4). Compared to those with no disabilities, participants with VI reported significantly more disturbed sleep and use of sleep medication at both timepoints, in addition to shorter sleep duration and greater sleep latency at T1, and increased daytime dysfunction at T2.
Changes in Sleep Quality Over Time
Global PSQI scores were available for both timepoints for 101 participants with no disabilities, 45 participants with one or more disabilities, and 30 participants with VI ( Table 5). There were no statistically significant changes in median global sleep quality and six of the component scores between T1 and T2 within participants with one or more disabilities, except for a statistically significant increase in daytime dysfunction. Aside from a small decrease in the use of sleep medication, the mean scores for global sleep quality (Figure 1) and the remaining six PSQI components all increased (Figure 2). The biggest increases in this group were observed for daytime dysfunction, sleep efficiency, and sleep duration, with the proportion of participants who reported getting <5 h of sleep increasing from 11.8% at T1 to 18.0% at T2. Similar trends were found when focusing on the VI group consisting of participants with VI only and those with VI and comorbid conditions. There were no statistically significant differences in median global PSQI scores and six of the seven PSQI component scores over time, but, as for the group of participants with one or more disabilities, there was a statistically significant increase in daytime dysfunction. Furthermore, mean scores increased for global PSQI sleep quality (Figure 1) and four of the seven PSQI components except for sleep latency, sleep disturbance, and use of sleep medication (Figure 2). The largest increase in this group was seen in sleep efficiency, while the proportion of participants with VI who rated their sleep quality as "very good" fell from 20.0% at T1 to 11.4% at T2. In contrast, participants with no disabilities reported significantly poorer global sleep quality, sleep duration, sleep efficiency, self-reported sleep quality, and daytime dysfunction at T2. Mean scores increased across all seven component scores except for sleep latency in this group (Figure 2), with the biggest mean increases observed for sleep efficiency, sleep duration, and self-reported sleep quality. For instance, the proportion of participants without disabilities getting 7 or more hours of actual sleep decreased from 79.0% at T1 to 61.5% at T2, while the proportion of those getting <5 h of sleep increased from 2.9% to 5.8%. Similarly, 6.7% rated their sleep quality as "very bad" in T2 compared to none at T1. In contrast, the proportion rating their sleep as "very good" fell from 21.0% at T1 to 17.1% at T2.
Predictors of Sleep Quality
A hierarchical linear regression was run to determine whether the addition of disability (having one or more disabilities vs. having no disabilities) predicted sleep quality when controlling for age and gender in the first step, and factors previously associated with sleep quality (see Table 6) in the second step. The full model of gender, age, state anxiety (revised), loneliness, self-isolation, alcohol consumption, and disability (Model 3) was statistically significant, F(7, 473) = 49.41, p < 0.001; adjusted R 2 = 0.414. The addition of state anxiety (revised), loneliness, self-isolation, and alcohol consumption in Model 2 explained an additional 37.4% of the variance in sleep quality above and beyond age and gender, F(6, 474) = 49.32, p < 0.001. The addition of disability in Model 3 accounted for an extra 3.8% of the variance in sleep quality. Higher levels of anxiety, loneliness, and having one or more disabilities significantly contributed to explaining sleep quality in the final model.
The procedure was repeated for T2 to determine whether the factors identified at T1 consistently predicted sleep quality (see Table 6). The full model of gender, age, state anxiety (revised), loneliness, self-isolation, alcohol consumption, and disability (Model 3) was statistically significant, F(7, 141) = 14.21, p < 0.001; adjusted R 2 = 0.385. The addition of anxiety, loneliness, self-isolation, and alcohol consumption in Model 2 explained an additional 33.1% of the variance in sleep quality above and beyond age and gender, F(6, 142) = 14.58, p < 0.001; adjusted R 2 = 0.355. The addition of disability in Model 3 accounted for an extra 3.3% of the variance in sleep quality. Being younger predicted sleep quality at T2. As at T1, higher levels of anxiety and having one or more disabilities significantly contributed to explaining sleep quality in the final model but, unlike T1, loneliness did not.
DISCUSSION
This paper set out to provide a preliminary assessment of sleep quality over time in individuals with disabilities, with a focus on those living with VI, during the COVID-19 pandemic. Overall, sleep quality was found to be consistently poorer in participants with disabilities, including those with VI, than in participants with no disabilities. Although it accounted only for a small amount of variance, disability emerged as a consistent predictor of sleep quality across both timepoints when controlling for age, gender, and other factors previously associated with sleep quality, such as alcohol consumption (Romero-Blanco et al., 2020;Robillard et al., 2021), anxiety (Xiao et al., 2020), and self-isolation (Pérez-Carbonell et al., 2020). Individuals with disabilities scored significantly worse across all seven PSQI components than those with no disability at T1 (April-May 2020), reflecting existing evidence of comparatively poorer sleep in individuals with a disability during the pandemic (Fancourt et al., 2021).
Previous research has found that people with VI often report poor sleep quality and greater sleep-related complaints than those without a VI (Tabandeh et al., 1998;Zizi et al., 2002;Tamura et al., 2016;Peltzer and Phaswana-Mafuya, 2017). In the current study, global sleep quality was consistently poorer in individuals with VI than those with no disability; however, Frontiers in Psychology | www.frontiersin.org the difference between the two groups was no longer statistically significant at T2. Reflecting existing evidence (Tabandeh et al., 1998;Peltzer and Phaswana-Mafuya, 2017), individuals with VI also reported shorter sleep duration, increased sleep latency, more disturbed sleep, and increased use of sleep medication compared to individuals with no disability during the early stages of the pandemic. By T2 (March 2021), only sleep disturbance and use of sleep medication remained significantly poorer in those with VI. Furthermore, except for daytime dysfunction, there was no significant deterioration in overall sleep quality nor in any of the PSQI components for those with VI and those reporting any type of disability. This contrasts with the significant deterioration in sleep quality identified in participants without disabilities and suggests that the pandemic may have had a greater impact on the sleep of individuals with no disabilities. One possible reason for this may be that self-isolation and experiences of loneliness are not necessarily new for people living with disabilities, which impact mobility and social contact (Brunes et al., 2019). The majority of people with disabilities commonly have comorbid disabilities and health conditions (Havercamp et al., 2004;Barnett et al., 2012;Court et al., 2014), which may have resulted in greater health concerns prior to the pandemic. Thus, the impacts of worries relating to health, self-isolation, and/or limited social contact on sleep may have been greater amongst those for whom these concerns were novel.
Secondly, given evidence of the impact of VI on sleep before the pandemic (Tabandeh et al., 1998;Tamura et al., 2016;Peltzer and Phaswana-Mafuya, 2017), the negative impacts of the pandemic on sleep may not be as apparent among this group compared to those without a disability, whose sleep may have been comparatively better prior to the pandemic. Indeed, around two-thirds of participants with VI in this study were categorized as poor sleepers at both timepoints. This is comparable to the proportion reported elsewhere for visually impaired people with no light perception (65.6%) but higher than that reported for those with light perception (45.8%; Tabandeh et al., 1998). In contrast, around 56% of participants with no disabilities were categorized as poor sleepers in the current study, a figure substantially higher than the 9.1% reported for controls without VI by Tabandeh et al. (1998). Baseline figures for sleep quality, social contact, and experiences of self-isolation prior to the pandemic were not available in the current study, and therefore, the reasons behind the different sleep experiences of individuals with and without disabilities can only be postulated.
Contrary to previous research, the current study did not find an association between self-isolation and sleep (Pérez-Carbonell et al., 2020). It is possible that feelings of loneliness experienced as a result of self-isolation, rather than selfisolation itself, impact sleep, although loneliness predicted sleep quality only at T1. Levels of loneliness were significantly higher in participants with disabilities and VI than those with no disabilities at both timepoints, and although not statistically significant in any of the three groups, bigger increases in loneliness were observed in participants with disabilities and VI (Heinze et al., 2021). Further research is required to confirm the impact of loneliness on sleep. A ceiling effect may be one possible explanation, with the impact of loneliness on sleep reducing as feelings of loneliness become increasingly normalized by the individual. Once again, this may reflect a greater impact of restrictions on social contact in people without disabilities, for whom loneliness may have been a novel experience at T1. In addition, being younger predicted sleep quality at T2 but not at T1. This contradicts previous findings which associated older age with poorer sleep quality (Gadie et al., 2017). In contrast, state anxiety was a significant predictor of sleep quality across both timepoints and accounted for a large proportion of the variance in sleep quality. This supports existing evidence, which points to the negative impact of anxiety on sleep (Altena et al., 2020;Xiao et al., 2020;Evans et al., 2021;Robillard et al., 2021;Villadsen et al., 2021). In this sample, state anxiety was consistently higher in participants with disabilities and VI, although statistically significant differences between those with and without disabilities were found at T2 only (Heinze et al., manuscript submitted for publication). Given associations between disability and anxiety (Sareen et al., 2006;Brenes et al., 2008;Kempen et al., 2012), these findings have important implications for the design of interventions targeted at improving sleep quality for individuals with disabilities beyond the pandemic. State anxiety may be an essential factor to consider in any such intervention.
Limitations and Future Directions
The current study highlights a number of important findings relating to sleep quality in people living with disabilities such as VI during the COVID-19 pandemic. However, some limitations must be acknowledged. Firstly, due to convenience sampling, and recruitment of participants through professional and personal networks within the sight loss sector, extrapolation of findings to the general population cannot be made. Additionally, the sample size and number of valid scores for T2 were considerably smaller than for T1. Thus, longitudinal comparisons relied on a smaller subsample than was available at both timepoints. Secondly, findings relating to sleep quality in people with VI should be interpreted with caution. Due to small sample sizes in T2, it was not possible to control for comorbid disabilities, which may have impacted on sleep. Future research is needed to assess both sleep quality in people living with VI only, and the relationship between disabilities other than VI and sleep quality. Exploration of the potential differences in sleep between those who report one, and those who report multiple, comorbid disabilities, would also be valuable. Participants were recruited from the membership of Blind Veterans UK, a charity which provides its members with access to support relating to sleep and health, including targeted sleep hygiene interventions. The survey was also promoted through other sight loss organizations, which may have increased sleep education and sleep quality among respondents. Future research should collect information on the support that participants have accessed relating to sleep and consider how this support may mediate sleep experiences during, and following, the easing of COVID-19 restrictions.
Next, while the majority of participants resided in the United Kingdom at both timepoints, responses were received from as many as 22 different countries at T1. Measures implemented to tackle the pandemic, and public information campaigns, may have differed substantially between these countries. Due to small sample sizes, it was not possible to provide geographical comparisons of sleep experiences, but research in this area may provide useful insights into the impacts of national policy on this aspect of public health and help to inform best practice and future policy.
Finally, the current study reports on findings relating to two surveys undertaken during the COVID-19 pandemic, a period characterized by its impact on everyday life, work, and social experiences. The period following restriction easement may offer a similarly novel range of experiences and challenges, which may impact on aspects of health and well-being, including sleep. Future research is needed to explore individuals' sleep experiences during this transition period and beyond, to establish the long-term health implications of the pandemic, particularly among individuals living with VI and/or other disabilities.
CONCLUSION
The current paper provides a preliminary assessment of sleep quality in people with disability during the COVID-19 pandemic, with a focus on those living with VI. It offers insight into the factors, which may have played a role in sleep quality during the COVID-19 pandemic, including not only disability and VI, but also other health and social factors. While sleep was consistently poorer for individuals with disabilities, including those with VI, the pandemic appeared to have a greater impact on individuals with no disabilities, who experienced a significant deterioration in their sleep over time. State anxiety and, to a lesser degree, disability were consistent predictors of sleep quality at both timepoints, and interventions designed to alleviate sleep difficulties should seek to address the role of state anxiety in sleep quality.
DATA AVAILABILITY STATEMENT
The datasets presented in this article are not readily available because participants were not asked if they consented to their data being shared outside the research team involved in this study as part of the consent process. Requests to access the datasets should be directed to renata.gomes@bravovictor.org.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
NH designed and performed the analysis and wrote the paper. SH wrote and edited the paper. CC designed the survey, wrote the paper, and edited the paper.
LG-M consulted on data analysis and edited the paper. TK designed the survey and produced graphics for the paper. SF advised on survey design and data analysis and reviewed the paper. CE advised on survey design and reviewed the paper. RG designed the survey and edited the paper. All authors contributed to the article and approved the submitted version.
FUNDING
This study was funded by Blind Veterans UK.
ACKNOWLEDGMENTS
This work was carried out with a contribution of time from Circadian Therapeutics, University of Oxford. | 2021-12-23T14:29:32.525Z | 2021-12-23T00:00:00.000 | {
"year": 2021,
"sha1": "aac9a06939170f2dc059e8a7eb9afc8f9fdf7ba7",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "aac9a06939170f2dc059e8a7eb9afc8f9fdf7ba7",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
54875175 | pes2o/s2orc | v3-fos-license | Syddansk Universitet Societal costs of diabetes mellitus 2025 and 2040 forecasts based on real world cost evidence and observed epidemiological trends in Denmark
Aim: The objective is to contribute with real world evidenced economic forecasts of diabetes attributable costs in 2025 and 2040 differentiated according to patients’ morbidity status which is a novel approach within forecasting. Methods: Method of forecasting is based on an annual calendar year prediction of diabetes attributable costs by using the BOX-model, an established and tested epidemiological transition-state model. The study population includes all Danish diabetes patients presented in 2011 (N= 318,729) according to the Danish National Diabetes Register. Forecasting is based on individual patient data from 2000 to 2011 for incidence, mortality, patterns of morbidity and complication rates combined with demographic population projections from Statistics Denmark. The 2011 estimation of diabetes attributable costs were applied to the epidemiological framework. Forecasting was performed for three different epidemiological scenarios.Results: Our three epidemiological scenarios indicate that within the shorter time span increases in the prevalent population are difficult to change primarily due to the already achieved historic improvements in diabetes mortality and morbidity. These will approximately double societal costs of diabetes in the next 10 years,assuming current trends in morbidity and mortality are maintained.The resulting diabetes population will incur three times current costs in 2040.A 20% reCorresponding author.
Introduction
Chronic diseases are one of this century's greatest threats towards public health with almost epidemic prevalence increases globally and expectations of significant increases in the future [1]. Diabetes Mellitus is, with around 350 million people globally suffering from this disease [2] [3], one of the most burdensome chronic diseases associated with major disability, reduced quality of life and shortened length of life [2] [4].
Various factors are expected to cause future increase in the prevalence of diabetes: demographic changes [5], sedentary life styles and obesity [6]- [8], improved survival [9] [10] epidemiology [11], screening efforts [12] and new morbidity patterns implying that diabetes is increasingly seen in younger ages [13] [14]. Management of the increasing diabetes population implies, among others, an economic challenge, which societies must face, as diabetes patients require increased health care, pharmaceuticals and nursing services for their remaining lifetime [4] [15] [16]. Long term models can identify where a society may be heading, providing policy makers with a foundation on which decisions concerning future strategic prioritization can be grounded [17].
Forecasts of the burden of diabetes exist in great numbers in the literature, see for example, King et al. 1998 [18], Bagust et al. 2002 [19], Huang et al. 2009 [20], Mainous et al. 2007 [21] or Tunceli et al. 2009 [22]. Our forecasting model (the BOX-model) is an established and tested epidemiological disease model, which has proven its global applicability for different diseases with largely accurate predictions showing only nonessential deviations [9] [23] [24]. The BOX-model is simple and intuitive, based on epidemiological drivers observed over more than a decade and economic cost estimates for 2011 calculated on the individual level from national registers.
Based on a comprehensive epidemiological framework, this study forecasts diabetes attributable costs in Denmark for the period 2012-2040 according to sectors and patient's morbidity status. Denmark has optimal conditions due to data availability, coverage of the diabetes population and richness of information in national registers [25]. In addition, Denmark is a typical European country in terms of treatment availability and population structure. The study was part of a large-scale register based on observational investigation, the Diabetes Impact Study 2013 [26], which investigated epidemiological, health economic and socioeconomic aspects of diabetes in Denmark [11] [16] [27].
Method
Estimating the size of future costs attributable to diabetes, the epidemiological dynamics underlying the prevalence of diabetes must be taken into account. Each year, new patients are diagnosed, patients develop complications and yet other patients will die. These dynamic structures are appreciated in the forecasts through the underlying epidemiological framework, presented in the BOX-model.
The BOX-Model
To model the future prevalent diabetes population, this study uses a simple multi-state transition model, the BOX-model, a flexible epidemiological framework, based on individual data from the entire Danish diabetes population. The BOX-model, (Figure 1) has been validated [9] and thoroughly described elsewhere [11].
In the BOX-model, an individual is either non-diabetic (population at risk) or belongs to one of the diabetic complication groups: CG0, no complications; CG1, minor complications or CG2, major complications. ICD-codes defined for each complication group is given in the supplementary material (A). Health states in the model are mutually exclusive and collectively exhaustive meaning that each patient can only be in one state in a cycle and must be in a state in each cycle. Cycles are measured in calendar years [28]. Irreversibility is assumed and,therefore, patients can only move forward in the model. Influx (new incident cases) and outflux (mortality) as well as influx to each of the complication groups were accounted for on an annual basis. Forecasting is based on patient groups defined by gender and age at diagnosis in 25 year age intervals.
Study Population
The study population was based on the entire diabetes population in Denmark in 2011, adjusted according to shortcomings in the Danish National Diabetes Register, specified elsewhere [29], N = 318,729. Person years (PYRS), defined as 365 person days, N=297,378 in 2011 were applied. The study population was compared to the Danish diabetes-free population (N= 5,261,714) and to a matched (gender, age and municipality of residence) control population from the diabetes-free population (N= 1,462,872).
Data Sources for Epidemiological Forecasting
The epidemiological forecasting was based on observed individual patient level data on the entire Danish diabetes population from 1997 through 2011 through Danish national registers [11]. Transition probabilities between states were extrapolated from the observed data resulting in a prevalence (PYRS) in each health state in every calendar year. This means that the exact number of projected PYRS in 2011 deviates from the observed number, however the deviation is <1%. To facilitate comparison with earlier studies [11] [16] [27], we state the observed numbers from 2011. PYRS were stratified by gender and age at diagnosis in 25 year age intervals. The diabetes-free population, and hence population time at risk of developing diabetes, for each calendar year until 2040 was calculated from demographic population projections from Statistics Denmark based on recent trends for vital demographic events: birth rate, death rate, immigration, emigration and naturalization, converging towards a long time perspective level based on annual forecasts [29]. This epidemiological forecasting make up the framework on which 2011 cost estimates are added.
Data Sources for Economic Forecasting
Age at diagnosis, gender and complication status, among other characteristics, influence on patients costs [16]. Estimates for diabetes attributable costs according to these characteristics were calculated and applied to the epidemiological model.Diabetes attributable costs for 2011 were calculated as the difference between total costs of a person with diabetes and the expected total costs given the annual resource consumption of the control population stratified according to gender and five-year age intervals. The included cost components are listed in Table 1 along with measurement of cost components and described in more detail elsewhere [16].
Continued
Productivity loss based on data from SD Lost productivity attributable to diabetes were accounted for through an estimation of 1) Annual mean gross income difference from expected income given educational level, gender and age; 2) Premature mortality; 3) Abseentism. 1) Sum of absolute difference in annual gross income between PwD and controls aggregated for patients older than 14 years and younger than 69 in strata by gender, age in 5 year intervals and four educational levels (1: < 11 years of education; 2: < 16years; 3: < 18 years and 4: 18+ years) 2) Sum of annual foregone income due to lost years of productivity in cases of premature death for: 2a) 2011 and 2b) productivity loss in 2011 due to deaths attributable to diabetes occurred prior to 2011. Since data is not available on deaths attributable to diabetes for the past 45 years, we used attributable deaths in 2011 and the production loss that will incur in the future until the age of 69 to mirror the foregone production well knowing that this method builds on the simplified assumption that diabetes mortality and labour market patterns the past decades have not changed. For persons between the age of 15 and 69, number of relative deaths by gender and 10-year age group was compared between PwD and controls and the difference is assumed to represent deaths attributable to diabetes. Number of attributable deaths in each strata was multiplied with the average wage of a diabetes-free person in that strata. 2a) Strata were aggregated and the sum was divided by 2 assuming deaths are equally spread over the entire calendar year. 2b) Mid age in each age-interval was used as proxy and then number of years until the age of 69 was calculated and multiplied with number of attributable deaths in the given strata again multiplied with the average annual gross income among a diabetes-free person in that strata.
3) Number of days of absence due to diabetes is calculated based on literature estimates of 3 extra days a year. Daily wage is calculated as the mean annual income among PwD divided by 200 working days.
SMBG costs (meters and sticks) and insulin pumps
Cost of SMBG (for the 22% of PwD using insulin) was estimated on the basis of a study of SMBG costs in Canada to annually 860 US$ equivalent to 6175 DKK (2011 prices) [32]. According to the Danish Ministry of Health and the Danish Diabetes Association [33] pumps were used by approximately 2100 PwD and the annual cost ranged from 22,000 to 39,000 DKK in 2010. For 2011,we have applied a conservative cost estimate of 22000 DKK for80% of all T1 children (0-14 year) and for 5% of the rest of the T1 population in total amounting to 2450 PwD. Censors are not included in this cost and would approx. double the annual cost of pumps.
Appliances (blind assistance, protese crus, femur, wheel chairs, sticks) Unit costs of blind assistance was calculated on the basis of the MTV report [34]and includesassistance outside home, sticks and guide dogs, IT solutions for blind parents, blind library appliances and amounted to 99,137 DKK per year (2011 prices). The cost cover needs for the 1.1% -1.6% (amounting to approx. 3372 persons) of the diabetes population that is considered socially blind [34]. The cost of a crus and femur prosthesis was estimated to be respectively 17,000 and 44,000 DKK per year. In 2011, 1348 (crus) and 768 (femur) persons with diabetes lived with an amputation and respectively 75% and 50% of those were assumed to have a prosthesis. The rest of the amputated persons are assumed to use wheel chairs. Given that cost estimates for the year 2011 were originally calculated according to age in five year intervals,these estimates were recalculated to age at diagnosis in 25 year age groups. Due to data limitations, this recalculation was not possible for nursing services and additional cost components. Therefore, we applied the same cost structure between age and age at diagnosis for these two cost components as found for health care costs. Furthermore, we maintained total attributable cost estimates calculated on age groups and applied the estimated cost structure between age and age at diagnosis across strata based on these totals. Cost calculations of productivity loss due to premature mortality were calculated based on assumptions concerning the mortality rate. Hence, the model considers the annual assumed mortality rate and adjusts productivity loss due to premature mortality correspondingly. Calculation of depreciation of capital was based on the size of the secondary health care sector cost component. All costs were calculated in fixed 2011 values.
Scenarios
Comparison between three contrasting scenarios was deployed. Each scenario was related to the same base year (2011) and outlines a situation specified according to observed epidemiological trends in incidence, mortality and complication progression from 1997-2011. The three scenarios represent: 1) continuation of observed epidemiological trends under the assumption that these trends will continue as historically observed (core); 2) continuation of the observed trends regarding mortality and complication rates but a constant rate of incidence as observed in 2011, reflecting the assumption that incidence will stabilize and discontinue the increase (intermediate); 3) all epidemiological drivers are kept constant on the level observed in 2011 to reflect no further improvements in mortality and morbidity among diabetes patients and no further incidence increase (constant). Scenarios are presented in Table 2.
For each scenario, the BOX-model calculates a distribution of PYRS. By adding estimates of diabetes attributable costs specific for gender and age at diagnosis, total diabetes attributable sector costs for every calendar year are arrived at.
Economic Potentials
The cost forecasts mirror the observed cost structure and level in 2011, though it is obvious that the future will not hold the same investments and treatment/cost structures as in 2011. A prerequisite for the proposed epidemiological scenarios is, therefore, to capture some structural changes and potential relevant investment cases. On the one hand, the continuation of treatment improvements as assumed in scenarios (core and intermediate) cannotbe expected without some future investments in pharmaceuticals and health care. On the other hand, the cost levels in health care, nursing and pharmaceuticals will ultimately be decided, by what is politically possible in the years to come. Hence, the challenge is to quantify implications hereof for the cost forecasts. To accommodate this in our model, we suggested a number of hypotheses representing, on one hand, potentials for freeing of resources if certain efficiency improvements are realized or of a given political or administrative initiative and, on the other hand, budget limitation or economic potentials of a given investment. Based on the Core scenario each of the hypotheses was estimated under the assumption of everything else held constant. Hypotheses, rationale and corresponding model adjustments are described in Table 3 and Table 4. Table 4. Description of hypotheses, rationale and model adjustment method: potential for freeing of resources of a given initiative.
Rationale
Model adjustment method Potential for freeing of resourcesof a given political oradministrative initiative H4: Efficiency improvements in a) the health care sector b) nursing services Annual productivity gains are achieved in the Danish health care sector/nursing sector. In the period 2003-2011, the annual productivity gains in the Danish health care sector has been 2.3% and in 2011 alone 5.3% [40].
Annual 1% reduction ofcostper PYRS for H4a-primary andsecondary care H4b-all nursingcomponents H5: Reduced usage of nursing services Patients with diabetes live longer and better with their diabetesand in comparison with 2006 estimates, the 2011 cost structureimplies relatively less costs fornursing [16]. This can also be aconsequence ofstructural changes in the Danish nursing sectorwith reduced services. If these developments are continued, theneed for/usage of nursing services per PYRS can be expectedto further decrease over time. Improved regulation of diabetes patients who have not yet developed complications from their disease. If CG0 patients were able to contribute more equal to thediabetesfree population on the labour market, productivitylosses due to lower income and excess absence could bedecreased. Annual 2.5% reduction perPYRS in productivity lossdue to difference in annualincome and excess absence for patients in CG0
Results
All cost estimates are presented in 2011 EUR based on a conversion rate from DKK to EUR of 7.4647 DKK.
Total Attributable Costs of Diabetes 2011-2040-The Three Scenarios
We have previously estimated total attributable costs of diabetes to the Danish society in 2011 to be at least 4.27 billion EUR, corresponding to 14,349 EUR per PYRS [16]. Forecasting estimates of total diabetes attributable costs and costs per PYRS for each cost component in the three epidemiological scenarios are presented for the years 2025 and 2040 in Table 5. More detailed specification of distribution of costs according to sectors and complication groups together with epidemiological indicators are given in supplementary material (B). In Figure 2and (15,835 EUR). This reflects that the core scenario assumes continued improvements in treatment results and hereby a less morbid, however, larger diabetes population where the constant scenario results in a smaller and more disease burdened diabetes population due to higher mortality and morbidity. Intermediate scenario was placed in between the two in respect to prevalence with 862,623 patients, but with the lowest total costs (9.98 billion EUR and more or less the same cost per PYRS as Core 11,568 EUR). Cost per PYRS decrease with time in both the core and the intermediate scenario as a result of the larger however less morbid diabetes population whereas an increase is seen in the constant scenario. The estimated total cost in 2025 are quite similar in the three scenarios ranging from 7.1 over 7.6 to 8.0 billion EUR varying hereby with less than 12% from the lowest to highest estimate reflecting the inertia of the future development in the diabetes population due tohistoric
Cost Distribution According to Sectors
Looking at costs in the health care sector, these are projected to be between 1.8 and 2.5 billion EUR in 2040 (1.3 and 1.4 billion EUR in 2025). This is 1.8-2 times (2025) and 2.5-3.4 times (2040) the current level in 2011. The same patterns are projected for pharmaceutical consumption and nursing services resulting in a demand for pharmaceuticals in 2040 of between 360 and 530 million EUR and a demand for nursing services in 2040 of between 2.7 and 3.4 billion EUR.
Cost Distribution According to Complication Groups
Cost distributions within the three complication groups in 2011 and in 2040 across the three epidemiological scenarios are depicted in Figure 4.
The relative distribution of costs between complication groups were more or less similar in the core and the intermediate scenario, whereas a greater proportion of costs were spent among patients in CG2 in the constant scenario (63% compared to relatively 49% and 50%). This was mainly due to a greater volume of patients in CG2 in the constant scenario but also due to a steeper cost gradient from CG0 to CG2 in this scenario of 5.4 times higher cost in CG2 than CG0 compared to 4.6 and 4.8 times in the core and the intermediate scenario, respectively. In 2011 the 25% of patients with major complications consumed 58% of the total resource use consumed by diabetes patients. The part of resource use consumed by patients with major complications decreases in 2040 in both the core and the intermediate scenario to app. 50%, whereas it increases in the constant to the mentioned 63%. The share of resources consumed by patients with no complications will respectively be 30% and 18% in the core and the constant scenario where CG0 will make up 60%, compared to 49% of patients.
Economic Potentials
To illustrate the understanding that future epidemiological development in the diabetes population would require some form of investment compared to the level of costs in 2011, we created some cases showing, on one hand, the level of economic resources that future investments require and, on the other hand, how economic space arefreed if certain efficiency improvements are realized. Important conclusions from these analyses were: If investments in primary care were set to increase with 5% annually (H1), investment in new pharmaceuticals with 2.5% annually (H2), and investment in secondary prevention with 2.5% (SMBG and Patient education) (H3a+H3b), the costs incurred by investments in 2025 will be in the range of 250 million EUR in 2025. In 2040 thecost (H1, H2, H3a+b) incurred will be 1.1 billion. If patients' own time (H3c) is included, 300 million EUR should be added for 2025 and 1.3 billion EUR for 2040.
Costs 2040 Constant Scenario
If productivity increases by 1% per year in primary and secondary health care and nursing sectors0.4 and 1.3 billion EUR would be freed in 2025 and 2040 respectively (H4a and H4b). If usage of secondary care services are reduced by 2.5% annually this will free resources in the range of 400 million EUR in 2025 and 1.2 billion EUR in 2040 (H5). For nursing services the corresponding numbers are 500 million and 1.4 billion EUR in 2025 and 2040 (H6). Reduced productivity loss (H7) among patients in CG0 of 2.5% annually will free resources in the range of 280 million EUR in 2025, which is more than the sum of the suggested investments in primary care, pharmaceuticals and secondary prevention, when patients' own time are not taken into account. Results of each of the hypotheses are given in thesupplementary material (C+D).
Discussion
The point of departure for the forecasted scenarios are all centrally available data in Danish national health registers for all Danish diabetes patients in 2011 providing comprehensive estimates of real world evidenced costs attributable to diabetes forecasted according to 14 years of epidemiological data and a categorization of diabetes patients in three complication groups. This is a novel approach enabling an intuitive understanding of forecasting results as indicators of where diabetes, in a public health perspective, is heading. This study attempts to forecast trends in the future diabetes patient population and, hence, expected costs given the current resource consumption and productivity loss among diabetes patients. Model input are of highest possible quality, distinguishing the BOX-model from majority of international models based on data from population surveys, and the model has been validated showing only nonessential deviations [9] [23] [24]. Our analysis is distinct using societal attributable costs to diabetes including both resource consumption and productivity loss. Furthermore, we take into account the dynamics of diabetes and the expected natural history of disease in relation to development of late complications.
The BOX-model is general and intuitive aiming to guide decision makers as to where this disease is heading more than making accurate future projections. Trends from the forecasted scenarios may probably be generalized across countries. They indicate that increasing prevalence of diabetes and, hence, costs of diabetes are difficult to change within the shorter time span and will approximately double the next 10 years primarily due to the already achieved historic improvements in diabetes mortality and morbidity. Hereafter, the span is wider depending on the epidemiological trends occurring, however, it is realistic to assume a 2.5 or tripling of the patient population and, hence costs in 2040. Such estimates correspond well with international projections [20]. On the cost side, the predictions concerning health care, pharmaceuticals and nursing services are conditional on current rates of utilization and supply, which of course will change over time. From a societal perspective, the constant scenario can be viewed as a minimum cost under the assumption that 2011 cost structures and supplies are continued. This means that incidence rates are stable and no further progress in the health of diabetes patients in relation to morbidity or mortality occurs. This is probably unrealistic expectations, however,it sets the frame for comparison with the core scenario where the difference in costs (2.8 billion EUR in 2040) reflects the amount of extra resources necessary, if prevalence increases continues as observed until 2011. The intersmediate scenario compared to the core reflects the general public health expectation that primary prevention will result in stable or decreased incidence rates compared to historic trends. If this succeeds, a 25% reduction in costs can be expected in 2040 compared to the costs in the core scenario.
We believe that our estimations present intuitive understandable perspectives valuable for decision makers, for instance, for the health care system to be ready to meet this chronic disease challenge of a doubling in resource demand already in 2025 under current structures. With estimation of economic potentials to the core scenario, we aim to highlight how the scenarios can guide cost effectiveness discussions. For instance, interventions aiming to shift treatment of diabetes patients from secondary care to primary care can be compared to the threshold of around 500 million EUR in 2025 freed, if a goal of an annual 2.5% decrease is reached. In comparison, a 5% increased investment in primary care will cost an amount in the range of 45 million EUR in 2025. We do not argue for a given causal effect of a specific intervention, but merely point out the economic potentials if suggested goals were reached or specific investments were made.
Another important conclusion is that prevalence is a poor measure of disease control when it comes to chronic diseases. Lower cost per patient year might be more desirable than lower prevalence as this means that each patient is living better with his or her disease contributing to a larger prevalent population. Categorization of patients according to their complication status in three groups is a novel approach, which allows a more general view on the disease, which is easy to interpret and communicate. We have previously shown how health care costs and nursing costs increased markedly when patients with diabetes develop minor or major complications. Hence, there is great cost saving potential in preventing development of complications among patients with diabetes. This is reflected in the intermediate scenario, where focus is placed on efforts to sustain historic improvement in epidemiological indicators into the future, but incidence rates are assumed constant. We further project a shift in resource consumption from patients with major complications to patients without complications due to the volume of patients living with diabetes without complications in the future.
It must be stressed that the basic patient population in the scenarios has obtained its size and age composition as a consequence of access to diabetes treatment and care during decades prior to year 2001. Therefore, a comparison of PYRS experienced under competing scenarios reflects the cumulative effect of access to treatment over previous decades. In prolonging of this, it is important to bear in mind that costs are an expression of supply and demand meaning that patients' demand will only increase to the extent that the supply is available. In the model, discrete time intervals of one calendar year are used and not continuous time reflecting our wish for a simple and intuitive modeling approach. Age at diagnosis, and not running age, was used to reflect that the model follows a patient with diabetes from diagnose until death concerning age and gender specific costs and morbidity and mortality drivers. Forecasting 25 years ahead in time it is obvious that changes over time, in health care queues, waiting lists and treatment offers cannot be accommodated for in the model, as these are unknown. It is inevitable that modelers will make different choices and apply different assumptions. The included hypotheses can throw light on consequences of different assumptions, however, the model will never be a perfect representation of the real world [42].
Conclusion
Our projections indicate that within the shorter time span increases in the prevalent population, and therefore the associated cost, are difficult to change primarily due to the already achieved historic improvements in diabetes mortality and morbidity. These will approximately double societal costs of diabetes the next 10 years,assuming current trends in morbidity and mortality are maintained.The resulting diabetes population will incur three times the current costs in 2040, although the costs per PYRS are falling during the whole period.A 20% reduction in cost per PYRS shows how the distribution of patients with complications are expected to change over time with patients living better and, hence, on average become less resource demanding with their disease. Prevalence is, therefore, a poor measure of disease control in a public health perspective. With marked increases in diabetes prevalence, not only resource demand for health care, nursing and pharmaceuticals will increase but also societal productivity loss due to the increasing number of patients in the working age.Despite wide uncertainty around projections of the future, they enable us to appreciate better the implications for societies of currently observed epidemiological trends. Hereby, projections provide a basis for discussing future resource demand and consequently the necessary investments and structural changes.
Funding Sources
This study has been conducted by ApEHR in cooperation with the Danish Diabetes Association and supported by a PhD program from COHERE, funded by The Danish Centre for Strategic Research in Type 2 Diabetes, DD2. A consortium of sponsors from the pharmaceutical industry comprising Astra Zeneca/BMS, Novo Nordisk, Merck, Sanofi Aventis and Bayer has provided an unrestricted grant to ApEHR for the conduct of this research.We thank professor Kristian Bolin for useful commenting.
Conflicts of Interest
Neither the Danish Diabetes Association nor the consortium of sponsors from the pharmaceutical industry has had any influence on the conduct of the study. Gangrena of lower limb 2 0
Supplementary Materials
Intracerebral haemorrhage 2 0 Amputation above ankle level 2 0 a Value indicates classification state (0, 1 or 2, respectively). b Values 1 and 0 indicate that item is specific for diabetes and unspecific for diabetes, respectively. | 2018-12-15T09:33:37.792Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "0ca85e8853ecedc787eaab0f0fb4bb30b99ba6f3",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=60846",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "e874de33574b6dcf4dc5dce1126bb8d7b2d2af0e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Economics"
]
} |
119284259 | pes2o/s2orc | v3-fos-license | The 30-band k.p modeling of electron and hole states in silicon quantum wells
We modeled the electron and hole states in Si/SiO2 quantum wells within a basis of standing waves using the 30-band k.p theory. The hard-wall confinement potential is assumed, and the influence of the peculiar band structure of bulk silicon on the quantum-well sub-bands is explored. Numerous spurious solutions in the conduction-band and valence-band energy spectra are found and are identified to be of two types: (1) spurious states which have large contributions of the bulk solutions with large wave-vectors (the high-k spurious solutions) and (2) states which originate mainly from the spurious valley outside the Brillouin zone (the extra-valley spurious solutions). An algorithm to remove all those nonphysical solutions from the electron and hole energy spectra is proposed. Furthermore, slow and oscillatory convergence of the hole energy levels with the number of basis functions is found and is explained by the peculiar band mixing and the confinement in the considered quantum well. We discovered that assuming the hard-wall potential leads to numerical instability of the hole states computation. Nonetheless, allowing the envelope functions to exponentially decay in a barrier of finite height is found to improve the accuracy of the computed hole states.
I. INTRODUCTION
Silicon remains the technologically most important semiconductor, and is used to build numerous electronic 1,2 and photonic devices. [3][4][5][6] For almost four decades, the scaling down of the dimensions of these devices has been driven into the current nanoscale by Moore's law. Therefore, modeling transport and optical properties of silicon nanodevices should take into account quantum-confinement effects. In silicon quantum wells, the valence-band states can be modeled by the sixband k·p theory, 7 whereas, the conduction-band states require application of more elaborate ab initio [8][9][10][11][12] or tight-binding [13][14][15][16][17] methods because of the indirect band gap nature of silicon. However, the latter calculations become increasingly complex for larger structures, leading to slow performance and the requirement for large computer memory. These conditions are alleviated by the k·p theory, which can successfully describe states close to the band extrema. Yet, those k·p Hamiltonians usually are for states close to the Γ point of the Brillouin zone, [18][19][20] and are, therefore, suitable for nanostructures made of direct band-gap materials.
The approach, which has recently been pursued as a successful alternative to atomistic calculations, is the 30-band k·p model. [21][22][23][24] This model was demonstrated to accurately describe states in the whole Brillouin zone and was applied to silicon nanostructures 25,26 and GaAs/AlGaAs superlattices. 27 Nevertheless, because of the large number of energy bands which are taken into account, the 30-band calculations are far from being trivial. Also, this Hamiltonian is not invariant with respect to translational symmetry of the crystal, which is a general drawback of the k·p theory. Moreover, the results of k·p models can exhibit spurious solutions, [28][29][30][31] which arise from the incorrectly determined bulk states and, therefore, represent a considerable hurdle for calculations of the electron and hole states in a nanostructure. As a matter of fact, some of the envelope functions of the spurious solutions are highly oscillatory. A way to avoid them is to cut off contributions of the bulk states with a large wave vector. 30 The use of a basis consisting of plane waves is a natural choice for such a method, as recent calculations have demonstrated for InGaAs/InP superlattices. 30 Also, spurious solutions might also arise due to the lack of ellipticity of the multiband Hamiltonian. 32,33 . Unfortunately, no general method for removing spurious solutions from the k·p calculation has been proposed to date. Furthermore, almost all the proposed methods for the removal of spurious solutions were tested on the eight-band k·p Hamiltonian and for direct-band gap semiconductors.
In this paper, we study the electronic structure of silicon quantum wells by the 30-band k·p model. 22 We consider Si/SiO 2 quantum wells grown along the [001] direction. Because the conduction-and valence-band offsets are quite large, the electrons and holes are mainly confined in the silicon layer. Therefore, for convenience, an infinite potential-well confinement (hard-wall potential) was assumed, and the basis of standing waves was used. The conduction-band states of this quantum well have recently been considered by the approximate effective two-band model. 6 The values of the parameters were taken from Ref. 22, where the dispersion relations of the bulk bands in the whole first Brillouin zone (FBZ) were fitted to the results of ab initio calculations. However, the symmetry between the FBZ and the second Brillouin zone (SBZ) was not established in this fitting procedure. Therefore, spurious solutions are found in the energy spectrum. We explore their origin and, moreover, formulate a procedure which removes them from the energy spectrum of the analyzed quantum well. Also, the stability of some of the solutions with respect to variation in the order of the basis is discussed. Moreover, to explore, in more detail, how the specific boundary conditions affect the solutions, we supplement the analysis for the case of the finite band offset between Si and SiO 2 . In all our results, the top of the silicon bulk valence band is taken as the zero of energy.
II. THE BULK BAND STRUCTURE OF SILICON
Before presenting the results of our calculations for the modeled silicon quantum well, we briefly discuss the silicon bulk band structure, as computed by the 30-band k·p model. 22 The dispersion relations of a few bands with energies close to the band gap in the whole FBZ and SBZ along the [001] direction are displayed in Fig. 1. The highest-energy states in the valence band are localized close to the Γ point of the FBZ (the Γ valley), whereas, the conduction-band states have their energy minimum at k z = K 0 , which is close to the X point of the FBZ (the ∆ valley). The parameters of the model were fitted such that they reproduce the dispersion relations in the full FBZ well. 22 However, such a parametrization fails to produce the correct symmetry of the bands with respect to the FBZ boundary as demonstrated by the solid lines in Fig. 1. For k z beyond the X point the dispersion relations of all bands should be mirror symmetric to the dispersion relations left of the X point, 34,35 such as the ones shown by the dashed lines in Fig. 1. More specifically, a valley labeled by ∆ ′ in Fig. 1 should appear in the ground conduction band in the SBZ. However, this important detail is missing in the 30-band model. Rather, the energy of this band steeply increases with k z in the SBZ, and instead of the ∆ ′ valley, there exists a valley of an upper conduction band, labeled by Θ in Fig. 1, which is just 80 meV above the ∆ valley. Also, it is located close to the FBZ boundary, therefore it can have an important contribution to low-energy conduction band states in the quantum well. Note that the symmetries of the two bands differ: The ground conduction band mainly has the Γ 15 zone-center symmetry, whereas the upper conduction band has the combined Γ 25u + Γ 2l symmetry.
In addition to the lack of the symmetry of the conduction bands, the valence-band dispersion relations enter the band gap for large wave vectors, which is also shown in Fig. 1. Consequently, for a given energy E < 0, there exists an additional wave vector outside the FBZ. It was demonstrated that, for quantum wells based on direct band-gap semiconductors and using the eight-band k·p Hamiltonian, these high-k bulk states, which are degenerate with the low-k bulk states produce spurious states in semiconductor quantum wells. 30 As we will see, the incorrect dispersions of the energy bands shown in Fig. 1 will have severe effects on the numerical calculations of the quantum-well states.
III. THE QUANTUM WELL STATES
For the hard-wall confinement potential, the quantumwell states are obtained by solving the equation, Here, H 30 denotes the 30-band k·p Hamiltonian, which was introduced in Ref. 22, and Ξ is the 30-band envelopefunction spinor, χ j (z) denotes an envelope function of the zone-center periodic part of the Bloch function u j (r). 22 The full wave function of the electron in the quantum well reads In order to satisfy the Dirichlet boundary conditions, a basis of standing waves is chosen. 25,36 Furthermore, for the conduction band the range of wave vectors of the basis states is conveniently centered at k z = K 0 (see Fig. 1), 25 Here, N denotes the order of the basis and W is the well (simulation box) width. We note that the conduction band of silicon has two minima along the [001] direction, which occur at K 0 and −K 0 . This leads to a double degeneracy, which, along with spin, gives rise to four-fold degenerate states in the silicon quantum wells. A valleysplitting phenomenon breaks this degeneracy, 37 but this is a small effect due to both the inversion symmetry of the confining potential and the large separation between the equivalent ∆ valleys at the K 0 and −K 0 points. Therefore, it is discarded in our calculations.
The dominant component of the envelope-function spinor χ d = χ j is determined according to the criterion that it has the largest C j = χ j |χ j out of 30 envelope functions which are the solutions of Eq. (1).
IV. RESULTS AND DISCUSSION
A. The origin of spurious states The obtained spectra for quantum wells with thicknesses of W = 2 nm and W = 5 nm are shown in Figs. 2(a) and 2(b), respectively. To obtain these results, the order of the basis N in Eq. (4) is chosen such that the ground-state energy is converged up to an accuracy of 1 meV. We found N = 7 satisfies the convergence criteria. However, this leads to the presence of basis states outside the FBZ and inside the SBZ. For the conduction-band states (cbss) in the W = 2 nm wide quantum well, out of the seven basis functions, just a single basis state belongs to the FBZ. It is a cumbersome detail related to the small separation of the ∆ valley from the X point. Furthermore, the Γ 15 states around the ∆ point are expected to mostly contribute to the low-energy conduction-band states in the quantum well. However, the extra Θ valley is close in energy to the ∆ valley, hence some quantumwell states will be mainly Γ 25u + Γ 2l like. Not all states shown in Figs. 2(a) and 2(b) are physically relevant solutions, i.e., some spurious states are found in the energy spectrum. These states are denoted by dashed lines and are classified into two types as explained below.
Next we look at the localization of the electron in a few states of the W = 5 nm wide quantum well for k x = k y = 0 as shown in Fig. 3. In order to find both types of spurious solutions in the energy spectrum, we increased the basis size to N = 15. The probability densities of these states are displayed in the left panels On the other hand, the spurious solution with an energy of 453 meV has both a highly oscillatory probability density and the dominant envelope function as depicted in Figs. 3(b) and 3(e), respectively. The dominant envelope function is almost regularly periodic with a period of 0.7 nm, therefore, it is mainly composed of the bulk state with the wave vector k z = 2π/(0.7 nm) + K 0 ≈ 18 nm −1 . Such a high-k value is outside the FBZ where the 30-band model previously was demonstrated to fail. Therefore, such states are named high-k spurious solutions, and are abbreviated by hksss (denoted by the short dashed lines in Fig. 2). They are found in both the conduction and the valence bands.
Figures 3(c) and 3(f) display the state, whose energy is 1275 meV, which sets in between the cb and hkss states shown in Fig. 3. As a matter of fact, its probability density shown in Fig. 3(c) resembles the cb ground state shown in Figs. 3(a) and 3(d) and, therefore, could solely indicate that the state is a regular one. However, the dominant component of the envelope function spinor, shown in Fig. 3(f), is more oscillatory thanχ d displayed in Fig. 3(d). Yet, these oscillations are less regular and of the larger period than for the hkss state [compare Figs. 3(e) and 3(f)]. Nevertheless, they are composed of the wave vectors outside the FBZ [k z = 2π/(1.7 nm)+K 0 ≈ 13 nm −1 ], and are mainly contributed by the bulk states of the Θ valley. Therefore, such states are spurious but of another type, which are named evss. They are denoted by the long dashed lines in Fig. 2 and are found only in the conduction band.
In order to further illustrate the origin of the states displayed in Fig. 3, in Fig. 4, we show the corresponding distributions of the probability over the different components of the envelope-function spinor χ j |χ j . Figure 4(a) shows χ j |χ j 's for the electron ground state and demonstrates that this state mainly is composed of the Γ 15 zonecenter states. However, it has a large contribution from the Γ 1u band. 22 This result is consistent with the approximate two-band model proposed in Ref. 6. On the other hand, the main contribution to the hkss of Fig. 3(b) comes from the Γ 25l states as shown in Fig. 4(b), and the largest χ j |χ j in the evss displayed in Fig. 3(c) belongs to the Γ 25u and Γ 2l bands, as Fig. 4(c) shows. The latter two bands mainly form the Θ valley just outside the FBZ (see Fig. 1), which is an artifact in the SBZ, thus such states are classified as spurious.
B. The spurious solutions removal
We developed a scheme to automatically remove both types of spurious solutions. It is based on the following observations. In addition to contributions of different zone-center states to quantum-well states, which were illustrated in Fig. 4, the absolute value of the expansion coefficients |c (j) m | is found to be an important figure of merit for classifying quantum-well states as regular and spurious ones. We checked the distributions of χ j |χ j over j and |c (j) m | over both j and m and were able to formulate the set of empirical rules for extracting a few (three to five) low-energy spurious solutions from the conductionband spectrum of the quantum well. The regular states in the conduction band are found to mainly originate from the Γ 15 band. We label the regular conduction-band states by the counter n. Furthermore, χ Γ15 envelope functions were found to mostly be composed of the lowm basis states. For example, the electron ground state for the range from W = 2 nm to W = 20 nm is found to mainly be composed of the m = 1 basis function. Furthermore, the m values of the expansion coefficients with the largest magnitudes in the conduction-band states n and n + 1 are found to differ by not more than unity. Also, the quantum-well states, whose dominant envelope function χ d is due to bulk states different from Γ 15 , are found to be composed dominantly of the basis functions with wave vectors outside the FBZ. Therefore, they are spurious in origin, and may be of the hkss or evss type.
The proposed modus operandi is as follows. The calculation starts by choosing the value of the order of the computational basis N to achieve a reasonable energy accuracy, as previously explained. The Hamiltonian is then diagonalized, and the envelope functions with the largest χ j |χ j 's are selected for all the computed states. The index of the dominant envelope function is labeled by j max . Furthermore, for the determined j max , the largest expansion coefficient |c (jmax) m | is found and is labeled by m = m max . The m max value will be compared with m, which is the reference value of m max , and is set to unity when the procedure starts. The procedure for eliminating the spurious solutions from the spectrum of the conduction-band states reads: 1. Set the reference value of the maximal index of the 2. The composition of all the states from zero energy onward is determined; a state with the largest contribution of the Γ 15 band is selected for further consideration; 3. For the selected state, c (jmax) mmax is determined.
4. If m max >m the state is classified as spurious.
6. If m max <m, the state is classified as a regular state, and therefore, n = n + 1.
7. Go back to step 2 to proceed with checking the other states.
Note that no regular state is misclassified by this procedure. In other words, we found that the states which do not have the dominant Γ 15 component are dominated by standing waves with wave vectors outside the FBZ. However, the proposed procedure may be applied to only remove a few lowest-energy spurious states, which is three to five for W ranging from 2 to 5 nm. Mixing between the Γ 15 and the Γ 25u +Γ 2l zone-center states, which form the ∆ and Θ valleys, respectively, becomes larger when the electron energy increases, and the explained algorithm cannot be adopted. Furthermore, the proposed algorithm cannot be applied to thin quantum wells, and W = 2 nm was found to be a practical lower limit. For quantum wells thinner than approximately 2 nm, convergence of the electron energy to within 1 meV is not reachable.
Both the hksss and evsss are found to exist in the range of W from 2 to 5 nm for the chosen basis size. When W increases, the evss energies cross the energies of the regular states, as shown in Fig. 5. Similar to Fig. 3, the (k x , k y ) = (0, 0) states are shown in this figure. The number of the regular states, whose energies are lower than the lowest-energy evss increases with W . Therefore, when W tends to infinity, which is the bulk silicon case, all the regular states will be below all evsss. In the energy range displayed in Fig. 5, only two hksss are above the ground conduction-band states for W = 2 nm, and their energies sharply decrease with W , such that already for W = 2.1 nm these hksss enter the band gap where they can easily be recognized and removed from the energy spectrum.
As discussed, Fig. 2(a) shows the dispersion relations of the sub-bands which have energies close to the conduction-band bottom in the 2-nm-wide well. Because of band folding the minimum of the conduction band is at k x = k y = 0. Some spurious solutions evidently are found in the band gap, and all of them are of the high-k type and are, therefore, easily removed. On the other hand, a few evsss are found in the conduction band, whose dispersion relations appear to be similar to the dispersion relations of the regular states. It is because evss ′ s are formed out of the states of the Θ valley, which is similar to the real conduction band states which mainly arise from the states of the ∆ valley. In other words, the bulk states of both the real states and the evsss do not exhibit appreciable band mixing.
C. The hole states
Let us now consider the hole states. The presented procedure can also be adopted to remove the spurious solutions in the energy range of the valence band, except that the real valence-band states are found to mainly be composed of the Γ 25l zone-center states. But, in addition to the spurious solutions, the hole states in the silicon quantum well suffer from an instability in the calculation with respect to the basis order as Fig. 6 demonstrates. Notice that the hole ground-state energy level oscillates with the size of the basis. The amplitude of the oscillations can be as large as 100 meV, and its value decreases when the well width increases, as Figs. 6(a)-6(c) show for W = 2, 5, and 20 nm, respectively.
These zigzag-shaped convergence can be explained as follows. First, note that the diagonal elements of the 30band Hamiltonian are equal to the kinetic-energy term for a free electron. Therefore, without band mixing, the dispersion relations of all bands in silicon are concave. The curvature of the valence band alters sign through the band mixing. The analyzed silicon quantum well is symmetric, and for k x = k y = 0, the envelope functions are classified strictly with respect to inversion of the z coordinate as even or odd. The dominant component of the hole ground state is even, therefore, it is composed of the m = 1, 3, 5, . . . basis states. These basis states are dominantly coupled with the odd (m = 2, 4, 6, . . .) basis functions by the off-diagonal terms which are proportional to k z . It is obvious from the form of the 30band Hamiltonian given in Ref. 22 that the finite overlap between the even and the odd envelope-function spinor components leads to a change in the sign of curvature of the sub-band dispersion relation.
To further illustrate the zigzag variation in the hole eigenstates observed in Fig. 6, we focus on a result obtained with a basis of size N and one with size N + 1, where N is an odd number. The extra basis function in the N + 1 basis is an odd function. Because the slope of this (N +1)th basis function is largest close to the boundary where the oscillatory N th basis function reaches its maximum, the value of the matrix element between the two states can be large and, therefore, can substantially modify the eigenenergy value. For odd N , the basis is, in fact, not effective in establishing the appropriate curvature of the quantum-well sub-bands. Also, the dominant envelope function in the spinor of the ground hole state has extra zeros close to the well boundary as the inset in Fig. 6(a) shows for N = 7 and the W = 2 nm quantum well. However, if N is an even number, the dominant envelope function of the ground hole state becomes less oscillatory, and the extra zeros of the envelope function do not exist as the inset in Fig. 6(c) demonstrates for N = 20 and the W = 20 nm wide quantum well.
The demonstrated instability of the valence-band solutions with the size of the basis essentially is a consequence of the inappropriate curvature of the hole states as modeled by the diagonal terms of the 30-band model. Such problems do not exist in the six-band model where the sign of curvature of the valence-band dispersion relation is appropriate even if modeled by only the diagonal terms. The problem cannot be solved by increasing the size of the basis, i.e., by taking states outside the FBZ as shown in Figs. 6(a) and (b) into account. In fact, it arises from the need of the envelope function to drop exactly to zero at the boundary.
D. The case of the finite band offset
In order to explore how the assumption of the infinite barrier affects the stability of the hole states calculations, we extend our analysis to the case of a finitedepth Si/SiO 2 quantum well. The valence-band offset in Si/SiO 2 systems has been found to amount to 4.5 eV. 38,39 However, no values for the parameters of the 30-band model have been extracted, therefore, they are assumed to be equal to the parameters of silicon, except for the value of the band gap at the Γ point, which equals 8.9 eV. 38,40 In these calculations, we assume that the Si well of width D is centrally positioned with respect to the simulation box, whose width is denoted by W . As an example, we assume that the Si well is D = 5 nm wide and choose k x = 0 nm −1 and k y = 0 nm −1 .
The obtained eight highest hole energy levels for W = 15 nm and W = 10 nm as functions of the basis order N are shown in Figs. 7(a) and 7(b), respectively. Quite interestingly, the oscillations previously found for the case of the infinite quantum well do not take place when the valence-band offset is finite. For both values of W , the results improve by increasing N , and as expected, the smallest basis is needed to compute the ground state. Furthermore, no big change is observed when the size of the simulation box decreases from W = 15 nm to W = 10 nm, except that a slightly larger basis is needed when the simulation box is wider. Therefore, allowing the envelope functions to exponentially decay stabilizes the energy-level dependence on N . It confirms our previous claim that the steep descents of the envelope functions near the quantum-well boundaries cause numerical instabilities with the hard-wall potential shown in Fig. 6. Furthermore, as Fig. 7(c) demonstrates, when N is sufficiently large (N ≥ 14), we found that quite reliable results are produced irrespective of the value of the valenceband offset. In this figure, the ground-state energies in two unrealistic cases, V of f = 0.5 eV and V of f = 8 eV, are shown along with the ground state for V of f = 4.5 eV, which was previously shown in Fig. 7(b). This figure demonstrates that, if the envelope function is allowed to exponentially decay to zero inside the barrier, computation of quantum-well states becomes quite stable with respect to the number of basis functions. Even for a valence-band offset as large as 8 eV, the convergence of the hole ground-state energy level towards the numerically exact value is found to be quite steady, and only N = 13 basis states are needed to produce the energy value with a negligible error.
The value of the valence-band offset V of f = 4.5 eV is large such that the envelope functions decay fast in the barrier. Hence, the energy of the hole ground state is almost constant for W > 6 nm as Fig. 8(a) shows for N = 20. The energies of the other states depend similarly on W . However, some of them clearly exhibit oscillations. The reason is as follows. Since lower states of the valence band are more oscillatory, we need a broader k-interval than for the ground state to accurately describe them. With increasing box size (and fixed basis size N ), we narrow the covered k space (since k ∼ 1/W ), so these low states are not described accurately. Nevertheless, we need a wider box for these states than for the ground state. When the difference between W and D is not large, the confining potential is like in the infinite quantum well. Consequently, the results for the highest-energy states become quite unstable when N varies as Fig. 6 previously showed. Figure 8(a) indicates that, if one is interested in computing only three highest energy states, even the choice W = 6 nm produces a good result. Varying width and depth of the quantum well might modify this finding, and could depend on the values of the material parameters. But Fig. 8(b) shows that the probability density of the hole ground state is quite confined inside the well (which ranges between 2.5 nm ≤ z ≤ 7.5 nm). It accounts for why the energy of the ground state in Fig. 8(a) does not vary much for W > 6 nm. Moreover, we found that, even for a small barrier width, the highest-energy states can be computed quite accurately, and the accuracy of the calculation of the lower energy states can be improved by increasing the basis order. Hence, thin silicon layers embedded between thick barriers can be modeled accurately by the employed 30-band theory, providing the width of the simulation box and the basis size is large enough. We note that, for finite band offsets, Richard et al. previously employed 40 basis functions in the 30nm-wide box to compute the hole states in the Ge/SiGe quantum well with 1-meV accuracy. 25
V. CONCLUSION
We used a basis consisting of standing waves within the 30-band k·p model to solve the electronic structure of a Si/SiO 2 quantum well grown in the [001] direction. For the assumed infinite potential steps at the well boundaries, we found that numerous spurious solutions are present in the computed electron and hole spectra. These spurious states are classified into two categories: the high-k states which arise from the contribution of the states outside the first Brillouin zone and the extra-valley spurious states which arise from the spurious valley outside the first Brillouin zone. The missing symmetry of the conduction band in bulk silicon as modeled by the 30-band k·p Hamiltonian is found to be the cause of the extra-valley spurious states in the conduction band. Furthermore, we devised a procedure which is able to remove the low-energy spurious states from both the conductionband and the valence-band energy spectra. The latter is found to exhibit instabilities due to a peculiar band mixing and the specific boundary conditions, when the order of the employed basis varies. This failure of the 30-band k·p model might heuristically be accounted for by a large difference in the electron confinement in the hard-wall silicon quantum well and the silicon bulk. However, if the hard-wall confinement is made softer the deficiencies in the 30-band k·p approach are found to disappear for the adequately chosen size of the simulation box and the basis order. Furthermore, the choice of the numerical method is not relevant for the demonstrated instability of the hole states, i.e., we found that it also exists if the finite-difference or finite-element methods are adopted to solve the 30-band eigenvalue problem. | 2013-11-18T09:58:54.000Z | 2013-11-18T00:00:00.000 | {
"year": 2013,
"sha1": "6718ac2b90a249224705cfabc551a4580fdc0aed",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1311.4311",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "6718ac2b90a249224705cfabc551a4580fdc0aed",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
265428040 | pes2o/s2orc | v3-fos-license | Small breast epithelial mucin as a useful prognostic marker for breast cancer patients
Abstract This study aimed to evaluate the clinical utility of small breast epithelial mucin (SBEM) as a prognostic biomarker in an independent patient cohort. The paraffin-embedded tissues and clinicopathological data of 105 patients with breast cancer were collected, and the expression of SBEM in breast cancer samples was detected by immunohistochemical staining. The correlations between clinicopathological variables and the expression of SBEM were analyzed, and its significance as a prognostic indicator for breast cancer patients was determined. Immunohistochemical staining revealed that SBEM was expressed mostly in the cytomembrane and cytoplasm, with markedly increased SBEM expression (≥4 points on staining intensity) observed in 34 of 105 breast cancer tissues (32.4%). Elevated expression of SBEM was found to be significantly associated with larger tumor size (P = 0.002), more frequent lymph node metastasis (P = 0.029), advanced tumor node metastasis stage (P = 0.005), reduced expression of the progesterone receptor (PR) (P = 0.002), and a higher Ki-67 index (P = 0.006). Survival analysis indicated that patients with elevated SBEM expression had worse overall survival (OS) (5-year OS rate: 50.5 vs 93.9% for high and low SBEM expression, respectively, P < 0.001) and disease-free survival (DFS) (5-year DFS rate: 52.8 vs 81.7% for high and low SBEM expression, respectively, P = 0.001) rates than those with low expression of SBEM. Univariate and multivariate Cox analyses demonstrated that elevated expression of SBEM (hazard ratio [HR] = 1.994, 95% confidence interval [CI]: 1.008–3.945, P = 0.047), tumor size (HR = 2.318, 95% CI: 1.071–5.017, P = 0.033), and PR status (HR = 0.195, 95% CI: 0.055–0.694, P = 0.012) were independent predictors of OS in breast cancer patients. Elevated expression of SBEM was associated with both aggressive tumor characteristics and poor survival, indicating its potential as a useful prognostic biomarker for breast cancer patients.
Introduction
Breast cancer is a common malignancy and the leading cause of cancer-related death in women.The disease is heterogeneous in terms of molecular features, morphology, and biological behavior [1][2][3].Due to the lack of recognized symptoms and signs, breast cancer usually presents at an advanced stage, resulting in poor therapeutic efficacy and low survival outcomes [4][5][6].Therefore, it is of great significance to identify novel prognostic biomarkers and therapeutic targets for breast cancer patients.
The mucin family are large proteins with heavy glycosylation, and it could be classified as membrane-bound (MUC1, MUC4, MUC13, and MUC16) and secretory types (MUC2, MUC5AC, MUC5B, and MUC6).The mucins form a chemical barrier in luminal surfaces of organs such as breast, pancreas, and gastrointestinal tract against infection and inflammation.As essential components of cells, mucin proteins also play important roles in cellular apoptosis, adhesion, and metastasis.It has been reported that mucins are used as specific diagnostic markers and therapeutic targets for human cancers.Small breast epithelial mucin (SBEM), also known as MUCL1, is a key member of the membrane-bound mucin family.SBEM is specifically expressed in salivary and mammary glands [7,8].A previous study suggested that SBEM may be a valuable biomarker for bone marrow micrometastases in breast cancer patients [9,10].Emerging evidence has also shown that SBEM levels are markedly increased in the peripheral blood of breast cancer patients in comparison with healthy controls [11].By contrast, low levels of SBEM were more frequently found in breast cancer patients who underwent neoadjuvant chemotherapy [11].In addition, SBEM was identified as an oncogene that promoted the migration and invasion of breast cancer cells by the regulation of the epithelial-to-mesenchymal transition [12].However, the clinical value of SBEM as a prognostic biomarker has not yet been clarified.Therefore, the current study focused on the expression of SBEM in breast cancer tissues using immunohistochemistry and analyzed the correlations between its expression and clinicopathological characteristics as well as survival outcomes in breast cancer patients.Our findings may provide a valuable biomarker for the prognostic assessment and clinical treatment of breast cancer patients.
Patients and clinical samples
A total of 105 patients who had been diagnosed and treated with surgical resection for breast cancer in our hospital between January 2016 and December 2018 were consecutively enrolled.All cases were pathologically diagnosed with invasive breast carcinoma, and the patients were eligible if they had no evidence of distant metastasis at the initial diagnosis.None of the patients received neoadjuvant radiotherapy, chemotherapy, or chemoradiotherapy before surgery.Patients with comorbid malignancies or incomplete clinicopathological and follow-up data were excluded.This study was approved by the Ethics Committee of our institution (No.K2020-003), and informed consent was obtained from all participants before surgery.
Paraffin-embedded tissues and clinicopathological data of 105 breast cancer patients were retrospectively collected.The demographic and clinicopathological features, including patient age, histological grade, tumor size, lymph node metastasis, anatomical tumor node metastasis (TNM) stage, expression of the estrogen receptor (ER) and progesterone receptor (PR), amplification status of human epidermal growth factor receptor 2 (HER2), Ki-67 expression, P53 mutation, and molecular subtype, were analyzed.The expression of ER, PR, and Ki-67 was detected by immunohistochemistry.The threshold of ER and PR positivity was defined as >10%, and cases showing nuclear staining of ≥14% of Ki-67 were classified as having high Ki-67 expression.
Informed consent: Informed consent has been obtained from all individuals included in this study.
Ethical approval:
The research related to human use has been complied with all the relevant national regulations, institutional policies, and in accordance with the tenets of the Helsinki Declaration, and has been approved by the Ethics Committee of Cangzhou People's Hospital (No. K2020-003).
Immunohistochemical analysis
Immunohistochemical staining for SBEM expression was performed using the streptavidin-peroxidase method according to both previous reports and manufacturers' instructions.
Briefly, fixed tissue samples were embedded in paraffin and cut into 4 µm sections.The sections were then deparaffinized with a xylene solution and rehydrated in descending concentrations of ethanol.Antigen repair and retrieval were conducted with 0.01 M sodium citrate buffer (pH = 6.0) under heating for 30 min at 95°C in a microwave, after which the slides were blocked with 0.3% H 2 O 2 solution for 10 min to inhibit the activity of endogenous peroxidase.After washing with phosphate-buffered saline, the sections were incubated with the primary antibody, polyclonal rabbit anti-SBEM (1:200, ab122530, Abcam, Cambridge, UK), overnight at 4°C.On the following day, the slides were incubated with the secondary antibody (anti-rabbit IgG) for 60 min at room temperature.Immunohistochemical responses were visualized by incubation with diaminobenzidine solution, and the sections were counterstained with hematoxylin.Finally, the slides were dehydrated in an alcohol gradient and fixed with xylene, followed by evaluation and imaging under a microscope.
Statistical analysis
The patients were divided into negative-and positiveexpression groups based on the SBEM immunohistochemical results.Correlations between clinicopathological variables and SBEM expression were summarized using a contingency table and analyzed by Pearson's chi-squared or Fisher's exact test.The primary outcomes for the survival analysis were overall survival (OS) and disease-free survival (DFS).The former was defined as the time from the date of surgical treatment to the date of death due to any cause or the last follow-up, while the latter was defined as the time interval from the date of surgical treatment to the date of the first postoperative tumor recurrence or metastasis.Survival differences between patients with negative and positive SBEM expression were evaluated by the Kaplan-Meier method with log-rank tests.In addition, univariate and multivariate Cox regression analyses were performed to identify the significant predictors of OS and DFS for breast cancer patients, and the data were expressed as the hazard ratio (HR) and 95% confidence interval (CI).Data processing and statistical analysis were performed using SPSS implemented with the SPSS version 23.0 (IBM Corp, Armonk, NY, USA).Two-sided t-tests were used for all statistical analyses, and a P-value of <0.05 was regarded as statistically significant.
Expression of SBEM in breast cancer samples and its correlations with clinicopathological characteristics
Immunohistochemical staining showed that SBEM was mainly expressed in the cytomembrane and cytoplasm.
Representative images of positive and negative SBEM expression in breast cancer samples are shown in Figure 1.Positive expression of SBEM was detected in 34 of 105 breast cancer tissues (32.4%).
The correlations between clinicopathological features and SBEM expression are summarized in Table 1.It is found that the positive expression of SBEM was significantly associated with larger tumor size (P = 0.002), frequent lymph node metastasis (P = 0.029), advanced TNM stage (P = 0.005), negative expression of PR (P = 0.002), and higher Ki-67 index (P = 0.006).However, there were no significant correlations between positive SBEM expression and other clinicopathological parameters such as age, ER status, histological grade, and molecular subtype (P > 0.05).
Expression of SBEM and survival outcomes in breast cancer patients
The associations between SBEM expression and survival outcomes in breast cancer patients are shown in Figure 2. The Kaplan-Meier curves indicated that patients with positive expression of SBEM had worse OS and DFS than those with negative expression of SBEM.The 5-year OS rates of patients with positive and negative expression of SBEM were 50.5 and 93.9%, respectively (P < 0.001).
Discussion
The identification of specific biomarkers is useful for the early diagnosis and clinical management of breast cancer.It has been reported that SBEM is a novel tissue-specific protein in mammary glands, and the expression of SBEM is elevated in human breast cancer [8].Current evidence indicates that SBEM is usually overexpressed in breast cancer tissues, cell lines, and peripheral blood samples from breast cancer patients [13,14].Zhang et al. found that both mRNA and protein expression of SBEM was significantly increased in breast cancer tissues [15].Several studies have reported a significant correlation between high SBEM expression and aggressive clinical features such as tumor size, lymph node metastasis, and clinical stage [11,15].In a previous study, Liu et al. identified SBEM as a useful biomarker for predicting lymph node metastasis and micrometastasis in breast cancer patients [11].This suggests that SBEM may provide valuable information for the prognostic assessment of breast cancer.However, little is known about the prognostic value of SBEM in breast cancer patients.In this study, the protein expression of SBEM in breast cancer tissues was evaluated and the relationships between its expression and clinicopathological characteristics as well as survival outcomes of breast cancer patients were analyzed in an independent patient cohort.The data showed the positive expression of SBEM in 32.4% of samples, and its positive expression was significantly associated with larger tumor size, frequent lymph node metastasis, advanced TNM stage, negative expression of PR, and a higher Ki-67 index.Survival analysis revealed that the positive expression of SBEM was correlated with worse OS and DFS, suggesting its potential as a useful prognostic biomarker for breast cancer patients.These findings are consistent with those of Liu et al., who investigated the expression of SBEM in 87 triple-negative breast cancer (TNBC) tissues using immunohistochemistry, where they observed positive expression of SBEM in 58% of patients and found that it was an independent prognostic marker of OS and DFS in TNBC patients [16].Moreover, high expression of SBEM was also detected in peripheral blood samples of breast cancer patients compared with healthy controls and other malignancies [11].In view of its high specificity in breast tissue, evaluation of SBEM levels may thus be helpful for the early diagnosis, risk stratification, and therapeutic management of breast cancer patients.As a potential oncogene, SBEM plays a critical role in breast cancer metastasis.Functional in vitro experiments demonstrated that the overexpression of SBEM markedly increased both the migration and invasion of MDA-MB-231 and MCF-7 cells [12].Additionally, the levels of the epithelial marker E-cadherin were reduced, while those of mesenchymal markers such as vimentin, N-cadherin, and Twist were increased by the overexpression of SBEM [12].These findings suggested that SBEM might promote breast cancer metastasis via the epithelial-mesenchymal transition.However, the biological functions and potential mechanisms of SBEM in the oncogenesis and progression of breast cancer need to be further explored in future research.
Conclusions
In conclusion, our findings suggested that the positive expression of SBEM was associated with both aggressive tumor characteristics and poor survival and that SBEM might thus be a valuable prognostic biomarker for breast cancer patients.As a tissue-specific protein, it is necessary to further evaluate the clinical utility of SBEM in larger patient cohorts.
Figure 1 :
Figure 1: Representative images of SBEM expression in breast cancer tissues: (a) the negative expression of SBEM (×400) and (b) the positive expression of SBEM in immunohistochemical staining (×400).
Figure 2 :
Figure 2: Kaplan-Meier curves showed the associations between SBEM expression and survival outcomes of breast cancer patients: (a) for DFS and (b) for OS.
Table 1 :
Correlations between clinicopathological variables and the expression of SBEM in breast cancer
Table 2 :
Univariate and multivariate Cox analyses for predictors of OS in breast cancer patients
Table 3 :
Univariate and multivariate Cox analyses for predictors of DFS in breast cancer patients | 2023-11-26T05:13:14.904Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "f0e6b0d83856b101ed78cade99fde4e9985746c7",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "f0e6b0d83856b101ed78cade99fde4e9985746c7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
270407683 | pes2o/s2orc | v3-fos-license | Pathway-dependent supramolecular polymerization by planarity breaking
In controlled supramolecular polymerization, planar π-conjugated scaffolds are commonly used to predictably regulate stacking interactions, with various assembly pathways arising from competing interactions involving side groups. However, the extent to which the nature of the chromophore itself (planar vs. non-planar) affects pathway complexity requires clarification. To address this question, we herein designed a new BOPHY dye 2, where two oppositely oriented BF2 groups induce a disruption of planarity, and compared its supramolecular polymerization in non-polar media with that of a previously reported planar BODIPY 1 bearing identical substituents. The slightly non-planar structure of the BOPHY dye 2, as evident in previously reported X-ray structures, together with the additional out-of-plane BF2 group, allow for more diverse stacking possibilities leading to two fiber-like assemblies (kinetic 2A and thermodynamic 2B), in contrast to the single assembly previously observed for BODIPY 1. The impact of the less rigid, preorganized BOPHY core compared to the planar BODIPY counterpart is also reflected in the stronger tendency of the former to form anisotropic assemblies as a result of more favorable hydrogen bonding arrays. The structural versatility of the BOPHY core ultimately enables two stable packing arrangements: a kinetically controlled antiparallel face-to-face stacking (2A), and a thermodynamically controlled parallel slipped packing (2B) stabilized by (BF2) F⋯H (meso) interactions. Our findings underscore the significance of planarity breaking and out-of-plane substituents on chromophores as design elements in controlled supramolecular polymerization.
Materials and Methods
Chemicals and Reagents: All chemicals were purchased from Sigma Aldrich (St. Louis, MO, USA), TCI Europe N.V. (Tokyo, JP) and BLD pharm (Senefelder ring, Reinbeck, DE) and used without further purification methods unless otherwise mentioned.Silica gel was used for column chromatography unless otherwise mentioned.
Column chromatography: Preparative column chromatography was performed in self-packed glass columns of different sizes with silica gel (particle size: 40-60 µm, Merck).Solvents were distilled before usage.
NMR spectroscopy: 1 H and 13 C NMR spectra were recorded at 298 K on Avance II 300 and Avance II 400 from Bruker for routine experiments using tetramethylsilane (TMS) as internal standard.Additional 1 H as well as 2D 1 H-19 F HOESY spectra were recorded on an Agilent DD2 500 ( 1 H: 500 MHz) and an Agilent DD2 600 ( 1 H: 600 MHz) at a standard temperature of 298 K in deuterated solvents.Multiplicities for proton signals are abbreviated as s, d, t, q and m for singlet, doublet, triplet, quadruplet and multiplet, respectively.
Mass spectrometry (MS):
MALDI mass spectra were recorded on a Bruker Daltonics Ultraflex ToF/ToF or a Bruker Daltonics Autoflex Speed with a SmartBeamTM NdYAF-Laser with a wavelength of 335 nm.ESI mass spectra were measured on a Bruker MicrOToF system.The signals are described by their mass/charge ratio (m/z) in u.
UV-Vis spectroscopy:
UV-Vis absorption spectra were recorded on a JASCO V-770 or a JASCO V-750 with a spectral bandwidth of 1.0 nm and a scan rate of 400 nm min -1 .Glass cuvettes with an optical length of 1 cm, 1 mm and 0.1 mm were used.All measurements were conducted in commercially available solvents of spectroscopic grade.
Fluorescence spectroscopy: Fluorescence and excitation spectra were recorded on a JASCO Spectrofluorometer FP-8500 in quartz cuvettes (SUPRASIL®, Hellma) with an optical length of 1 cm.
FT-IR spectroscopy:
Solution and solid-state measurements were carried out using a JASCO-FT-IR-6800 equipped with a CaF 2 cell with a path length of 0.1 mm.
Atomic force microscopy (AFM):
The AFM images were recorded on a Multimode®8 SPM System manufactured by Bruker AXS.The used cantilevers were AC200TS by Oxford Instruments with an average spring constant of 9 N m -1 , an average frequency of 150 kHz, an average length of 200 µm, an average width of 40 µm and an average tip radius of 7 nm.All samples were drop-casted from freshly prepared solutions onto an HOPG surface.
Gel permeation chromatography (GPC):
Gel permeation chromatography was performed on a Shimadzu prominence GPC system equipped with two Tosoh TSKgel columns (G2500H XL; 7.8 mm I.D. x 30 cm, 5 µm; Part.No. 0016135) using CH 2 Cl 2 as eluent.The solvent flow was set to be 1 mL/min.Detection was carried out via a Shimadzu prominence SPD-M20A diode array detector (DAD).
Scanning electron microscopy (SEM):
SEM images of self-assembled species were recorded on a Thermo Fisher Scientific Phenom ProX Desktop SEM.All samples were drop-casted on a silicon wafer surface.
Transmission electron microscopy (TEM):
The TEM images were recorded on a FEI TITAN Themis G3 60-300 transmission electron microscope manufactured by Thermo Fischer Scientific with an operation voltage of 60 kV and 300 kV.The X-FEG field emission gun gives a bright and highly stable electron source for the measurements for high resolution images.This device is also equipped with monochromator, Cs image corrector, quadruple EDX-system, Fischione model 3000 HAADF detector, a fast CMOS camera to capture high resolution images with very fast frame rates and a high-resolution EEL spectrometer (GATAN Quantum 965) for detailed analysis of the structures.The samples were prepared on carbon coated mesh copper grid by drop casting the sample and the excess liquid was drained using a filter paper that was placed under the grid.Compound 1, B, C, D and 3,4,5-tris(dodecyloxy)-N-(4-ethynylphenyl)benzamide (E) were prepared by following the reported synthetic procedures and showed similar spectroscopic properties to those reported therein. [1,2]nthesis of linear BOPHY derivative (2):
Nucleation-Elongation model for cooperative supramolecular polymerization
The equilibrium between the monomeric and supramolecular polymer species can be described in a cooperative process with the Nucleation-Elongation model which was developed by Ten Eikelder, Markvoort and Meijer. [3,4] is model is used to describe the aggregation of 2, which exhibits a non-sigmoidal cooling curve as shown in temperature-dependent UV-Vis experiments.The model extends nucleation-elongation based equilibrium models for growth of supramolecular homopolymers to the case of two monomer and aggregate types and can be applied to symmetric supramolecular copolymerizations, as well as to the more general case of nonsymmetric supramolecular copolymerizations.In a cooperative process, the polymerization occurs via two steps: in a first step (nucleation), a nucleus, which is assumed to have a size of 2 molecules, is formed.In a subsequent step, the elongation of the nuclei into one-dimensional supramolecular polymers occurs.The values T e , ΔH°n ucl , ΔH° and ΔS° can be determined by a non-linear least-square analysis of the experimental melting curves.The equilibrium constants associated with the nucleation and elongation phases can be calculated using the following equations: (1) And the cooperative factor (σ) is given by:
Denaturation Model for Supramolecular Polymerization
The denaturation model [5] is based on the concentration-dependent supramolecular polymerization equilibrium model by Goldstein, [6] where the polymerization is described as a sequence of monomer addition equilibria. [ For the cooperative model, K n < K e and for the isodesmic process K n = K e .The concentration for each species P i is given by The dimensionless mass balance is obtained by inserting the dimensionless concentration = [ ], the monomer concentration x= K e [X] and the concentration of each species P i (for i≤ n): = −1 and for > n : = −1 ): Both sums are evaluated by using standard expressions for converging series: With = and = total monomer concentration The sum solved by standard numerical methods (Matlabfzerosolver) yields the dimensionless monomer concentration .Considering that every species with is defined as aggregate, the degree of > 1 aggregation results in: Via = ⅇ (− 0 ) the denaturation curves can be obtained with f defined as volume fraction of good solvent: It is assumed that the cooperativity factor is independent of the volume fraction and the m value for the elongation regime equals the m value for nucleation.The denaturation data needs to be transformed into the normalized degree of aggregation, if fitted to the supramolecular polymerization equilibrium model: The optimization of the four needed parameters (∆G 0 , m, σ and p) to fit the equilibrium model to the experimental data (normalized degree vs. f) is done by the non-linear least-squares analysis using Matlab (lsqnonlinsolver).The data is then fitted with the non-linear least squared regression (Levenberg Marquardt algorithm).
Fluorescence quantum yield
Absolute luminescence quantum yields were measured on a JASCO spectrofluorometer FP-8500 (equipped with an ILF-835 integrating sphere) with a band width of 5 nm and a scan rate of 1000 nm/min.Quartz cuvettes with an optical path of 5 mm were employed.The measurements were carried out with a specific excitation wavelength for each sample, as shown in Table S4.
Supplementary Figures
Minor shifts in emission in solvents such as toluene arise from solvatochromism, which takes place without any aggregation process.TEM measurements further confirm that the aggregates 2A and 2B also exhibit similar morphologies to those observed in AFM and SEM.
Theoretical Calculations
The DFT B3LYP/6-31g(d,p) basis set [7,8] was used to perform the geometry optimization of the different supramolecular species (monomer, dimers and trimers).To reduce the computational cost of theoretical calculations, the long alkoxy chains were replaced by methoxy groups.The corresponding absorption spectra for the monomer and trimers were calculated by using the rcam-B3LYP/6-31g(d,p) method.All computations were carried out using Gaussian-16 (G16RevC.01). [11]The time-dependent density functional theory (TD-DFT) [9] was selected for the geometry optimization (monomer, dimers and trimers), employing the CAM-B3LYP density functional [10] together with the 6-31G(d,p) basis set. [7,8] he corresponding absorption spectra for the monomer and trimers were calculated by TD-DFT using the rcam-B3LYP/6-31g(d,p) method including 80 excitation energies.PyMOL was used as molecular visualization program.The unusual emission features of the face-to-face stacked aggregate 2A probably arise from the defects in the packing as also evident from the above-mentioned theoretical calculations.
Figure S8 :
Figure S8: Variable Temperature (VT) cooling (a) and heating (b) UV-Vis spectra of a 10 µM MCH solution of 2 with a cooling/heating rate of 1 K min -1 .c) α agg vs. T at wavelength of 505 nm.
Figure S11 :
Figure S11: Variable Temperature (VT) UV-Vis spectra obtained upon heating a sonicated solution of 2A of compound 2 at different concentrations: a) 5 µM b) 10 µM c) 15 µM and d) 30 µM with a heating rate of 1 K min -1 in MCH.
Figure S19 :
Figure S19: SEM images of aggregate 2B prepared by drop-casting 10 µL of 2 (c= 20 µM in MCH) on a silicon wafer substrate.The images reveal elongated fibre-like structures with a more significant bundling than those of 2A.
Figure S20 :Figure S21 :Figure S22 :
Figure S20: AFM height (a,c) and corresponding phase (b,d) images of aggregate 2A prepared by cooling a 10 µM solution in MCH from 363 K to 298 K with a cooling rate of 1 K min -1 followed by drop-casting the sample on HOPG surface.
13 C
Table S5 :
Different H-bond distances in aggregates 2A and 2B. | 2024-06-13T15:09:44.583Z | 2024-06-11T00:00:00.000 | {
"year": 2024,
"sha1": "a65cc2d8dc2b67755ebd3c3d03cf3a3bb0b11de5",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1039/d4sc02499k",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6b28ea4a6c4fed8fb1565b1efdfbdb6c2d0d073a",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": []
} |
237555547 | pes2o/s2orc | v3-fos-license | Colon capsule endoscopy in clinical practice: lessons from a national 5-year observational prospective cohort
Background and study aims Colon capsule endoscopy (CCE) has been proposed as an alternative to colonoscopy for screening patients at average risk of colorectal cancer (CRC). A prospective national cohort was developed to assess relevance of CCE in real-life practice and its short- and long-term impacts on clinical management. Patients and methods All patients who underwent a CCE in France were prospectively enrolled from January 2011 to May 2016 and reached annually by phone until May 2017. All CCE and colonoscopy reports were systematically collected. Results During the study period, 689 CCEs were analyzed from 14 medical centers. Median follow-up time was 35 months [IQR: 12–50]. Indication for CCE was mainly for elderly patients (median age: 70 years, IQR: [61–79]) due to anesthetic or colonoscopy contraindication (n = 307; 44.6 %). Only 337 CCEs (48.9 %) were both complete and with adequate bowel preparation. Advanced neoplasia (adenoma with high-grade dysplasia or CRC) was diagnosed following 32 CCEs (4.6 %). Among patients who underwent colonoscopy or therapeutic surgery following CCE, 18.8 % of all advanced neoplasias (6/32) had not been diagnosed by CCE mainly due to technical issues. Performing a colonoscopy in the case of significant polyps or insufficient bowel cleansing or after an incomplete CCE allowed the diagnosis of 96.9 % of all identified advanced neoplasias (31/32). Conclusions Outside the scope of academic trials, improvement is needed to increase the reliability of CCE as less than half were considered optimal i. e. complete with adequate bowel cleansing. Most of missed colonic advanced neoplasia were due to incomplete CCE with distal neoplasia location.
Results During the study period, 689 CCEs were analyzed from 14 medical centers. Median follow-up time was 35 months [IQR: . Indication for CCE was mainly for elderly patients (median age: 70 years, IQR: [61-79]) due to anesthetic or colonoscopy contraindication (n = 307; 44.6 %). Only 337 CCEs (48.9 %) were both complete and with adequate bowel preparation. Advanced neoplasia (adenoma with high-grade dysplasia or CRC) was diagnosed following 32 CCEs (4.6 %). Among patients who underwent colonoscopy or therapeutic surgery following CCE, 18.8 % of all advanced neoplasias (6/32) had not been diagnosed by CCE mainly due to technical issues. Performing a colonoscopy in the case of significant polyps or insufficient bowel cleansing or after an incomplete CCE allowed the diagnosis of 96.9 % of all identified advanced neoplasias (31/ 32).
Conclusions Outside the scope of academic trials, improvement is needed to increase the reliability of CCE as less than half were considered optimal i. e. complete with adequate bowel cleansing. Most of missed colonic advanced neoplasia were due to incomplete CCE with distal neoplasia location.
been proposed as an alternative to colonoscopy for screening of average-risk colorectal cancer patients who show contraindications or are unwilling to undergo colonoscopy, and/or in cases of incomplete colonoscopy (cases of stenosis or insufficient bowel cleansing excluded) [6][7][8]. It has been demonstrated as a sure and effective tool to detect polyps at high risk of malignant development [9,10]. Diagnosis performance of secondgeneration CCE for detection of polyps ≥ 6 mm has been evaluated in several studies, with a sensitivity ranging from 79 % to 89 % and a specificity ranging from 64 % to 97 % [11][12][13][14][15][16]. However, clinical relevance of CCE in real-life practice and its shortand long-term impacts on clinical decisions have never been described. Indeed, CCE is of particular interest when colonoscopy cannot be performed, a clinical situation that could not be explored by clinical trials comparing CCE to colonoscopy. The aim of this study was thus to describe feasability, patients profile, results and the decision process that follows the use of CCE when performed in real-life.
To assess these questions, the results of the French National Observatory of Colon Capsule Endoscopy (ONECC), a systematic national observational cohort of patients who underwent second-generation CCE in France with a 5-year follow-up, are presented herein.
Patient inclusion
During the study period, the use of CCE in France was only possible within the ONECC cohort piloted by the French Society of Digestive Endoscopy. Thus, all patients who underwent a CCE in France were enrolled in a prospective manner, from 2011 to 2016.
Ethical considerations
Written, informed consent was obtained from each patient included in the study. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki (updated in 2013). This study was authorized by the National Commission for Data Protection and Liberties under the no. 1519762 and is registered on ClinicalTrials.gov (NCT 03533894) in accordance with the legislation in place at the time of the study.
Procedure
All patients ingested second-generation CCE (Pillcam Colon 2, Medtronic, Minnesota, United States) after a 1-day clear liquid diet and bowel preparation consisting of 4-L or 2-L (Moviprep split doses of polyethylene glycol based preparation ± bisacodyl 5 mg (given and as a rescue if CCE was not excreted). 40 mg sennosides was also given 2 days before CCE ingestion. After ingestion, the patient received a booster regimen of sodium-phosphate solution (45 mL and 30 mL) or, if contraindicated, polyethylene glycol (500 mL). CCE videos were then analyzed by a trained gastroenterologist using dedicated software (Rapid Reader 7.0, Medtronic, Minnesota, United States).
Data collection
The gastroenterologist that prescribed CCE implemented an online electronic Case-Report-Form (e-CRF) mentioning: demographic data, further indication of colon exploration, indication of CCE, polyp presence, location, and size, bowel cleansing grade, complication during recording, and completeness of colon exploration (defined by a CCE where all colon segments were declared to be seen). Were considered "significant", polyps ≥ 6 mm in size and/or the association of ≥ 3 polyps [11]. Bowel cleanliness was graded according to the validated Leighton-Rex scale from 1 to 4 (1: Poor; 2: Fair; 3: Good; 4: Excellent) [17]. The gastroenterologist who analyzed the CCE also mentioned if he retained the indication to perform a colonoscopy following the CCE. There was one CCE reader per center, all with > 300 capsule endoscopy readings at the time of study (only small bowel capsule, as this was the first time CCE was used in France). All CCE readers followed a 2-day specific training for CCE reading. If a colonoscopy was performed, results were also reported. All CCE and colonoscopy reports were systematically collected and reviewed, and data analysis was performed only on complete data for which all reports were available to ensure data robustness. Diagnosis of neoplasia were all histologically confirmed.
Follow-up data
All enrolled patients were annually reached by phone during the study period and until May 2017. In cases of loss to followup, local administrative registers were systematically consulted to check for patient death at the end of follow-up.
Statistical analysis
Odds ratios were calculated and Fisher's exact test performed using GraphPad Prism version 6.00 for Mac OS X (GraphPad Software, La Jolla, California, United States, www.graphpad. com).
Results
Between 2011 and 2016, a total of 1,282 CCEs were performed in France. Complete data were available for 689 CCEs (53.7 %) (▶ Fig. 1) from 14 different medical centers (7 teaching hospitals, 7 general hospitals). The median (interquartile range; IQR) number of CCEs per center was 30 . Median follow-up was 35 months (12-50). Follow-up was not possible for 107 patients (15.5 %). Median (IQR) age for patients undergoing a CEE was 70 years (61-79) years and the population concerned showed important comorbidities. The main indication for CCE was contraindication to anesthesia or colonoscopy (n = 307; 44.6 %). At the end of the study, 115 patients (16.7 %) were dead (▶ Table 1). Cause of death was reported in 26.1 % of cases (30/115), among which none were related to a colorectal neoplasia.
In the majority of cases (409/689; 59.4 %), the gastroenterologist who completed the e-CRF did not recommend a colonoscopy following CCE, mainly due to the absence of polyps or the recording of a non-significant polyp (351/409; 85.8 %; ▶ Fig. 1). In this population for whom a colonoscopy was not recommended, 30.3 % (124/409) had an incomplete CCE. For those patients, the median age and indication for CCE were comparable to the whole cohort. Among patients who did not undergo a colonoscopy after the initial CCE, only one patient was reported with a CRC: one intramucosal cancer detected at colonoscopy 4 years after the initial CCE (colonoscopy performed after a sigmoid diverticulitis; ▶ Fig. 2).
In 40.6 % of patients (280/689) a colonoscopy was recommended. Indications for colonoscopy are described in ▶ Fig. 1. Among those with a recommendation to perform colonoscopy, 18.6 % (52/280) finally did not perform the examination mainly because of patient refusal (18/52; 34.6 %) or a confirmed medical contraindication to colonoscopy (17/52; 32.7 %). In 11.1 % of cases (31/280), the colonoscopy was recommended due to the diagnosis of a polyp on CCE even if the polyp did not meet criteria for significance. Overall, 27.9 % of CCEs (31/111) with a non-significant polyp gave rise to the indication for a colonoscopy.
When a colonoscopy was performed (n = 228) a polyp was diagnosed in 45.2 % of cases (103/228) representing 290 polyps Table 4. Importantly in four of six (66.7 %) of these misdiagnosed cases, capsule examination was incomplete and the advanced neoplasia was described as distal (sigmoid or rectum). In one case, the CCE and colonoscopy were concordant in the identification of a 5-mm polyp of the sigmoid colon, (i. e. a non-significant polyp according to the definition) that still justified a colonoscopy for the referent gastroenterologist with histology revealing an intramucosal CRC. In the last case, a lesion characterized as a voluminous lipoma of about 3 cm was described in the colonic region where a voluminous CRC was di-▶ agnosed at colonoscopy, raising the question of lesion misdiagnosis on CCE.
Overall colonoscopy and CCE were concordant (polyp size and location) in 48.2 % of cases (110/228). For patients with a non-significant polyp at CCE and who underwent colonoscopy (n = 44), only one polyp (1/44; 2.3 %) corresponded to an advanced neoplasia (rectal CRC) after a CCE with insufficient bowel cleansing. Performing a colonoscopy after CCE in the case of significant polyps or insufficient bowel cleansing or after an incomplete CCE allowed the diagnosis of 96.9 % of all identified advanced neoplasias (31/32).
Discussion
In the ONECC cohort, CCE was mainly used for elderly and fragile patients with contraindication to colonoscopy, which may represent one main indication for colon capsule in order to avoid sedation or anesthesia in these patients. About half of CCEs identified a polyp and a colonoscopy was recommended for 40.6 % of all CCEs performed. About 5 % of CCEs led to a diagnosis of advanced neoplasia with a concordance between capsule/invasive colonic explorations of 81.3 %. However, less than half of all CCEs were considered optimal, i. e. complete with adequate bowel cleansing. False-negative CCE cases were mainly related to incomplete CCEs with distal CRC.
The aim of this study was not to assess the diagnosis performance of CCE given the fact that all patients did not perform the gold standard diagnostic test (colonoscopy); however, this is the first population-based, real-life study of CCE with longterm prospective follow-up. With patient enrollment coming from teaching hospitals and general hospitals, this study gives a good overview of how CCE can be used in clinical practice, and how it can impact patient management outside the scope of academic comparative controlled trials.
As confirmed by the present results, the main clinical situation of interest for CCE use is when colonoscopy cannot be performed (incomplete or contraindicated), a clinical situation that cannot be evaluated in a previous study when CCE was compared to colonoscopy. In such situations, CCE has already demonstrated superiority against CT colonography, the other alternative for noninvasive colonic exploration [15,18,19]. The ONECC cohort further showed reassuring results for CCE use in this population with high concordance between CCE and invasive colon exploration for high-grade dysplasia or CRC.
Moreover, in this real-life cohort, use of CCE showed specific interests in terms of management, demonstrating the possibility to perform colonic surgery directly after obvious tumor identification on CCE, with an increasing patient care efficiency. Of note, in about 10 % of cases, colonoscopy was recommended by practitioners despite the presence of non-significant polyps during a reassuring complete CCE with adequate bowel cleansing. This suggests that polyp size and number may not be the only way to assess the relevance of performing a colonoscopy after CCE. Clinical parameters, patient and gastroenterologist risk perception, and the optical aspect of the polyp on CCE, particularly a suspicious aspect, contribute to the decision-making process. Thus, it might be of interest to systematically assess the degree of suspicion of malignancy on CCE reports based on the polyp images obtained to help clinical decision in cases where size and number may not be sufficient. More precisely, in this cohort, this could have helped avoid the one missed case of high-grade dysplasia from the 5-mm isolated polyp identified on CCE. Developing a potential malignancy qualitative scale may be of interest to describe polyps seen on CCE in order to homogenize descriptions.
A limitation of the present study relates to missing data, as about 15 % of patients were lost to follow-up and complete CCE and colonoscopy reports were not available for half of the CCEs performed in France, and thus, not included in the analysis. Death causes were also not all known and some deaths related to colonic neoplasia or new diagnosis of CRC may have been missed. Second, the compliance of patients in taking the entire bowel preparation was not reported. Therefore, we could not differentiate between insufficient bowel cleansing due to lack of compliance or to the fact that the actual protocol for bowel preparation is not sufficient for CCE. However, to our knowledge, this work is the first to provide insights on how CCE is used in daily practice and its strength and limits.
▶ Table 4 Description of patients with advanced neoplasia at colonoscopy not detected at colon capsule endoscopy (CCE The main limitations related to CCE use are insufficient bowel cleansing and incomplete examination [20]. Despite using an optimized protocol of bowel cleansing with booster and split PEG preparation, fewer than half of CCEs were considered complete with adequate bowel cleansing, which is about 25 % less than what has been described in academic studies [21,22]. Actual strategies for bowel preparation are insufficient and new approaches should be developed [17]. Recently, Fuccio et al identified risk factors associated with poor colon cleansing for colonoscopy in hospitalized patients [23]. The systematic screening for such factors before CCE could prompt extended bowel preparation to optimize CCE diagnosis performance. However, a CCE that is incomplete or with insufficient bowel cleansing can still be of clinical interest, as demonstrated by the fact that nearly half of the CCEs with a significant polyp were described as incomplete or with insufficient bowel cleansing. Because most missed cases of advanced neoplasia were due to incomplete CCE with distal CRC location, this raises the question of completing CCE with a distal colonoscopy in patients with contraindication to sedation and incomplete CCE. This is supported by the fact that herein, performing a distal colonoscopy after CCE would have allowed the detection of nearly all identified advanced neoplasias. Given these results, a possible recommended approach for elderly patient management would be to perform a colonoscopy after CCE in case of: 1. identification of a significant polyp; 2. insufficient bowel cleansing; or 3. identification of a polyp with an aspect suggestive of advanced malignancy; and 4. to propose only a distal colonoscopy to avoid sedation-associated risks in cases of incomplete CCE (▶ Fig. 3).
Conclusions
In conclusion, the ONECC cohort showed that a complete CCE with adequate bowel preparation can be used to exclude colonic advanced neoplasia in daily practice in subjects for whom colonoscopy cannot be performed. However, improvements in completion rate and cleansing protocols are needed to enhance CCE diagnostic accuracy. | 2021-09-19T05:14:41.809Z | 2021-09-01T00:00:00.000 | {
"year": 2021,
"sha1": "11c14bda6cc95beee7448077f913681dd918eca7",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/a-1526-0923.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "11c14bda6cc95beee7448077f913681dd918eca7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250670319 | pes2o/s2orc | v3-fos-license | LogAmp electronics and optical transmission for the new SPS beam position measurement system
A new front-end board is under development for the CERN SPS Multi ORbit Position System (MOPOS). Based on logarithmic amplifiers, it measures the beam position over a large dynamic range of beam intensities and resolves the multi-batch structure of the SPS beams. Analogue data are digitized at 10 MS/s, packed in frames by an FPGA and on every turn sent to the readout board, via a 2.4 Gb/s optical transmission link. A first prototype has been successfully tested with several SPS beams. This paper presents an overall description of the system and its capabilities highlighted by the first beam measurements.
Introduction
The SPS orbit and trajectory measurement system relies on 216 Beam Position Monitors (BPM) distributed all around the ring. They are either single plane electrostatic rectangular "shoe-boxes", based on electrodes with a linear response, or stripline directional couplers, both providing horizontal and vertical beam signals.
The upgrade of the present MOPOS is required to replace the obsolete electronics and to improve the quality of the position measurements, currently done with a timing resolution of one SPS turn (around 23 µs) by means of a peak-detector. The project aims at developing a radiation hard (100 Gy/year) electronic system capable of providing both high dynamic range measurements, covering the various beam configurations available in the SPS, and a fast enough data sampling rate to resolve the 2 µs long multi-batch structure of the beam.
Requirements
The SPS currently accelerates both proton and lead-ion beams, with a wide range of bunch filling patterns and bunch charge as shown in table 1. We call batch a bunch train coming from the PS ring and injected in the SPS. The nominal number of batches N batch can vary between 1 and 4 for protons and up to 13 for ion-beams. Because the pick-up provides signals directly proportional to both beam position and beam intensity, the electronic system must cover a dynamic range of more than 70 dB [1]. In addition, due to the different filling patterns and bunch spacing, the bandwidth of the electronics will vary from 40 to 200 MHz. The required resolution for low and high intensity beams is presented in table 2 both for the orbit mode, providing a beam position averaged over 1 ms, and for the trajectory mode, in case of turn-by-turn acquisitions.
Electronics upgrade
A simplified block diagram of the newly developed electronics is presented in figure 1. Each pickup will be read out using a front-end board located in the SPS tunnel, and exposed to radiation doses of up to 100 Gy per year. Despite the technical challenges imposed by a radiation-hard design, this solution was favoured to avoid the very long and expensive cables which are currently used in the present system. In view of a possible upgrade for two-plane beam position monitors, the front-end electronics integrates two processing channels, for both the horizontal and the vertical planes. On the front-end board, after a first analogue processing stage, signals are digitized locally and transmitted via optical fiber over long distances, of up to 1 km, to a read-out board located in a surface building. The read-out board communicates through a VME interface with the software environment in order to receive commands and send data. The machine timing is distributed on the read-out boards providing both an injection pre-pulse signal, sent around 70 µs before each beam injection, and the SPS Turn Clock, sent every 23 µs. The SPS Turn Clock pulse is used to synchronize all the data chain locally on the front-end boards. The read-out board is equipped with a powerful FPGA, which provides several data acquisition modes described later in this document. The data are published via our standard control system. In total, the Multi Orbit POsition System has 216 front-end boards and 54 read-out boards.
Front-end
The front-end board has been designed to be modular with independent analogue and digital circuits connected together. The analogue board is based on bandpass filters and dual logarithmic amplifiers (Analog Devices ADL5519), whereas the digital board consists of an octal 14-bit ADC (Analog Devices AD9252), an FPGA (Xilinx Spartan6) and a Small Form-factor Pluggable (SFP) optical transceiver. Analogue Front-End. The architecture of a single-plane analogue front-end board is depicted in figure 2. The input signals come either from the pick-up or from a calibrator, which is remotely controlled by the digital front-end board.
The beam displacement y can be derived from the logarithmic difference of the input signals from opposite electrodes U and D as follows: Consider the series expansion of the natural logarithm: Converting to decibels and using the first term leads to: Then, for small beam displacements, the log-ratio gives a good approximation of y: For each BPM plane, the input stage is made of a low-pass filter that minimizes the bunch shape variation during acceleration. The signals from each electrode are split into three parallel detection chains with different band-pass filters and gain stages to cover the high dynamic range and the different beam patterns available in the SPS. Signals from opposite electrodes are processed by the same dual logarithmic amplifier, which provide a direct measurement of the beam position. Therefore each plane generates three analogue position signals, called Delta 200 MHz, Delta 40 MHz Low Sensitivity and Delta 40 MHz High Sensitivity. One should however note that only one signal will be used to calculate the beam position, this selection being made in the read-out -3 - board. In addition, all the logarithmic amplifier's outputs are summed together. The corresponding Sum signal gives an estimate of the beam intensity and it is used to detect the presence of the beam and to validate the acquisitions.
Digital front-end. The architecture of the digital front-end board is depicted in figure 2. Each BPM plane provides three beam position signals and one beam intensity signal. They are digitized at 10 MS/s and serialized. The on-board FPGA communicates with the read-out board via a 2.4 Gb/s, bidirectional, optical transmission link. Every 23 µs it receives from the read-out board a frame of data, which carries the SPS Turn Clock information together with some commands used to control the parameters of the calibration system. This timing information is then locally decoded and each ADC measurement is tagged in the FPGA by a time-stamp with respect to the rising-edge of the SPS Turn Clock. Data are then packed in a frame every turn and transmitted to the read-out board.
Radiation tests. The front-end board is designed to be located in the SPS tunnel, where it must withstand radiation doses of up to 100 Gy per year. Several commercial components have been tested under radiation in 2012 [2]. Two different setups have been prepared with 1 kGy of total dose targeted for each device under test. In the first setup, logarithmic amplifiers, ADC-drivers and voltage regulators have been tested at the Paul Scherrer Institute (PSI), in the Proton Irradiation Facility (PIF). The corresponding data allowed to select the components to be used in the Multi Orbit POsition System. In the second test setup, several families of bidirectional Small Form-factor Pluggable (SFP) optical transceivers, either single-fiber or double-fiber, have been tested at both PSI-PIF and CERN CNRAD. The test results showed that the SFP components are very sensitive to irradiation and for our current renovation program we are now considering to use specifically designed radiation hard optical transceivers rather than COTS components. For the final version, the FPGA and the ADC have still to be selected.
Read-out
The read-out board, shown in figure 3, is a custom made VME FMC Carrier, called VFC [3], which has been developed as a general beam instrumentation acquisition board. On the VFC there are two Xilinx Spartan6 configured to manage up to 6 SFP optical transceivers: 2 on the front panel, one of those receiving the Beam Synchronous Timing signals, like the SPS Turn Clock, and 4 available -4 - Measurements. From turn-by-turn data, several acquisition modes, presented in table 3, have been implemented in the FPGA of the read-out board. They provide the trajectory and orbit measurements as requested by the SPS operators. For each mode, two parameters must be defined and configured on the FPGA: the number of ADC slots selected per turn, from 1 up to 235, with an ADC sampling rate of 10 MS/s, and the number of turns to be acquired. The FIFO Mode is a debugging mode which allows to save all the data for 3 consecutive SPS turns, corresponding to about 70 µs. The Capture Mode is an operational mode used to acquire data over a time window, which can be up to 220 turns, if all the ADC slots are saved for each turn, or up to 64 k turns, when a maximum of 8 ADC slots are selected per turn. The data acquired with the FIFO and the Capture mode are transmitted to the software without any processing. These two modes have been used to test the first MOPOS prototype with several SPS beams in January/February 2013. The other acquisition modes, presented in table 3, have been already implemented in the firmware of the read-out board but were not tested with the beam. The main difference with respect to the FIFO and the Capture modes is that the data are already analyzed by the FPGA firmware, which outputs a mean value to the software. The Injection Trajectory contains up to 13 mean position values, one for each batch, which are defined by a certain number of programmable ADC slots. The Orbit Diagnostic outputs 235 mean position values, one for each ADC slot. On the contrary the Global Orbit gives a value for each turn, averaged on a selected number of ADC slots, which can possibly be non consecutive. Finally the Continuous Filter outputs one value per BPM plane every ms, which is the filter response in permanence over all the data received from the front-end.
First measurements with beams
The first MOPOS prototype has been assembled and tested in the SPS, with both proton and leadion beams, under different conditions, including single bunch, 25 ns and 50 ns bunch trains. During -5 - this test, the front-end, as visible in figure 4, was installed in the same room as the read-out electronics and therefore not exposed to radiation. The analogue boards were connected to a horizontal Stripline BPM and to a vertical "shoe-box" BPM. The digital front-end is made of an octal-14bit-ADC commercial evaluation board, a calibrator and the FPGA-based acquisition board, which is connected via an optical fiber to the read-out VFC board. The beam injection oscillations have been successfully measured in both planes. To evaluate the sensitivity of the system, local beam displacements, usually called orbit-bumps, were introduced during the SPS machine cycle using dipolar magnetic correctors. The beam-synchronous pre-pulse signal has been used to trigger the acquisition either at injection or during the orbit-bump.
Injection oscillations
The injection oscillations of a proton batch have been measured and are displayed in figure 5. The left plot presents typical single-bunch oscillations observed at injection on the horizontal stripline pick-up using the 40 MHz high sensitivity mode. These oscillations are usually damped within 1 ms when the transverse damper is active. On the right plot, smaller vertical oscillations are shown for the vertical shoe-box BPM, which have been measured with the 40 MHz low sensitivity mode for a beam of 48 bunches per batch and 25 ns bunch-spacing.
In figure 6, the left plot shows the beam position and intensity signals of 4 batches containing 36 bunches with 50 ns bunch-spacing and 1.4 10 11 protons per bunch. The measurement was done during the injection of the fourth batch, which is clearly off-centre with respect to the other batches.
-6 - The acquisition of a lead-ion beam of 10 10 charges per bunch, when batch 11 is injected, is presented on the right plot in figure 6. The batches of Pb 82+ ion beams injected in the SPS contained two bunches with 200 ns spacing. Notice that batch 11 is off-centre with respect to the other centred batches. For heavy ion beams, only the 40 MHz High-Sensitivity channel provided useful data; the other channels were dominated by the noise of the logarithmic amplifiers and the associated ADC-drivers.
Local beam displacements
In order to characterise the performance of the system, local beam displacements of ± 1 mm, ± 2.5 mm and ± 5 mm have been put in place in the vicinity of each BPM under test. Each orbit bump, either in the horizontal or in the vertical plane, is using 3 magnetic correctors. They are programmed to start 1 s after the injection. As depicted in figure 7, the bump is kept stable for 200 ms, with a rise and fall time of 100 ms. For each displacement, three measurements have been acquired: before, in the center, and after the bump.
Both the system sensitivity, which is the number of µm per ADC-bin, and the resolution, typically limited by noise, have been evaluated from these data. The results are shown in table 2.
-7 - The analogue noise on the 40 MHz high-sensitivity channel, which is active for the singlebunch proton beam acquisitions, was estimated at 150 ADC-bins on a turn-by-turn basis in the worst case, which means about 375 µm for the vertical measurements. This noise level needs to be reduced in order to improve the performance of this channel.
A proton beam of 48 bunches per batch with 25 ns bunch-spacing and 1.4 10 11 charges per bunch has been used to characterise the 200 MHz and 40 MHz low sensitivity channels. These results confirm that the sensitivity is about 1.7 µm/bin for the stripline and 2.5 µm/bin for the "shoe-box", which reflect different BPM apertures. Noise levels, including beam position jitter, limit the estimated resolution to 375 µm in trajectory mode and 80 µm in orbit mode, in the worst conditions.
Conclusions
The upgrade of the Multi Orbit POsition System is required to replace the obsolete electronics and to improve the quality of the current measurements in the SPS. A prototype of the new electronic system, based on logarithmic amplifiers and fast digital electronics, has been fully tested in the CERN-SPS, allowing the observation of proton and lead-ion beams under various beam conditions. The beam injection oscillations and the multi-bunch structure have been reconstructed. Using local beam displacements, the resolution of the system was estimated to be 375 µm for turn-by-turn acquisitions and 80 µm in orbit mode, which matches the specifications. The system is now being optimized to improve the sensitivity for low charge beams. Several commercial components have been already tested under radiation, while the ADC and the FPGA have not been selected yet.
A pre-production of the new MOPOS system will be installed in 2014 in an SPS sextant. At least for one year, both systems, the current and the new-one, will run in parallel to validate the new MOPOS electronics. | 2022-06-28T05:12:09.927Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "3317faf82b9c144101246dbb3c08e45a2e24e099",
"oa_license": "CCBY",
"oa_url": "http://iopscience.iop.org/article/10.1088/1748-0221/8/12/C12008/pdf",
"oa_status": "HYBRID",
"pdf_src": "IOP",
"pdf_hash": "3317faf82b9c144101246dbb3c08e45a2e24e099",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
247322984 | pes2o/s2orc | v3-fos-license | An Evaluation of Animal-Assisted Therapy for Autism Spectrum Disorders: Therapist and Parent Perspectives
Although there are a variety of psychological and therapeutic approaches to coping Autism Spectrum Disorder, people with autism still face some challenges in a “normal” therapy setting. Some therapy organizations and services have proposed an alternative therapy approach, Animal-assisted therapy (AAT). The aim of this phenomenological study was to gain a better understanding of the therapists and parents of individuals who choose the alternative approach, AAT. Data were collected through structured interviews with a group of three therapists and four parents. An analysis of the data revealed three main themes; the first theme named the perceived benefits of AAT with three subthemes consisting of physical benefits, sensory benefits and emotional benefits. The second theme is named the way AAT works with subthemes of client-centred therapy and mixed models, and the third theme is potential limitations of AAT. Limitations and suggestions for future research are discussed.
Introduction
Autism Spectrum Disorder (ASD), more commonly known as autism, is an umbrella term for individuals who have difficulty in communicating and socializing with others and exhibit repetitive and restrictive patterns (American Psychiatric Association [APA], 2013). Although they may experience different symptoms, depending on severity, they often show a lack of speech, limited eye contact, prefer to be alone, have difficulty reading others' emotions, repeat words or phrases, flail their arms, rock their body, have restricted interests, are resistant to environmental changes or changes in daily routine, and have unusually delicate sensory systems (de Schipper et al., 2016).
In addition to clinical symptoms, many people diagnosed with ASD also have a number of comorbid conditions such as anxiety, attention deficit disorder (Factor et al., 2017); and depression (Juraszek et al., 2019). Other comorbid disorders may include epilepsy, sleep disorders, sensory processing disorder, obsessive-compulsive disorder, and eating disorder (APA, 2013). Studies have found that on average, 60%-70% of individuals diagnosed with ASD also have a learning disability (Emerson & Baines, 2010).
Around the world, approximately 250 babies are born every minute; this equates to more than 130 million babies in a year. Among these 130 million newborns, 1 in 160 children are diagnosed with ASD. It is estimated that approximately 67 million people worldwide are affected by ASD (Mazurek et al., 2020). ASD is identified as a lifelong neurodevelopmental condition with unknown cause (de Schipper et al., 2016). It was originally thought to be the result of socialization with and around ''refrigerator mothers'' (Douglas, 2014). Autism was believed to be the result of cold mothers who did not show affection or love to their children. However, this notion was soon rejected as neuropsychological research progressed (Schmidt, 2019). Autism has also been falsely linked to vaccinations in the past; a claim that was not only based on falsified science, but also resulted in serious public health consequences (Flaherty, 2011).
Another explanation for ASD was offered by Baron-Cohen (2002), a theory referred to as the ''extreme male brain theory.'' Baron-Cohen (2002) proposed that the brains of women and men are different. While women's brains work to read the emotions of others and are more likely to empathize with social cues, known as socializing behaviours, men's brains work to recognize patterns, also known as systemizing behaviours. Patients with autism (both male and female) are thought to see the world through an extremely male brain. However, the questionnaire used to test the theory is said to have been developed based on gender stereotypes and sex differences, so its applicability is still open to speculation.
It is now widely accepted that ASD is the result of differences in brain function and structure. Using magnetic resonance imaging (MRI), research examining brains between individuals with and without ASD has found that individuals with ASD have abnormalities in grey and white matter in some regions of the brain, including the amygdala, cerebellum, and many other regions (Williams & Minshew, 2007). Studies state that ASD is likely a neural system condition, meaning that symptoms are caused by abnormalities in regionally distributed cortical networks rather than individual brain regions (Ha et al., 2015).
Animal-Assisted Therapy for ASD
For several decades, various therapeutic modalities such as music therapy, play therapy, occupational therapy, speech therapy, and art therapy have been proposed to help people with ASD (Waterhouse, 2013). AAT has also been suggested as an alternative approach for individuals diagnosed with ASD (Altschiller, 2011;Braun et al., 2009). According to Chandler (2017), AAT refers to psychotherapy that incorporates animals as part of a formal therapeutic process. In therapy, the animal serves as a co-therapist to promote the quality and strength of the collaborative relationship between client and therapist. As soon as the client establishes a bond with the animal, the client automatically trusts their therapist as an authority figure. Trust and security allows the client to open up more quickly and benefit from therapy to a greater degree. AAT is not a onetime event, but it is a structured, goal-oriented type of psychotherapy that usually takes place over a number of sessions. Over the course of the therapy sessions, the client's progress towards the goal set is measured and recorded (Altschiller, 2011;Kruger & Serpell, 2010).
It is well-documented that developing a relationship with animals brings untold benefits to humans (Bert et al., 2016;Chitic et al., 2012;Koukourikos et al., 2019;Van-Fleet & Faa-Thompson, 2014). Individuals who perceived pets as warm, compassionate, and caring were less stressed under conditions of chronic psychological pressure (Chandler, 2017). These changes have been shown to elevate overall levels of comfort and trust and decrease the body's stress-related responses. By using the animal as a gateway to building a relationship with the client, therapists may find it easy to relate to client effectively and motivate clients' commitment to the process of therapy (Braun et al., 2009). Science shows that AAT can have immense health benefits, such as recovery from health problems or management of certain medical conditions (Enders-Slegers et al., 2019). It has also been proven that AAT can improve fine motor skills and strengthen a person's core stability and body coordination. Spending time with animals helps improve a person's emotional state and wellbeing (Ç akıcı & Kök, 2020). AAT can also significantly promote positive social behaviours such as sharing, cooperation, and volunteering (Ç akıcı & Kök, 2020;Emerson & Baines, 2010;Kruger & Serpell, 2010).
The use of animals in therapy can be traced back to the last century. Freud, the founder of psychoanalysis, was one of the first therapists to introduce his pet dog into therapy sessions. During the session, he found that patients were more willing to communicate because of the presence of his pet (VanFleet & Faa-Thompson, 2014). In the 1940s, a group of war veterans suffering from PTSD were exposed to a farm environment with the animals. The results showed that their PTSD symptoms decreased with the animals' companion (Koukourikos et al., 2019). In the 1960s, AAT was officially coined by Levinson (Levinson, 1969). Similar to previous studies, Levinson discovered that mentally ill individuals were more likely to socialize with an animal than another human.
Research suggests that AAT has become increasingly popular when it comes to children with ASD. Although often withdrawn, children with autism sometimes relate better to animals than to people. Therapists are better able to make therapeutic connections and strides with them when animals are around (Braun et al., 2009). Studies have found that children with autism interact and engage more in the presence of a therapy animal (Chandler, 2017). Animals have also been found to have calming effects on children when they hold or pet the animal (Koukourikos et al., 2019). The presence of therapy animals may therefore be a way to keep a child attentive to the intervention (Marcus, 2013). Engaging with a therapy animal resulted in better communication skills and prosocial behaviours (Enders-Slegers et al., 2019). Not surprisingly, a decrease in autistic traits through AAT has been validated in previous studies (Chandler, 2017).
Dogs are the most commonly used animals in the therapeutic setting due to their social and affectionate nature. Research shows that children with ASD can benefit from time with a trained therapy dog (Turner, 2011). Therapists can utilize therapy dogs as an emotional bridge to tap into the client's worldview. Playing with a dog can help a child with ASD self-soothe, which can be a great antidote for meltdowns (Turner, 2011). Also, in Martin and Farnum's (2002) study, children with ASD were exposed to either a ball, a stuffed toy, or a therapy dog while being supervised by a therapist. Once the children played with the therapy dogs, they showed more signs of interaction, communication, and attention. They found it easier to be more proactive and accommodating in conversations with the therapist, indicating that the presence of a therapy dog is pleasant. Building trusting and meaningful relationships with therapy dogs can then carry over into relationships outside of the session room (Katcher, 2000).
In addition to dogs, there are a variety of other animals that can be used for AAT, from small animals like cats and guinea pigs to larger ones like horses and dolphins (Bert et al., 2016;Chitic et al., 2012;VanFleet & Faa-Thompson, 2014). Equine-assisted therapy (EAT), a psychotherapy that involves interaction with a horse, can be very beneficial both emotionally and socially. In a recent meta-analytical study, horseback riding was found to be a useful form of therapy in children with ASD (Trzmiel et al., 2019) and helped improve low moods in participants by building their self-confidence (Kern et al., 2011). It has been reported that children with ASD are able to develop motor skills and gain a sense of achievement by steering the horse (Chandler, 2017;Trzmiel et al., 2019). In addition to therapeutic horseback riding, therapy with farm animals is another form of therapy that is thought to be particularly effective for children with ASD. Therapist-led interaction with these friendly four-legged animals in a safe, structured context proves beneficial for social and communication skills. Recently, guinea pigs have become a popular choice when it comes to AAT. In a study by O'Haire et al. (2013), teachers were asked to rate their students on the interaction and play with guinea pigs. The teachers reported that their students showed greater social skills and fewer problem behaviours as a result of the activity.
Although there is already some evidence that AAT helps with autism, most previous scientific work uses a quantitative approach. In this case, a qualitative study is proposed to shed light on the possible positive components of AAT to better understand the process by which it works. Since individuals with ASD may have very limited speech skills, this study aims to explore the phenomenon of AAT from the perspectives of therapists and parents.
Method Participants
The study was based on a qualitative phenomenological design. Parents of children diagnosed with ASD and currently undergoing AAT, and therapists providing AAT for children with ASD were asked if they could participate in the interview. All participants were recruited from the AAT community and autism support groups. Demographic details of the participants are shown in Tables 1 and 2. Notably, all three professional interviewees in this study are special education specialists and certified animal-assisted therapists, with a minimum of a postgraduate degree and many years of experience in areas of psychotherapy, counselling, and coaching. During their past experiences, they integrated common counselling techniques with the non-verbal cues that presented on a child when animals are involved in therapy sessions.
Materials
Two semi-structured interview schemas were developed; one directed at parents and one directed at AAT therapists. The advantage of semi-structured interviews is that they are prepared in advance, which allows the interviewer to be prepared, professional and competent during the interview. Both interview schemas were designed to explore and uncover participants' personal experiences, feelings and opinions about the AAT. In addition, they explored the potential impact of AAT in general.
Procedure
The study was approved by the Psychology Ethics Committee University of Northampton. Participants were additionally advised that all data were handled in accordance with the General Data Protection Regulations (GDPR) 2018 and the UK Data Protection Act 2018 (DPA). Potential participants were approached via contacts within the AAT community and autism support groups. Interested participants were then able to contact us to follow up with further questions. Prior to data collection, information sheets were distributed to all participants detailing the purpose and methodology of the study. The sheet also clearly emphasized that participation in the study was voluntary and provided a detailed overview of the procedures used to maintain confidentiality and anonymity. Specially designed consent forms were also distributed to participants along with the information sheet. All participants signed and dated the consent form and returned it to the researcher, either in hard copy or via digital imaging.
Seven interviews took place over the course of two months. Prior to each interview, participants were reminded that they could opt out at any time during the interview and up to one month after the study was completed. They were also reminded of their right to decline to answer any questions without having to provide an explanation. Interviews were conducted via online video conferencing using social media applications (Skype and Facetime). This was introduced due to the coronavirus pandemic which made it impossible to meet the subjects face to face. Interviews lasted approximately 1 h and were voice or screen recorded. All participants gave consent for the interviews to be recorded. After the interview was conducted, debriefing forms were given to the participants and they were thanked for their participation.
All seven interviews were transcribed verbatim by the researcher. When personally identifying statements were made, such as names, places, etc., these were removed during transcription and replaced with (X). At this point, participants were also assigned pseudonyms so that they would not be identifiable. Only the researcher knows which pseudonyms were assigned to which participant.
Analysis
Qualitative data were analysed using Thematic Analysis in NVIVO (version 12). Braun and Clarke (2006) explain that thematic analysis is a method that allows data to be categorized into themes and subthemes. The analysis used in this study was a bottom-up approach as the codes and themes identified were extracted from close examination of the data, which is a more appropriate alternative than having an initial hypothesis and finding codes and extracts to support it (Maguire & Delahunt, 2017). The collected data was further analysed using the 6-step approach: 1. become familiar with the data, 2. generate initial codes, 3. identify themes, 4. review themes, 5. define and categorize themes, 6. create report. The first step involved reading the interview transcripts thoroughly to become familiar with the data. This involved listening to the voice recordings and transcripts individually several times. Once the coding process was complete, the codes were further reviewed and examined to interpret the data and create themes. Once the codes and themes were identified, the themes were reviewed and subthemes were created. Once the themes and subthemes were completed, the results were written in relation to the extracted data.
Results
We have constructed three major themes in our analysis, and we will present our analysis by using themes as headings: perceived benefits of AAT, how AAT works, and potential limitations of AAT.
Perceived Benefits of AAT
This theme includes narratives about the perceived benefits of AAT, as described by therapists and parents. All participants expressed positive views about the effectiveness and benefits of AAT for children with autism. Subthemes developed were: (1) physical, (2) sensory, and (3) emotional benefits. AAT was perceived to have many benefits that are often interrelated.
Subtheme 1: Physical Benefits
Overall, the physical benefits of AAT were emphasized by both therapists and parents. By physical benefits, we mean both being with the animals and being in a safe, therapeutic space. The role of the animal is central to AAT and is central to the sessions. The physical presence of animals as well as the emotional benefits they provide, such as friendship, love and companionship, are key benefits of AAT.
… it's a sense of companionship… (Kate, Parent). … the animals give a sense of purpose and love… (Gemma, Parent). An animal that offers unconditional, non-judgmental friendship. (Erin, therapist) Another benefit of AAT directly related to the presence of the animal is that the presence of the animal and therapist creates a safe space for clients. Participants indicate that the AAT environment and being with the animal provides a sense of a safe and secure place. One therapist (Erin) highlighted the ''unconditional, non-judgmental friendship'' that develops between the animal and the client, this may be another link to the importance of the animal during AAT. This allows clients to open up about personal issues.
I think that's one of the most important things too, the non-judgmental and their companionship, that they don't have to talk or anything. But if they do, the dog won't change their behaviour. Like a friendship. It's just like he's just a rock. (Erin, Therapist). She's able to talk about some of the challenges that she's facing non-confrontationally, which, um, allows for an open conversation about possible solutions or strategies that she could consider. Um, and there's gives just a safe, protected place where there's no judgment. (Annie, Parent)
Subtheme 2: Sensory Benefits
In the interviews, participants also emphasized the importance of sensory elements in AAT. Although each client and session is unique, there are many similarities when sensory elements are introduced. This can range from touching animals to being attentive to the environment. The therapist (Sophie) believes that animals are able to provide a dynamic multisensory experience, as different animals have different levels of arousal, which offers the potential for arousing and de-arousing influences.
Additionally, one therapist (Erin) highlighted the biological perspective of sensory benefits. Sensory stimulation releases oxytocin, which can have a calming effect.
Games and activities provide sensory stimulation for children with autism. The dog can be trained in various ways to help children with autism through different games and activities, such as tug of war, hide and seek, and massage. As a result of therapy, I have seen a reduction in meltdowns or tantrum, such as, humming and clicking noises, spinning objects, hand-posturing, roaming, and repetitive jumping, because the child was more aware of his or her surroundings in the presence of the dog. Additionally, ATT does not target the core symptoms of autism, but rather acts on some psychiatric symptoms related to autism, such as hyperactivity, aggression, irritability, and self-injury. However, there are different levels of arousal in different animals. Sometimes animals also trigger arousal or sensory overload leading to repetitive behaviour. Children with autism need a highly predictable and repeatable environment, referred to as a sensory social routine. (Sophie, Therapist) Children with autism who participated in therapeutic horseback riding showed greater sensory seeking, sensory sensitivity, and less irritability and hyperactivity. While riding a horse, restless children are distracted, which makes them less jumpy. Therapy animals enjoy being petted by children, which increases feelings of affection in children with autism. Cuddling and touching animals cause the release of oxytocin, which calms the child. (Erin, Therapist)
Subtheme 3: Emotional Benefits
Overall, the emotional benefits of AAT were highlighted by all participants. The perceived benefits were varied and included: building self-confidence, acceptance, bringing out hidden qualities and behavioural learning outcomes. This variety of benefits suggests that AAT is perceived as positive and effective for emotional wellbeing and development. Participants said that therapy animals did not reject people with autism, but made them feel accepted. It could be surmised that the clients' feeling of increased confidence and acceptance is the reason they are able to show their ''hidden qualities'' in the AAT. Two therapists in particular emphasized AAT's ability to ''go deeper into the person'' (Sophie, Therapist) and to ''show people's caring qualities'' (Donna, Therapist). In addition, they commented that therapists in AAT often see different sides or qualities of people than their usual carers. Individuals with autism often have problems with their social interactions, leading to experiences of rejection by others. This may also be related to the ''safe spaces'' of physical benefits subtheme.
Not in the way people make judgments (Donna, therapist) The connection between the animals and my son is amazing, the love is unconditional. It just makes it more effective because he feels like he is accepted. (Christine, Parent).
This could also be related to another perceived benefit of AAT, which is to combat loneliness.
And when you think about older people, maybe that connection and not being touched is another name. That's about loneliness. It's not a personal connection, it's more of an attendant connexion that's made. (Donna, Therapist). Its very positive because a lot of them experience loneliness. (Erin, Therapist).
Building self-confidence is one of the key benefits highlighted by all participants. This is also closely related to participants' reports of improved self-worth and focus. These aspects are inextricably linked. One therapist (Sophie) said that AAT can have an impact on an individual's self-esteem.
In sessions, individuals are allowed to develop and grow, which can then give them a sense of increased locus of control and self-esteem because they realise what they can achieve in life. So, it's more conducive. (Sophie, Therapist) Individuals with autism can display significant challenging behaviour which can be difficult to manage, as illustrated by one parent's report: … she can be challenging in some areas… intense. So, once she has an idea in her head, it just becomes this focus… the social aspects about the conditions are very challenging for her. Controlling her emotions and impulsivity is also very challenging for her and the rest of our family. (Annie, Parent) A significant motivator for parents who sent their children to AAT was that they were able to learn behavioural control, which they themselves struggled to teach their children, and to manage their challenging behaviours. Parents reported that they were very positive and satisfied with the progress made. Specifically, learning new behaviours, the ability to recognize emotions and deal with anger were cited as learning outcomes by parents. These learning outcomes align with the goals of AAT, as stated by one of the therapists (Donna) to be a central outcome of AAT: clients learn the consequences of their behaviours on others, for example, as animals mirror and respond to clients' emotions. A key outcome and benefit of this therapy is then perceived to be reflected in daily life outside of the therapeutic setting, as indicated by one mother (Christine) who said that increased behavioural control helps her to deal with challenging behaviour in other settings.
The animals will either mirror her emotions or respond to her emotions. (Donna, Therapist) They'll either walk away if it's a horse… or they'll react with body language. If it's a dog, confusion, appeasement, then we can use body language to facilitate the client better understand the impact of that anger. And then we can start to work on how they can regulate. And the reward is that the animal will read and understand them best. They regulate their emotions and keep themselves calm. (Donna, Therapist) … seems to have helped him regulate his behaviour and his emotional outbursts, which then helps me regulate his behaviour. (Christine, Parent) Furthermore, AAT provides clients with the opportunity to acquire new knowledge in applied situations, which in turn allows clients to balance independence and responsibility. While these two aspects may sound contrasting at first, they complement each other in AAT: for example, one parent (Kate) explained that while her child is capable of making his own decisions, he must still put the needs of the animal first because he is the caregiver. In addition, AAT provides an opportunity to gain new knowledge not only about the animals, but the knowledge and skills gained can be applied to oneself.
This horse needs to be groomed because it is very muddy. When does it need to be showered? Do you know how often? So, you can relate to the general self-care of the individual and we say that especially maybe with people who are suffering. (Donna, Therapist) So we think about all these things that we might put under the heading of self-care in the sense that we look at the animals that we can eat, but we can then generalize what we discover back to that person. (Donna, Therapist) There's another group of things that are more actionoriented, that are arranged so that you learn to take care of the animals. So, something about what do you feed them? How do you groom their fur? What does the environment need to be like, what do they need in their daily kind of routines? (Donna, Therapist)
Theme 2: How AAT Works
The second theme includes narratives about how ATT works differently than other forms of therapy. Participants described their experiences of AAT and how they believe it differs from other forms of therapy in terms of how it is delivered and perceived by therapists and users. The subthemes that were developed were: (1) client-centred therapy and (2) mixed models.
Subtheme 1: Client-Centred Therapy
Overall, therapists emphasized client-centred therapy. This may also be related to confidence building which, as mentioned earlier, is a key element of some therapy approaches. This is closely related to the participants' description of the focus on the client's needs.
So obviously everything is based on the client's needs and the goals that they want to set for themselves. (Donna, Therapist) To ensure that the patient is at the centre of the therapy process, therapists reported that each AAT session is carefully planned and that a therapy plan includes an essential pre-assessment and then a uniquely designed session plan. This was felt by therapist to be important to the success of therapy and related to the needs and concerns of clients. For example, therapists need to ensure that clients respond well to an animal and do not react negatively. For example, some children with autism have significant problems with chickens because they flap. Therefore, the suitability of animals as part of therapy must be fully assessed beforehand.
Having an animal-assisted intervention might not ideal for some children. Sometimes some your children do not like animals. That could cause some triggering. So, the first point to make really is that you have to have a careful assessment process. (Donna, Therapist).
Participants said that it cannot be assumed that clients talk to all animals or that all animals are beneficial to clients; therefore, they emphasize the assessment process. Specifically, the assessment process is essential as it allows sessions to be tailored to each client's individual needs, whether emotional support, learning, understanding or general support. For example, one therapist stated that assessment is important because of the need to consider safety for the client, the therapist and the animals. So, we can't assume anything. So have to conduct a pretty thorough assessment before we introduce animal-assisted people in any form, whether it's therapy or learning. (Donna, Therapist)
Subtheme 2: Mixed Models
In addition, one therapist discussed the importance of different treatment models; highlighting the complexity of different models for delivering therapy services. One therapist (Donna) highlighted the benefits of using different models. In addition, she described different scenarios in which these models would be used; this highlights the importance of pre-assessment prior to the AAT. So, one model would be what we call the triangle model where we have a client, an animal, a therapist. OK, which is good in certain circumstances. Right. But if we have a client who has quite significant needs, and if we're working with large animals or more than one animal, then we want to have two therapists with them. So that's called the diamond model. OK. And one of the advantages of the diamond model is that you have two pairs of eyes. So if you ask the question, what makes a session more effective? It might be because of which model you use. OK. If you have one person earlier in the session who has to watch the animal and the client, that therapist becomes dysregulated, it's very difficult. We split attention. Yes. And the whole session to notice what's going on. (Donna, Therapist) That's called the star model. And that's where you work with people who all need to be supported by a caregiver... quite common when you're working with hospitals. Yes. We have occupational therapists and physiotherapists working together… dogs… the handler. The client may well have a personal assistant or support worker with them as well. (Donna, Therapist)
Theme 3: Potential Limitations of AAT
This theme collates narratives about perceived limitations of AAT. The key limitation is the risk or concern that the animals might trigger autistic traits, possibly leading to a meltdown, agitation, anger, or upset the client. Two therapists emphasize the limitations that can be caused by the animal: whether it is the type of animal used/present, or the number of animals used/present during the session.
… chickens make people laugh because they're funny, the way they kind of run around and do things…, you have to be a little bit careful because when chickens flap… of course that can be a trigger. (Donna, Therapist) … usually not so much dogs because they can make people quite nervous sometimes (Donna, Therapist) When you take on a lot of animals, I think it can be a bit of a sensory overload sometimes. (Sophie, Therapist) However, therapists indicated that this limitation can be overcome and controlled to some extent through thorough pre-assessment and session planning.
Discussion
The purpose of the current study was to understand animalassisted therapy and whether it is perceived by therapists and parents as an effective treatment for children with ASD. Using semi-structured interviews and analysing them with Thematic Analysis, the results suggest that AAT is perceived as effective for patients with ASD, both from the therapists' and parents' perspectives. The findings of the current study ties into a wealth of studies on ASD and the effectiveness of AAT that other research has reported. Specifically, the findings of Theme 1 provide important insights into the usefulness of AAT and the benefits that can be viewed through a biopsychosocial lens that have been highlighted in other studies (Becker et al., 2017;Chitic et al., 2012;VanFleet & Faa-Thompson, 2014). The main items mentioned in relation to perceived benefits were 'Physical benefits', 'Sensory benefits' and 'Emotional benefits', which is why the subthemes were named as such.
Participants are aware of the positive components of AAT and the impact it can have on children's lives. The biopsychological aspect: for example, stroking animals increases the release of endorphin, a chemical in the torso that often allows an individual to feel good, and can decrease stress hormones, including cortisol, norepinephrine and epinephrine (Braun et al., 2009;Marcus, 2013) (biological) interacting with an animal can reduce disturbed mood and improve an individual's overall quality of life (Chandler, 2017) (psychological), and can help build relationships (Morrison, 2007) (social). The narratives of the participants in the current study reflect these benefits and indicate that they are aware of them. These findings tie into a study by Morrison (2007), although his study did not focus specifically on individuals with autism. Morrison (2007) described the significant health benefits that AAT can influence, including improvement in blood pressure, heart rate and self-reports indicating improvement in depression, anxiety, quality of life, and loneliness. It may further suggest that there is a beneficial perspective to AAT that can influence an individual in more than one aspect. Nonetheless, the participants in this current study acknowledge that the benefits may be different for each individual. This provides a further link to Theme 2, 'How AAT Works' and the different therapeutic models that can be used. Specifically, the findings in Theme 2 provide a comprehensive description of how AAT works and the different processes that can be considered when implementing AAT. Participants generally discussed that AAT is focused on a client-centred approach to therapy and the different models that can be considered for different participants. Which has been highlighted in previous studies (Altschiller, 2011;Chandler, 2017). The importance of client-centred therapy and the different models of therapy can be considered, which is why they were made subthemes.
Participants described the client-centred approach as the most important element of AAT. This can be related to previous researchers discussed in the introduction. For example, participants described AAT as being based on the client's needs and the future goals they would like to set. This links to a previous study by Altschiller (2011), although it explored a more holistic view of AAT. Altschiller (2011) described the different approaches to AAT and the positive effects that animal companionship and AAT have on different individuals, including individuals with autism. This may be further evidence that the therapeutic approach and delivery can have an impact on the client experience.
However, the participants in this current study recognize that the design of sessions is different for each individual, depending on their diagnosis and needs. Specifically, the findings from Theme 3 provide an overarching description of the potential limitations of AAT. The main limitations highlighted were the potential risk or concern of animals triggering clients, which could lead to meltdowns, agitation, anger or upset the client. This limitation has also been highlighted in other studies (Chandler, 2017). O'Haire (2013) also found the possible triggers that could be influenced by the animals involved during AAT. These may include the sounds of the environment or the kind of animals that need to be considered during the treatment.
The current findings suggest that the therapists and parents who participated in this study are aware of the positive components of AAT and the impact it can have on children's lives. Previous studies have reported their findings on how AAT works. It has been reported that interacting with an animal can decrease disturbed mood and improve a person's overall quality of life (Chandler, 2017). Animals have been found to increase the release of endorphin, which is a chemical in the trunk that often allows a person to feel good. Furthermore, clients who participate in AAT may also experience a decrease in stress hormones, including cortisol, norepinephrine and epinephrine (Braun et al, 2009;Marcus, 2013).
In addition, there are a variety of opinions, experiences, and explanations that can describe how ATT can impact and improve the life of a child with autism. The results showed that the participants had similar responses and rationales regarding AAT and the effectiveness of the therapy. Through the participants' responses, the results can be categorized into the biological, psychological, and social impacts (Becker et al., 2017;Chitic et al., 2012;VanFleet & Faa-Thompson, 2014). Compared to these previous studies, participants emphasized the role of the human-animal bond in their rationales for the personal experience and usefulness of AAT. Despite differences in location, cultures, and environments, all participants had significantly similar responses. Additionally, some participants did not indicate the severity of the disorder they work/live with, this could be interesting for future research as to which severity of autism is most affected. Another aspect that could be explored would be to examine the usefulness of AAT with small or large animals. Future research could examine the effects of AAT on individuals with a variety of developmental and psychiatric disorders. Aside from the benefits such research could bring to therapy practice, this study would also add to the limited number of studies examining the perceptions of ATT from the angle of both therapists and parents of children with autism.
Conclusion
In summary, this study reports on parents' and therapists' perceptions of AAT for children with autism. Therapists and parents noted the positive effects of animal-supported approaches on children with autism, particularly in relation to sensory, emotional, and physical functioning. Nonetheless, the study identifies that AAT is not a cure, but it may help alleviate some symptoms associated with ASD. The study also identifies the potential limitations associated with AAT, including the impact it may have on the client's emotional state. We believe that further qualitative and quantitative research is always needed, and that more programs focusing on the therapeutic use of animals in child therapy are also needed.
Author Contributions Both authors contributed to the design and implementation of the research, to the analysis of the results and to the writing of the manuscript.
Funding This study was not supported by any grant or fund.
Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/ or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Consent to Participate Informed consent was obtained from all individual participants recruited in the study.
Consent for Publication All authors read the final version of the paper and give full consent for this paper to be published. | 2022-03-10T14:55:30.186Z | 2022-03-10T00:00:00.000 | {
"year": 2022,
"sha1": "a53b9239be281034f98bdfaf6152ab14cd772e13",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12646-022-00647-w.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "0f687ceff917e823bd4656ee8b689bf69bdb1b54",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219981052 | pes2o/s2orc | v3-fos-license | Graph Neural Networks and Reinforcement Learning for Behavior Generation in Semantic Environments
Most reinforcement learning approaches used in behavior generation utilize vectorial information as input. However, this requires the network to have a pre-defined input-size -- in semantic environments this means assuming the maximum number of vehicles. Additionally, this vectorial representation is not invariant to the order and number of vehicles. To mitigate the above-stated disadvantages, we propose combining graph neural networks with actor-critic reinforcement learning. As graph neural networks apply the same network to every vehicle and aggregate incoming edge information, they are invariant to the number and order of vehicles. This makes them ideal candidates to be used as networks in semantic environments -- environments consisting of objects lists. Graph neural networks exhibit some other advantages that make them favorable to be used in semantic environments. The relational information is explicitly given and does not have to be inferred. Moreover, graph neural networks propagate information through the network and can gather higher-degree information. We demonstrate our approach using a highway lane-change scenario and compare the performance of graph neural networks to conventional ones. We show that graph neural networks are capable of handling scenarios with a varying number and order of vehicles during training and application.
I. INTRODUCTION
Many reinforcement learning approaches in decisionmaking for autonomous driving use vectorial representations as inputs -e.g. a list of semantic objects or images. However, this requires a pre-defined input-size and order when using conventional deep neural networks. As a consequence -in semantic simulations -the maximum number and order of the vehicles have to be defined.
The number and order of vehicles in real-world traffic situations can change rapidly -as vehicles come into and leave the field of view or vehicles overtake each other. Thus, each situation requires an assumption of the maximum number of vehicles and also in which order they should be sensed by the vehicle. Of course, an arbitrary order of the vehicles could be passed to conventional neural networks during training. However, this would require the conventional neural network to see all possible combinations during training in order to handle this arbitrary order. On the contrary, graph neural networks (GNNs) are invariant to the number and order of vehicles as they directly operate on graphs. This makes them ideal candidates to be used as a decision-making entity in autonomous driving. In this work, we combine continuous actor-critic (AC) reinforcement learning methods with GNNs to enable a number and order invariant decision-making for autonomous vehicles. AC reinforcement learning methods exhibit stateof-the-art performance in various continuous control problems [1,2]. Additionally to the before-stated advantages, GNNs also introduce a relational bias to the learning problem -due to the connections between vehicles in the graph. Thus, relational information is provided explicitly and does not have to be inferred by using collected experiences. Moreover, GNNs propagate information through the graph due to their convolutional characteristics. In this work, we use a 'GraphObserver' that generates a graph connecting the n-nearest vehicles with each other and an 'Evaluator' that outputs a reward signal and that determines if an episode is terminal. Using the 'GraphObserver', the 'Evaluator' and the AC algorithm, the ego vehicle's policy can now be iteratively evaluated and improved.
The main contributions of this work are: • Using GNNs as networks in AC methods for decisionmaking in semantic environments, • comparing the performance of conventional deep neural networks to using GNNs and, • performing ablation studies on the invariance towards the number and order of vehicles for both network types.
A. Graph Neural Networks
Graph neural networks (GNNs) are a class of neural networks that operate directly on graph-structured data [3]. A wide variety of graph neural network architectures have been proposed [4,5,6,7]. These range from simple graphs [4], to directed graphs [5], to graphs that contain edge information [6], up to convolutional graphs [7].
In this work, we use the approach introduced by Battaglia et al. [8] that uses a directed graph with edge information. The graph G = (N, E) is defined having nodes n i ∈ N and directed edges e ij ∈ E from node n i to n j . Both -the nodes and edges -contain additional information. The node value is denoted as h i for the i-th node and the edge value as e ij connecting the i-th with the j-th node. The node value h i contains e.g. the vehicle's state and the edge value e ij relational information between two nodes. In each layer k of the GNN, a dense node neural network layer is applied per node and a dense edge neural network layer per edge.
Each GNN layer has three computation steps: First, the next edge values e k+1 ij are computed using the current edge values e k ij , the from-node values h k i and the to-node values h k j . These values are concatenated and passed into a (dense) neural network layer f k χ (·) that is parameterized by χ. This can be expressed as Next, all incoming edge values e k+1 ij to the node n j are aggregated. In this work, we use a sum as the aggregation function. Thus, the node-wise aggregation of the edge values can be written as with M being the number of incoming edges to node n j . Finally, the next node values h k+1 i are computed using a (dense) neural network layer f k ψ (·). This can be formulated as for the j-th node. These three steps are performed in every layer with each layer having (dense) network layers f l ψ (·) and f l χ (·). In this work, we do not use a global update as proposed in [8].
B. Reinforcement Learning
Reinforcement learning (RL) is a solution class for Markov decision processes (MDPs). Contrary to dynamic programming or Monte-Carlo methods, RL does not require knowledge of the environment's dynamics but only learns from experiences. RL solution methods can be divided into value-based, policy-based, and actor-critic (AC) approaches.
AC methods have an actor that learns a policy π(s) and a critic that learns a state-value function V (s) with s being the state. Most AC methods use a stochastic policy π(s) that has a distributional output-layer. In this work, we use an actor network that outputs a normal distribution N (µ, σ) with µ being the mean and σ the standard deviation. The state-value function can either be learned using temporal differences (TD) learning or Monte-Carlo methods [9]. We utilize TD learning to learn the state-value function V (s). The policy π φ (a|s) and the state-value function V ξ (s) are approximated using deep neural networks and, therefore, are parameterized by the network weights φ and ξ.
The policy update for the actor using TD learning is defined as with r t being the reward, s the state, a t the action and V ξ (s) the approximated state-value function at time t. Equation 4 increases the (log-) likelihood of an action if the expected return is large and decreases it otherwise. In this work, we use the proximal policy optimization (PPO) actor-critic algorithm that shows state-of-the-art performance in various applications [2,1]. The PPO uses a surrogate objective function that additionally clips Equation 4 to avoid large gradients in the update step.
The work is further structured as follows: In the next section, we will provide related work of RL, GNNs, and the combination of both. In Section III, we will go into detail of how we apply RL and GNNs for decision-making in autonomous driving. And finally, we will provide experiments, results and give a conclusion.
II. RELATED WORK
In this section, we will outline and discuss related work of graph neural networks (GNNs), actor-critic (AC) reinforcement learning and the combination of both.
A. Reinforcement Learning
Reinforcement learning (RL) solution methods can be categorized in three categories: value-based, policy-based, and actor-critic methods [9]. Of these three categories, the combination of value-based and policy-based RL in the form of AC methods have shown state-of-the-art performance in continuous and dynamic control problems [10,11,12,13].
The trust region policy optimization (TRPO) algorithm restricts the updated policy to be close to the old policy [10]. This is achieved by using the Kullback-Leibler (KL) divergence as a constraint in the optimization of the policy network. They additionally prove that the TRPO method exhibits monotonically improving policies. Since it is computationally expensive to calculate the KL divergence in every policy update, the proximal policy optimization (PPO) has been introduced [11]. Instead of using the KL divergence, the PPO uses a clipped surrogate objective function. The optimization of the clipped surrogate objective function can be done using unconstrained optimization and is less computationally expensive.
The soft actor-critic (SAC) method introduces an additional entropy term that is maximized [14]. The SAC method, thus, tries to find a policy that is as random as possible but still maximizes the expected return. As shown in their work, this yields the advantage that the agent keeps trying to reach different goals and does not focus (too early) on a single goal. However, the SAC method uses action-value functions Q(s, a) instead of a state-value function V (s). As this would introduce additional complexity combing GNNs with the SAC algorithm, we use the PPO algorithm in this work.
When using conventional neural networks the maximum number of vehicles and their order has to be specified. Therefore, either a maximum number of vehicles or handcrafted features are often utilized. Isele et al. [15] discretize an intersection using a grid world and use this as input for the neural network. However, some information is lost due to discretization errors.
Huegle et al. [16] propose to use deep sets (DS) in order to mitigate the changing number and order of vehicles. DS is invariant to the number and order of the inputs. However, DS does not contain any relational information and the network has to learn these implicitly. Contrary to that, GNNs can directly operate on graphs and utilize contained relational information.
Graph neural networks and reinforcement learning have been used together in various applications. Wang et al. [17] propose NerveNet where GNNs are used instead of conventional deep neural networks. By applying the same GNN to each joint, such as in the humanoid walker the GNN learns to generalize better and to handle and control each of these joints.
GNNs have also been used to learn state-representations for deep reinforcement learning [18,19].
Hgle et al. [3] propose a deep scenes architecture, that learns complex interaction-aware scene representations. They show the deep scenes architecture using DS and GNNs. They use the GNN in combination with a Q-learning algorithm that directly learns the policy.
Contrary to their work, we use AC methods to learn continuous and stochastic policies for the ego vehicle. Furthermore, we conduct studies on the robustness of conventional and graph neural networks. Contrary to Q-learning, the PPO algorithm is an on-policy method that can lead to a more efficient exploration of the configuration space. The risk of becoming stuck in local optima can be lowered by e.g. additionally optimizing the expected entropy as the SAC algorithm does.
III. APPROACH
This section describes how the graph is built, outlines the architecture of the actor-and critic-networks, and explains how graph neural networks (GNNs) and actor-critic (AC) reinforcement learning are combined for decision-making in autonomous driving.
In the semantic environment are M vehicles with each having a state s i that e.g. contains the velocity and the vehicle angle θ. A 'GraphObserver' observes the environment from the ego vehicle's perspective and generates a graph with nodes n i ∈ N that are connected by edges e ij ∈ E with i and j being the node indices. Vehicles that are within a threshold radius r near are included in the graph generation. All vehicles within this radius are connected to their nnearest vehicles.
The node value h i of the node n i contains intrinsic information of the i-th vehicle in form of a tuple x, y, v x , v y with x, y being the cartesian coordinates and v the velocity components. The edge value h ij of the edge e ij between node n i and n j contains relational information in form of relative distances composed of two components d x , d y . The structure of the graph G is depicted in Figure 2.
A further component -the 'Evaluator' -determines the reward signal r t for each time t and whether an episode is terminal. The reward signal r t is composed of scalar values that rate the safety and comfort of the learned policies. It can be expressed as r t = r col + r goal,reached + r goal,dist + r vel + r act (5) rating the collisions, reaching the goal, the distance to the goal, deviating from the desired velocity and penalizing large control commands, respectively. The goal is reached once the ego vehicle has reached a defined state configuration -a pre-defined range of x, y, v and θ. The reward signal r t is weighted to avoid collisions and to create comfortable driving behaviors. As outlined in Section I, we use the GNN approach proposed by Battaglia et al. [8] with slight modifications. Contrary to their work, we do not make use of global node features. The GNN directly operates on graphs that are structured as in Figure 2.
In the proposed approach, the actor network of the PPO directly takes the graph G as the input and maps it to output distributions of the control commands -the steering-rate δ and the acceleration a. A normal distribution N (µ, σ) for each of the control commands is used. By default, the GNN outputs a value for each vehicle in the graph. As we are only interested in controlling the ego vehicle, we only use the node value of the ego vehicle h ego . This node value is then passed to a projection network that generates a distribution for the steering-rate δ and the acceleration a. The projection network has dense layers and takes the node value of the ego vehicle h ego as input. The projection network builds distributions using the means [µ 0 , . . . , µ k ] and the standard deviations [σ 0 , . . . , σ k ] of each control command with k being the number of control commands. In order to limit the control commands, we additionally use a tanh(·) squashing layer to restrain the network outputs to a certain range. During training, the distributions are sampled to explore the environment and during application (exploitation) the mean µ is used. This network represents the policy π θ (·) of the PPO algorithm with θ being the neural network parameters. The architecture of the GNN actor network is depicted in Figure 3 (a). The critic network of the PPO has a similar architecture to the actor network. It also directly operates on the graph G and selects the node value of the ego vehicle h ego in the output layer of the GNN. The value of the ego vehicle node h ego is then passed into a dense layer and mapped to a scalar value that approximates the expected return. Using temporal difference learning, the state-value function V ξ (s) with s being the state and ξ being the neural network parameters is learned.
The node value of the ego vehicle h ego in the GNN has always the same vectorial size regardless of the number and order of the vehicles in the semantic environment. Unlike conventional neural networks, the maximum number of vehicles for the observation does not have to be predefined and fixed when using GNNs. The only additional hyper-parameters that are introduced are added in the graph generation -the threshold radius r near and with how many vehicles each vehicle is connected. However, information of not directly connected vehicles can still be propagated through the graph due to the convolutional characteristics of GNNs.
In the next section, we conduct experiments, evaluate the novel approach, and compare it to using conventional neural networks.
IV. EXPERIMENTS AND RESULTS
In this section, we conduct experiments and present results of our approach using graph neural networks (GNNs) as function approximator within the proximal policy optimiza-tion (PPO) algorithm. We compare the proposed approach with using conventional deep neural networks for the actor and critic network. As an evaluation scenario, we chose a highway lane-changing scenario with a varying number of vehicles. Additionally, we conduct ablation studies that evaluate the generalization capabilities of both approaches.
All simulations are run using the BARK simulator [20]. The ego vehicle is uniformly positioned on the right lane and its 'StateLimitsGoal' goal definition is positioned on the left lane. Thus, the ego vehicle tries to change the lane in order to achieve its goal. All vehicles besides the ego vehicle are controlled by the intelligent driver model (IDM) parametrized as stated in [21]. These vehicles follow their initial lane and do not change lanes. The vehicles -including the ego vehicle -are assigned an initial velocity that is sampled in a range of [10m/s, 15m/s]. The scenario used for training and validation is depicted in Figure 4.
The reward signal r t for time t is a weighted sum of the following terms: • r goal,dist squared L2 distance to the state-goal, • r vel squared deviation to the desired velocity, • r act squared and normalilzed control commands of the ego vehicle, • r col = −1 collision with the road boundaries or other vehicles and, • r goal,reached = +1 if the agent reaches its goal. The reward signal is additionally weighted to prioritize safety over comfort -r col is weighted more prominently than the other terms. An episode is counted as terminal once the defined goal has been reached or a collision with the ego vehicle has occurred. The 'StateLimitsGoal' definition checks whether the vehicle angle θ, the distance to the centerline r c , and the desired speed v des are within a pre-defined range.
As we focus on higher-level and interactive behavior generation, we neglect forces such as friction and use a simple kinematic single track vehicle model as used in [22]. This vehicle model has been parameterized with a wheelbase of 2.7m. To avoid large integration errors (especially of the IDM) we choose a simulation step-time ∆t = 0.2s.
The actor and critic networks are optimized using the Adam optimizer with a learning rate lr = 3e − 4. In this work, the actor and critic networks have identical structures. For the GNN we choose a layer depth of l = 3 with each node and edge layer having 80 neurons and for the conventional neural network (NN) we use dense layers having [512, 256, 26] neurons. All layers in this work use ReLU activation functions to mitigate the vanishing gradients problems of neural networks. In the next section, we will compare the performance of both networks used in the PPO algorithm.
A. Conventional vs. Graph Neural Networks
In this section, we compare the performance of conventional neural networks (NNs) with graph neural networks (GNNs). The number of vehicles varies in every scenario as the positions of the vehicles are uniformly sampled on the In the 'Ablation' study, the observations of the vehicles are perturbed by changing the order of the observations. All approaches have been evaluated using 100 scenarios.
Both configurations have been trained using the same hyper-parameters. For the NN we use a 'NearestAgentsObserver' that senses the three nearest vehicles, sorts these by distance to the ego vehicle, and concatenates their states into a 1D vector. The ego vehicle's state is added as the first state to this 1D vector. The GNN uses the before-described 'GraphObserver' that connects each vehicle to its nearest neighboring vehicles. We use n = 3 for the number of nearest vehicles and a threshold radius r near = 50m. These are the only additional hyper-parameters that are required when using GNNs. Table I shows the success-and the collision-rates for both approaches. Both -the conventional and the graph neural network -are capable of learning the lane-changing scenario well. In the 'Nominal' case both networks have almost the same success-rate. However, additional to a higher successrate, the GNN also has a lower collision-rate. The relatively high collision-rate can be justified that we do not check the scenarios for feasibility. This means that some of the scenarios might not solvable due to the steering-rate and the acceleration of the ego vehicle being limited. Thus, also optimal solutions might still cause collisions.
B. Ablation Studies
We conduct studies on how well conventional neural networks (NN) and graph neural networks (GNNs) cope with a changing order of vehicle observations. We use the trained agents that have been used for evaluation in Table I and the scenario shown in Figure 4. The scenarios have a varying number of vehicles and once a vehicle reaches the end of its driving corridor it is removed from the environment. This results in a varying number of vehicles in the scenario. Additionally, we now add noise to the sensed distances to other vehicles. This has the effect that the observations are being changed in both observers. The changing order and number of the vehicles models sensing inaccuracies that are persistent in the real-world due to e.g. sensor errors and faults.
In the 'NearestAgentsObserver' adding noise to the distance results in perturbing the concatenated observation vector as the order of the vehicles is changed. For the 'GraphObserver' the perturbed distances change the edge connections of the graph resulting in the vehicles not only being connected to their nearest vehicles.
The results of the ablation study are shown in Table I. The GNN shows a higher robustness towards the order of the vehicles. The success-rate remained high and the collisoinrate only increased slightly. Whereas in the conventional neural network, the success-rate decreased and the collision-rate increased significantly. Due to several layers and convolution characteristics of GNNs information can be propagated over several nodes in the network -e.g. from vehicles that the vehicle is not directly connected to. This shows a higher invariance of GNNs towards perturbations in the observation space. Additionally, as the ego vehicle's state is always in the first position when using the 'NearestAgentsObserver', the NN still roughly can infer which actions to take regardless of the other vehicles.
V. CONCLUSION
In this work, we showed the feasibility of graph neural networks for actor-critic reinforcement learning used in semantic environments. Both -conventional and graph neural networks -were able to learn the lane-changing scenario well. We compared the performance of GNNs to conventional neural networks and showed that GNNs are more robust and invariant to the number and order of vehicles.
We outlined advantages that make using GNNs more favorable than using conventional neural networks. GNNs do not require a fixed maximum number of inputs and are invariant towards the order of the vehicles in the environment. They use relational information that is available in the graph and do not implicitly have to infer these relations. Another advantage of GNNs is, that they make it possible to split intrinsic and extrinsic information. For example, the nodes can store the vehicle information and the edges the relational information between two vehicles.
We also performed ablation studies in which we changed the order of the vehicles. This showed that GNNs generalize better and are more invariant to the order of the vehicles compared to conventional neural networks. The success-and collision-rate of the GNN only dropped slightly whereas more significant changes are seen when using conventional neural networks.
In further work, additional edges to boundaries, traffic entities (such as traffic lights), and goals could be added and investigated. This could drive the approach towards a more universal behavior generation approach. | 2020-06-24T01:01:11.648Z | 2020-06-22T00:00:00.000 | {
"year": 2020,
"sha1": "1ca279c51bb4e3e7de0c5c565a9e4df8bc9ce39e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2006.12576",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1ca279c51bb4e3e7de0c5c565a9e4df8bc9ce39e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.