Photocatalytic Synthesis of Polycyclic Indolones
Abstract In this work, a photocatalytic strategy for rapid and modular access to polycyclic indolones starting from readily available indoles is reported. This strategy relies on the use of redox‐active esters in combination with an iridium‐based photocatalyst under visible‐light irradiation. The generation of alkyl radicals through decarboxylative single-electron reductions enables intramolecular homolytic aromatic substitutions with a pendant indole moiety to afford pyrrolo‐ and pyridoindolone derivatives under mild conditions. Furthermore, it was demonstrated that these radicals could also be engaged in cascades consisting of an intermolecular Giese‐type addition followed by an intramolecular homolytic aromatic substitution to rapidly assemble valuable azepinoindolones.
Indoles are prevalent motifs in bioactive natural products and pharmaceuticals. 1 Therefore, the development of methods for the synthesis of functionalized indoles under mild conditions is an important task in synthetic chemistry. 2 In this respect, catalytic transformations enabling the direct functionalization of indole C-H bonds are particularly valuable because they afford complex indole structures with excellent step and atom economy. 3 We report herein a catalytic access to diverse polycyclic indolones starting from cheap and readily available indole precursors (Scheme 1). Importantly, such indolone motifs are found in a range of indole alkaloids 4 and are valuable intermediates in the total synthesis of related natural products. 5 Scheme 1. A photocatalytic strategy to access valuable polycyclic indolones.
Over the last decade, photoredox catalysis has emerged as a powerful tool for organic synthesis, allowing the generation of reactive free radical species under mild conditions and from simple precursors. 6 Notably, photoredox catalysis can be an efficient tool for indole functionalization. 7 Redox-active esters such as N-acyloxyphthalimides (NAPs) are versatile precursors of alkyl radicals through single-electron reduction followed by decarboxylation. 8 In particular, NAPs have been used in photocatalytic Minisci-type reactions to generate nucleophilic alkyl radicals which react with electron-deficient heterocycles such as pyridines or (iso)quinolines. 9 However, NAPs have rarely been applied to the functionalization of electron-rich heterocycles like indoles. 10 We reasoned that an intramolecular cyclization could overcome the polarity mismatch between radicals with a nucleophilic character and electron-rich aromatics. To this purpose, we studied the use of NAPs 2 derived from carboxylic acids obtained from the reaction of indoles and commercially available cyclic anhydrides (Scheme 2a). We expected these NAPs to undergo a single-electron transfer with an excited reducing photocatalyst, leading to alkyl radical 3 after fragmentation followed by decarboxylation. Radical 3 would then undergo a 5-exo-trig cyclization leading to dearomatized intermediate 4, which after oxidation and proton elimination would afford indolone product 6 (Scheme 2b). We studied the feasibility of the envisioned process with substrate 2a, readily accessed in two steps from indole and succinic anhydride. We first evaluated the use of organic dyes as photocatalysts. 11 When 2a was reacted with 5 mol% of the commonly used 4-CzIPN 12 (PC1, E red = -1.04 V vs. SCE) in DMSO under blue light irradiation, a small amount of 6a could be detected but most of the crude mixture consisted of unreacted starting material (Table 1, entry 1). Given the highly negative reduction potential of NAPs (E red = -1.3 V vs. SCE), we reasoned that a more reducing photocatalyst would facilitate a photoinduced electron transfer (PET) to the substrate, thus increasing the conversion of 2a. To this purpose, we performed the reaction in the presence of PC2, a highly reducing phenoxazine photocatalyst recently developed by Miyake and coworkers (E red* = -1.93 V vs. SCE). 13 Pleasingly, the yield of 6a significantly increased to 58% (entry 2). Based on these results, we then further evaluated fac-Ir(ppy)3 (E red* = -1.73 V vs. SCE), which proved to be a very efficient photocatalyst for the targeted transformation, leading to 6a in 67% isolated yield (entry 3). The use of other common solvents such as DMA or DMF was detrimental (entries 4-5). Of note, the presence of up to ten equivalents of water did not affect the yield of the reaction, so technical-grade DMSO could be used as the solvent for this study (entry 6). Furthermore, a control experiment revealed that the photocatalyst is required to observe the desired reactivity (entry 7). Finally, the use of a reduced catalyst loading (0.5 mol%) led to a similar yield after 14 h (entry 8). With optimal conditions in hand to promote the desired cyclization, we developed a more efficient one-pot protocol enabling the synthesis of indolone 6a starting directly from carboxylic acid 7a. To this purpose, we investigated the use of coupling agents such as dicyclohexylcarbodiimide (DCC) and diisopropylcarbodiimide (DIC) (Table 2, entries 1-2).
Pleasingly, the use of DIC led to a similar yield when compared to our previously optimized two-step protocol (entry 2). The low yield obtained with DCC may be due to the formation of a poorly soluble dicyclohexylurea byproduct, which might prevent sufficient light penetration into the reaction medium. Of note, the addition of a catalytic amount of DMAP for the coupling was detrimental to the overall process (entry 3). The scope of the reaction was then evaluated with a range of different anhydrides and indole derivatives (Scheme 3). Substrates derived from several succinic anhydrides and leading to the formation of primary, secondary and tertiary radicals afforded the desired pyrroloindolones 6a-d in good overall yields. Importantly, compounds 6b and 6c were obtained as single diastereoisomers. Pleasingly, substrates 7e-g derived from glutaric anhydrides also led to the formation of pyridoindolones, through a cyclization step which in this case occurs via a 6-exo-trig addition. Then, a variety of indoles with different substitution patterns were also evaluated for this process. Substrates bearing both electron-withdrawing and electron-donating groups were successfully implemented in our methodology, as shown with indolones 6h-v. The use of chlorinated and brominated indoles uneventfully led to the desired indolones 6m-o, which allow for further modifications through cross-coupling reactions. A pyrrole-derived substrate was also competent for this process, as shown with 6p. Finally, a range of 3-substituted indoles could be used to access indolones 6q-v.
Notably, several complex substrates derived from tryptamine, melatonin and tryptophan were successfully transformed into valuable indolones 6t-v in good yields. The structure of 6v was unambiguously confirmed by X-ray crystallographic analysis. 14 Scheme 4. Gram-scale reaction and synthetic applications.
To showcase the scalability of the process, we performed a gram-scale reaction using 4.25 mmol of 2q and a reduced catalyst loading of only 0.2 mol% without impacting the outcome of the reaction (Scheme 4). Then, to further demonstrate the utility of this method, we performed several transformations on compounds 6 to prove their versatility as synthetic intermediates. First, 6q was reduced with borane to access in a single step the pyrroloindole scaffold (see 8), which is found in many bioactive compounds, 15 including the flinderole alkaloids 16 and many pharmaceutically relevant small molecules. 17 Importantly, 6q could also be selectively hydrogenated with a catalytic amount of palladium on charcoal to access the important indoline scaffold quantitatively (see 10). Then, the indolone moiety was also reacted with soft nucleophiles to afford C2-alkylated free indoles, as exemplified with compound 9. Finally, electrophilic bromination of compound 6v led to complex pyrroloindoline 11, which is reminiscent of many naturally occurring alkaloids exhibiting a diverse range of biological activities. 18 The commercial availability of many succinic and glutaric anhydrides enabled us to efficiently synthesize a range of pyrrolo- and pyridoindolones using our methodology. However, the scarce availability of adipic anhydrides prevented us from accessing the valuable azepinoindolone scaffold. 4a-b,19 To circumvent this issue, we envisaged intercepting radical 3 with an external olefin to access radical 12, which would then add to the indole moiety to afford azepinoindolone 13, as described in Scheme 5a. As an inherent challenge to this strategy, the intermolecular Giese-type addition to the olefin must be kinetically favored over the intramolecular 5-exo-trig cyclization onto the indole. After some experimentation, we discovered that the use of acrylonitrile as a trapping olefin efficiently led to the desired azepinoindolones while only traces of the corresponding pyrroloindolones could be detected. 20 This strategy allowed us to access valuable azepinoindolones 13a-f in moderate to good yields (Scheme 5b).
Scheme 5. Synthesis of azepinoindolones.
In summary, we have developed a photocatalytic C-H alkylation strategy mediated by visible light that provides efficient access to a variety of relevant polycyclic indolones. The reaction is scalable and the indolone products can be further used as valuable synthetic intermediates to access other important scaffolds such as pyrroloindoles and (pyrrolo)indolines. Finally, the development of a challenging two-component process enabled the straightforward synthesis of functionalized azepinoindolones. We expect this methodology to find widespread use in the synthesis of indole-containing natural products and bioactive compounds.
"Chemistry"
] |
Integrated Sensing and Communications for 3D Object Imaging via Bilinear Inference
We consider an uplink integrated sensing and communications (ISAC) scenario where the detection of data symbols from multiple user equipment (UEs) occurs simultaneously with a three-dimensional (3D) estimation of the environment, extracted from the scattering features present in the channel state information (CSI) and utilizing the same physical layer communications air interface, as opposed to radar technologies. By exploiting a discrete (voxelated) representation of the environment, two novel ISAC schemes are derived with purpose-built message passing (MP) rules for the joint estimation of data symbols and status (filled/empty) of the discretized environment. The first relies on a modular feedback structure in which the data symbols and the environment are estimated alternately, whereas the second leverages a bilinear inference framework to estimate both variables concurrently. Both contributed methods are shown via simulations to outperform the state-of-the-art (SotA) in accurately recovering the transmitted data as well as the 3D image of the environment. An analysis of the computational complexities of the proposed methods reveals distinct advantages of each scheme, namely, that the bilinear solution exhibits a superior robustness to short pilots and channel blockages, while the alternating solution offers lower complexity with a large number of UEs and superior performance in ideal conditions.
A part of this article has been presented at the 2022 56th Asilomar Conference on Signals, Systems, and Computers, [1].
Within that context, a new research field called integrated sensing and communications (ISAC) [17]- [19], also known as joint communication and sensing (JCAS) [20]- [24], has recently gained significant attention as a promising technology to fulfill such requirements and enable new applications for B5G and 6G systems. In particular, ISAC technology seeks to enhance B5G and 6G systems by enabling both communication and environment sensing functionalities under the same wireless interface, thus realizing the concepts of ambient-sensing and environment-aware radio [25], which are crucial to emerging applications such as autonomous driving (AD) [26] and drone networking (DN) [27], besides offering new means to optimize system performance.
For instance, in B5G and 6G systems operating at high-frequency channels, which are sensitive to path-dependent scattering [28], [29], environment parameters of interest include not only the "usual" channel state information (CSI), but also the positions of users and obstacles that may lead to path blockages. In such systems, ISAC is an alternative to image-based path-blockage prediction approaches, crucial to mitigate the deleterious effects of blockages [30]- [32].
The prominent challenge of ISAC arises from the fact that the two independently developed wireless technologies involved, namely, wireless communications and radar systems [33], [34], are fundamentally based on distinct air interfaces, such that a concurrent deployment of existing waveforms is bound to suffer from performance degradation of both functions due to interference.
Aiming to frontally address this issue, the earliest family of ISAC approaches, known as radar and communication coexistence (RCC) [35], [36], consequently focused on minimizing the interference and maximizing the cooperation between the independently operated communications and radar subsystems sharing the same frequency spectrum. While the RCC strategy succeeds in managing interference and harmoniously allocating radio resources to operate both subsystems, the approach achieves relatively low spectral efficiencies and does not alleviate hardware costs, since the components remain separate for both subsystems [23], [35], [36].
In light of the fundamental drawbacks of RCC, techniques integrating the communication and radar functions into a single wireless interface have been recently proposed to truly realize the ISAC goal of jointly offering communication and sensing capabilities in a single system.
Literature [22]- [24] classifies such techniques into three types: a) Radar-centric ISAC (RC-ISAC) schemes, which realize an additional communication functionality over typical radar waveforms, for example by utilizing index modulation (IM) to encode information into multiple-input multiple-output (MIMO) radar signals employing, e.g., the carrier agile phased array radar (CAESAR) waveform [37], or the frequency modulated continuous wave (FMCW) waveform [38]; b) Communication-centric ISAC (CC-ISAC) schemes, which realize an additional environment sensing functionality over standard communication waveforms, typically by extracting radar parameters such as Doppler shift and delay from waveforms designed fundamentally for communications functions, such as the IEEE 802.11ad waveform [39], the orthogonal frequency-division multiplexing (OFDM) waveform [40], or the orthogonal time frequency space (OTFS) waveform [41]; and c) Dual-function radar communications (DFRC) schemes, which, while not having exclusive boundaries with the aforementioned RC-ISAC and CC-ISAC categories, are based on waveforms that can be adaptively or jointly optimized between the two functionalities [42], e.g., via waveforms designed based on mutual information [43] or the multi-beam approach in the mmWave bands [44]. Still, all these approaches are related by the fact that target sensing is enabled by the fundamental radar relationship between measurable physical quantities and information on the target [33], [34], that is, target range can be extracted from the delay of the signal, velocity from the Doppler frequency, and bearing from the angle-of-arrival (AoA).
Concomitant with the aforementioned methods, contributions have also recently been made to realize environment sensing capabilities not via target detection with radar, but rather via new types of environment sensing information such as ambient human activity [45], [46], and three-dimensional (3D) environment images [47]- [49]. In particular, the latter family of works exploits the voxelated occupancy grid [50]- [53] to discretize the environment into 3D cubic units of space representing its state (solid or void). Such methods exhibit a unique advantage in that the extracted information not only describes the location of objects, but also their 3D shape and orientation, in any desired resolution according to the voxelated model, enabling useful applications such as 3D environment mapping and ray-tracing propagation modeling [54], [55].
However, wireless voxelated imaging technology is still a very new notion in the context of ISAC, due to the inherently convoluted channel paths arising from the discretization of the environment scatterers, and the fundamental challenge that the unknown information symbols must be recovered simultaneously with the very large number of environment voxels.
In light of the above challenges, we offer in this article the following contributions: • An extension of the discrete voxelated environment model utilized in [47]- [49] is introduced, in which an empirical stochastic-geometric approach is incorporated to capture the viability of non-line-of-sight (NLOS) paths, in addition to an extension where the occupancy coefficients are not limited to 0 or 1, but instead can take on non-binary complex values, enabling reflection losses and phase shifts of reflected waves to be modeled 1 .
• A novel, scalable CC-ISAC scheme is proposed, in which the 3D voxelated environment image and the transmit symbols are estimated alternately via two distinct linear modules; • A novel, high-performance CC-ISAC scheme is proposed, in which the 3D voxelated image of the environment and the transmit data symbols are concurrently estimated via a single message passing (MP) module which leverages a bilinear inference framework; • Insights on the advantages of the two proposed CC-ISAC algorithms are provided via performance assessment and comparison against the state-of-the-art (SotA), which highlights the robustness of the bilinear method against short pilot lengths and path blockages, and the accuracy of the alternating solution in systems with many user equipment (UEs).
Notation: Scalar values are denoted by slanted lowercase letters, as in x, while complex vectors and matrices are denoted by boldface lowercase and uppercase letters, as in x and X, respectively.The transposition, complex conjugation, diagonalization, absolute value, and ℓ-th norm operators are denoted by (•) T , (•) * , diag(•), | • |, and || • || ℓ respectively, while E x (x) and Var x (x) respectively denote the expectation and variance of x with respect to its distribution P x (x).The sets of real and complex numbers are denoted by R and C, respectively, and N (µ, ν) and CN (µ, ν) denote the real and complex Normal distributions with mean µ and variance ν.
II. SYSTEM MODEL
The system model considered throughout the article consists of three parts: a) the environment model, where a voxelated occupancy grid is used to discretely approximate the true environment and its scattering properties, b) the channel model, which defines the unique channel paths arising from the voxelated environment model, and c) the signal model, where the uplink communication scheme between the UEs and the access points (APs) is described.
A. Environment Voxelation Model
The 3D image of an environment can be represented via a number of techniques, including the well-known point-cloud and ray-tracing methods, which are often utilized in robotics, machine vision and computer graphics [54], [56]. [Footnote 1: We clarify that although the message-passing rules derived in this article are for this extended paradigm, such that the proposed algorithms are fully generalized, binary-valued occupancy coefficients are considered in Section IV for the purpose of performance evaluation, in order to enable direct comparison with state-of-the-art (SotA) methods.] Another well-known method, however, is
the voxelated occupancy grid [50]- [53], where the region of interest (ROI) is represented as a cuboidal space of dimensions L_x × L_y × L_z, each denoting the lengths of the x, y, z-axes in meters, respectively, as depicted in Figure 1. In this model, the ROI is subdivided into a regular grid consisting of N_V ≜ N_x · N_y · N_z voxels, where N_x ≜ L_x/L_V, N_y ≜ L_y/L_V, and N_z ≜ L_z/L_V denote the number of partitions along the x, y and z-axes, respectively, and L_V is the edge length of a voxel in meters, which therefore corresponds to the image resolution.
The environment is then represented by a 3D tensor of dimensions (N_x × N_y × N_z), whose elements indicate the occupancy of the voxels, and thus whether that portion of the space is empty or filled with a given material. One may therefore consider, in general, each k-th voxel to be represented by an occupancy (a.k.a. scattering) coefficient v_k, with k ∈ {1, · · · , N_V}, where v_k = 0 indicates that the k-th voxel is empty, while an occupied voxel is indicated by a complex number, i.e., v_k ≜ β_k · e^{-jω_k} ∈ C. In such a model, the constants β_k and ω_k, which depend not only on the material itself, but also on the frequency and the angle of incidence of propagating signals [29], capture the effect of the material occupying a given voxel onto the electromagnetic wave reflected or refracted by it. For the sake of reducing the complexity of the ISAC algorithms to be introduced later, however, we will in this article consider a simplified model whereby the occupancy coefficients v_k take on discrete real values in the interval [0, 1].
Since phase rotations due to reflections are captured by channel estimation, this simplification is equivalent to the assumption that the ISAC waveform is narrowband, so that frequency-dependence can be ignored [29]. The incorporation of the geometry of the interaction between propagating waves and occupied voxels will be described in Subsection II-C.
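As a concrete illustration of the voxelation just described, the following minimal Python/NumPy sketch builds a toy occupancy vector v and the associated voxel-center coordinates; the ROI dimensions, object placement, and variable names are illustrative assumptions, not values from the article.

import numpy as np

# ROI dimensions in meters and voxel edge length (image resolution)
L_x, L_y, L_z = 10.0, 10.0, 3.0
L_V = 0.5
N_x, N_y, N_z = int(L_x / L_V), int(L_y / L_V), int(L_z / L_V)
N_V = N_x * N_y * N_z  # total number of voxels

# Occupancy tensor: 0 = empty, 1 = filled (binary special case of v_k)
occupancy = np.zeros((N_x, N_y, N_z))
occupancy[4:8, 4:8, 0:3] = 1.0  # a small box-shaped object inside the ROI

# Flattened occupancy (scattering) coefficient vector v of length N_V
v = occupancy.reshape(N_V)

# Voxel-center coordinates, used later when computing scattering angles
idx = np.indices((N_x, N_y, N_z)).reshape(3, N_V).T
voxel_centers = (idx + 0.5) * L_V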
B. Channel Model
Consider a scenario in which the ROI contains N_U single-antenna UEs, and N_A multi-antenna APs equipped with N_R receive antennas each. As illustrated in Fig. 2, the effective channels between the UEs and APs contain two distinct types of components, namely, line-of-sight (LOS) components, which are direct paths between the UEs and APs, and NLOS components, which encompass paths reflected at occupied voxels corresponding to parts of the environment, as described in Subsection II-A. Assuming that the operating frequency band is sufficiently high that the power of paths reflected more than once is negligible [28], [57], each NLOS component may be decomposed into two subpaths, the UE-to-voxel subpath and the voxel-to-AP subpath, which together with the voxel scattering coefficient comprise the aggregate NLOS channel.
In light of the above, the effective channel between the N_U single-antenna UEs and the ensemble of N_A N_R receive antennas of all APs can be compactly described by

G = H + A diag(v) B ∈ C^{N_A N_R × N_U},   (1)

where H ∈ C^{N_A N_R × N_U}, A ∈ C^{N_A N_R × N_V}, and B ∈ C^{N_V × N_U} are the channel matrices of the LOS paths, the voxel-to-AP NLOS subpaths, and the UE-to-voxel NLOS subpaths, respectively, whose elements are assumed to follow zero-mean complex Normal distributions with variances σ²_H, σ²_A, and σ²_B, respectively; while v ∈ C^{N_V × 1} is the vector containing all scattering coefficients of the voxelated grid.
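For concreteness, a minimal sketch of how the effective channel in equation (1) could be assembled numerically is shown below; the dimensions follow the text, but the i.i.d. channel draws, the sparsity level of v, and all variable names are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
N_U, N_A, N_R, N_V = 4, 2, 8, 1000
sigma2_H, sigma2_A, sigma2_B = 1.0, 1.0, 1.0

def cn(shape, var):
    # Zero-mean circularly-symmetric complex Normal samples
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

H = cn((N_A * N_R, N_U), sigma2_H)  # LOS paths: UE -> AP
A = cn((N_A * N_R, N_V), sigma2_A)  # NLOS subpaths: voxel -> AP
B = cn((N_V, N_U), sigma2_B)        # NLOS subpaths: UE -> voxel
v = (rng.random(N_V) < 0.05).astype(float)  # sparse binary occupancy vector

# Effective channel of equation (1): G = H + A diag(v) B
G = H + A @ (v[:, None] * B)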
Although not further exploited in this article, we emphasize that the channel model in equation (1) can be straightforwardly extended to a multi-carrier scenario by simply introducing frequency-selectivity, such that the environment variables are functions of the carrier frequency, i.e., G(f) = H(f) + A(f) diag(v(f)) B(f), with f denoting a specific frequency in the set F of all subcarrier frequencies.
We leave details for follow-up work, but it shall become evident that under such an extended model, the sensing component of the ISAC algorithm to be introduced in Section III can also be extended for even better performance, for instance by exploiting knowledge of channel correlation across carriers [58], and to incorporate additional features, such as the estimation of material types based on the frequency-dependence observed on the estimated scattering coefficients [29]. Since the above also requires incorporating message-passing rules that exploit cross-carrier correlation into the algorithms, such an extension will be pursued in future work, and only the frequency-independent (i.e., single-carrier) model of equation (1) will be assumed in this article.
Fig. 2: Illustration of the LOS and NLOS channel components and their subpaths.
C. Stochastic Geometric Environment Model
Notice that the channel model summarized by equation (1) implies that all paths between UEs, APs, and voxels are available. Although such an assumption is common in related literature (see e.g. [47]- [53]), in practice, many subpaths may not be available due to either physical phenomena (e.g. blockage by air-borne particles, absorption, or path loss) or the finite resolution of the voxelated model itself. In order to capture such realistic behavior, the work in [49] considers the occlusion effect of waves reaching the voxels, where the UE-to-voxel subpaths are assumed to be unavailable if a voxel is present near the path. This perturbation effect was approximated by applying a 3D Gaussian kernel convolution to the channel matrix, but it was acknowledged in [49] that this approach is a highly simplified model of the true physical phenomena. Building on the latter, we therefore seek to contribute to improving the voxelated grid model by considering the statistical feasibility of paths.
To this end, we refer to the physical phenomena occurring at the reflection of propagating waves, in particular, the fact that for any given frequency: a) a critical angle θ* exists such that, as illustrated in Fig. 3a, if the incidence angle θ > θ*, the wave is absorbed rather than reflected, and consequently the corresponding voxel-to-AP NLOS subpath is not available [29]; and b) the curvature of the surface exposed to the impinging wave may be such that no signal is reflected towards an AP [50], as illustrated in Fig. 3b. [Footnote 2: Although the phenomenon in Fig. 3b would reduce to the phenomenon in Fig. 3a for infinitely small voxels, such extreme resolution leads to prohibitive complexity of the algorithms, so that modeling both phenomena distinctly is preferred in practice.] But since modeling such phenomena at each voxel is far too complex to carry out, especially if the resolution of the voxelated ROI is large, we instead employ a statistical approach whereby the angle between the impinging and reflected waves at each voxel, hereafter referred to as the scattering angle, and consequently, the availability of each voxel-to-AP NLOS subpath, are considered.
In light of the above, the following stochastic-geometric model is proposed to integrate the aforementioned phenomena into the channel matrices of the voxelated grid environment model.
First, the positions of the UEs and the APs are discretized into the 3D grid of the voxelated ROI, such that their positions may be described by voxel coordinates. [Footnote 3: It is also assumed that the multiple antennas of the APs are placed within a single voxel, such that their AoAs are assumed to be identical, albeit each with a different channel path coefficient.] Denoting the 3D coordinates of a UE, an AP, and an environment voxel by c_U, c_A, and c_V, respectively, the scattering angle θ of the NLOS path reflected at the voxel is given by

θ = arccos( ⟨c_U − c_V, c_A − c_V⟩ / (‖c_U − c_V‖ ‖c_A − c_V‖) ),   (3)

where arccos(·) denotes the inverse cosine trigonometric function.
The empirical probability distribution functions (PDFs) of the scattering angles θ can be obtained by evaluating equation (3) for all possible combinations of admissible locations of the UE, AP and voxel within the ROI, respectively given by c_U, c_A and c_V, and examples of the latter for an environment voxelated at various resolutions are shown in Fig. 4. [Footnote 4: In principle, the analytical distribution of scattering angles in eq. (3) can be derived, either by studying the N_V^3 constituent angles within the highly subdivided geometry of the grid, or via a stochastic geometry-based grid analysis [59]. To the best of the authors' knowledge, however, solutions to this problem exist only for vertex-to-vertex distance distributions [60], with the case of vertex angles never addressed before. Since the focus of this article is to develop ISAC estimators, we leave this matter for a future contribution and meanwhile consider the proposed highly-accurate approximate model, as illustrated in Figure 4.]
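A minimal numerical sketch of the scattering-angle computation in equation (3) is given below; the coordinates in the example and the function name are illustrative assumptions.

import numpy as np

def scattering_angle(c_U, c_A, c_V):
    # Angle at the voxel c_V between the impinging (UE-to-voxel) and
    # reflected (voxel-to-AP) directions, as in equation (3)
    u = np.asarray(c_U, dtype=float) - np.asarray(c_V, dtype=float)
    a = np.asarray(c_A, dtype=float) - np.asarray(c_V, dtype=float)
    cos_theta = np.dot(u, a) / (np.linalg.norm(u) * np.linalg.norm(a))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Example: UE at (0, 0, 1), AP at (10, 0, 3), reflecting voxel at (5, 2, 1.5)
theta = scattering_angle([0, 0, 1], [10, 0, 3], [5, 2, 1.5])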
Fig. 4: Probability distribution of scattering angles in a voxelated grid.
It is visible in Figure 4 that for a sufficiently large N_V, the distribution of scattering angles θ can be well modeled by a mixture of two scaled beta distributions, namely,

p_θ(θ) ≈ γ f_B(θ; a_1, b_1) + (1 − γ) f_B(θ; a_2, b_2),

where γ is a weighing factor, f_B(·; a, b) denotes a beta density scaled to the support of θ, and the shape parameters a_1, a_2, b_1, b_2 are optimised to match the empirical data obtained by evaluating equation (3), with c_U, c_A and c_V taken randomly within the voxelated grid. Utilizing this empirical stochastic-geometric approach, the increasingly popular voxelated model utilized in various related works [47]- [49], [51]- [53] can be improved by the incorporation of random blockages of NLOS subpaths, in proportion to the complementary cumulative distribution of the approximated beta mixture PDF, in accordance with the scattering angles at each voxel, as a function of a selected critical angle.
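The empirical distribution of scattering angles, and the resulting random blockage of NLOS subpaths, can be approximated by Monte Carlo evaluation of equation (3) over random positions in the grid, as in the sketch below; the ROI dimensions, the sample count, the critical angle, and the simple hard-threshold blockage rule are illustrative assumptions, and the beta-mixture fit itself would be performed on the collected samples.

import numpy as np

rng = np.random.default_rng(1)
L_x, L_y, L_z, L_V = 10.0, 10.0, 3.0, 0.5
n_vox = np.array([int(L_x / L_V), int(L_y / L_V), int(L_z / L_V)])

def random_voxel_center():
    # Uniformly pick a voxel and return its center coordinates in meters
    return (rng.integers(0, n_vox) + 0.5) * L_V

def angle_between(u, a):
    c = np.dot(u, a) / (np.linalg.norm(u) * np.linalg.norm(a))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Draw many (UE, AP, voxel) triples and evaluate the scattering angle
samples = []
for _ in range(100_000):
    c_U, c_A, c_V = random_voxel_center(), random_voxel_center(), random_voxel_center()
    if np.array_equal(c_U, c_V) or np.array_equal(c_A, c_V):
        continue
    samples.append(angle_between(c_U - c_V, c_A - c_V))
angles = np.array(samples)

# Declare NLOS subpaths with scattering angle above a chosen critical
# angle as blocked (their entries in A would be zeroed out)
theta_star = np.deg2rad(120.0)
blocked_fraction = np.mean(angles > theta_star)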
D. Signal Model
Consider an uplink communication scenario between the group of N U UEs and a total of N A APs, under the models described above in Subsections II-A through II-C, with the N A APs connected to a central processing unit (CPU) via error-free fronthaul links of unlimited throughput, such that the received signals at all N A N R receive antennas are aggregated without loss of information or delay.
Then, the aggregated received signal matrix Y, over N_T discrete transmission instances (symbol slots), is given by

Y = GX + W ∈ C^{N_A N_R × N_T},   (4)

where G ∈ C^{N_A N_R × N_U} is the effective channel matrix as described in Subsection II-B; X ∈ C^{N_U × N_T} is the transmit signal matrix collecting the symbols from all N_U UEs, each drawn from the constellation X of cardinality N_X; and W ∈ C^{N_A N_R × N_T} is the receive additive white Gaussian noise (AWGN) matrix with independent and identically distributed (i.i.d.) elements drawn from CN(0, N_0), where N_0 is the noise variance.
The transmit signal X comprises a pilot block X_P ∈ C^{N_U × N_P} and a data block X_D ∈ C^{N_U × N_D}, i.e.,

X = [X_P X_D],   (5)

where N_P and N_D denote the number of symbol slots allocated to the pilot and data sequences, respectively, with N_T = N_P + N_D; and where the pilot symbol matrix X_P is assumed to be perfectly known at the CPU.
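A minimal end-to-end sketch of the signal model in equations (4) and (5) is shown below; the QPSK constellation, the random pilot construction, and the parameter values are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
N_U, N_ARX, N_P, N_D, N_0 = 4, 16, 8, 56, 0.01  # N_ARX = N_A * N_R
N_T = N_P + N_D

# QPSK constellation with unit average symbol power
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X_P = qpsk[rng.integers(0, 4, (N_U, N_P))]  # pilot block, known at the CPU
X_D = qpsk[rng.integers(0, 4, (N_U, N_D))]  # data block, to be estimated
X = np.concatenate([X_P, X_D], axis=1)      # equation (5): X = [X_P X_D]

# Effective channel G (here i.i.d. for illustration; see equation (1)) and AWGN
G = (rng.standard_normal((N_ARX, N_U)) + 1j * rng.standard_normal((N_ARX, N_U))) / np.sqrt(2)
W = np.sqrt(N_0 / 2) * (rng.standard_normal((N_ARX, N_T)) + 1j * rng.standard_normal((N_ARX, N_T)))

Y = G @ X + W                               # equation (4)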
In view of equations (1), (4) and (5), the goal of the ISAC schemes to be hereafter presented can be concisely stated. The communication objective of the CPU is to estimate the unknown data symbol matrix X_D, under the knowledge of only the pilot symbols in X_P, after the estimation of the channel matrix G. In turn, the sensing objective is to extract the voxelated model of the environment, as the vector of occupancy coefficients v, from the said channel matrix G.
III. PROPOSED ISAC SOLUTION
By combining the channel decomposition model of equation (1), the received signal model of equation (4), and the transmit signal model of equation (5), the overall system model becomes

Y = (H + A diag(v) B) [X_P X_D] + W,   (6)

where the unknown variables of interest are the environment (voxel coefficients) vector v and the data symbol matrix X_D.
Similar to related literature [47]- [49], [51]- [53], it is assumed hereafter that the LOS channel H, as well as the voxel-to-AP and UE-to-voxel NLOS subpath components A and B in equation (6), are known, which still leaves an atypical relationship between the two variables diag(v) and X_D. In particular, the latter unknowns are related, under equation (6), by an asymmetric bilinear system, requiring sophisticated algorithms to be either decoupled or jointly estimated [61]- [66].
In light of the above, we propose in the sequel two novel ISAC solutions for the joint estimation problem of the asymmetric bilinear system expressed by (6), by leveraging the Gaussian belief propagation (GaBP) MP framework.The first proposed method incorporates two separate linear estimation modules for each of the unknown variables v and X D , estimating them in an alternate fashion via feedback between the two modules.In turn, the second method utilizes only a single bilinear estimation module which enables the simultaneous extraction of both unknown variables.
A. Proposed Alternating Linear ISAC Algorithm (AL-ISAC)
The first proposed method, dubbed the "Alternating Linear ISAC (AL-ISAC)" algorithm, leverages two separate linear GaBP MP modules based on equation (6), which are respectively described as: 1) a linear GaBP module to estimate the environment vector v, given the transmit signal X; and 2) a linear GaBP module to estimate the transmit signal matrix X, given the environment vector v.
The two linear GaBP modules and the constituting MP rules are derived in the next subsections, followed by the construction of the full ISAC algorithm encompassing the two derived modules.
1) Linear GaBP for Environment Vector v:
The linear GaBP algorithm operates on only one unknown variable, so that in order to estimate v, the entire transmit signal matrix X must be assumed known, in addition to the known channel matrices H, A, and B. Assuming knowledge of X, the system in (6) may be reformulated as

Y = HX + A diag(v) BX + W,   (7)

where, since the channel matrices and the product BX ∈ C^{N_V × N_T} are known, the described system in (7) is linear in v, for which a corresponding factor graph may be obtained as in Fig. 5.
In this factor graph, each received signal element y_{m,t} corresponds to a factor node (square nodes) and each element v_k of the unknown environment variable v, with k ∈ {1, · · · , N_V}, corresponds to a variable node (circular nodes). In turn, each (m, t)-th factor node on the factor graph has a corresponding soft-replica of each variable node element v_k, denoted by v̂_{k:m,t}, with the corresponding mean-squared-error (MSE) given by

ψ^v_{k:m,t} ≜ E[ |v_k − v̂_{k:m,t}|² ].   (8)

Fig. 5: Factor graph of the linear system formulated for the estimation of v.
Utilizing the soft-replicas and their MSEs, the factor nodes perform soft-interference cancellation (IC) on the received signal y_{m,t} for each variable v_k, yielding the IC symbol ȳ^v_{k:m,t}, where x_{n,t} and w_{m,t} are the (n, t)-th and (m, t)-th elements of X and W, with n ∈ {1, · · · , N_U}, and c_{k,t} ≜ Σ_{u=1}^{N_U} b_{k,u} x_{u,t} represents the aggregated incident signal at the k-th voxel from all UEs. Next, by leveraging the central limit theorem (CLT), the sum of the residual interference and noise terms is approximated as a complex Gaussian scalar, such that the PDF of the interference-cancelled symbol ȳ^v_{k:m,t} can be modeled as a complex Normal distribution with corresponding conditional variance ν^v_{k:m,t}, in which N_0 is the noise variance.
The conditional variances for all v_k are computed by all factor nodes, and the corresponding messages are sent to the variable nodes. Consequently, the k-th variable node obtains the N_A N_R N_T conditional variances from all factor nodes, from which the extrinsic belief ℓ^v_k is computed. In GaBP, self-interference is suppressed by excluding the conditional PDF of ȳ^v_{k:m,t} at the k-th variable node to yield ℓ^v_{k:m,t}, a Gaussian PDF with extrinsic mean µ^v_{k:m,t} and variance Ψ^v_{k:m,t}. Finally, by following Bayes' rule, the updated posterior may be obtained by combining the PDF of the extrinsic belief with the prior distribution of v_k, from which the updated soft-replica is obtained as the posterior mean, where the normalizing factor in the denominator is the updated posterior integrated over C.
Similarly, the updated error variance of the soft-replica is obtained by evaluating the variance of the updated posterior. Given the information on the voxel coefficient distribution, i.e., binary coefficients with a discrete prior given by a Bernoulli distribution with a given occupancy probability, the soft-replica and its MSE can be efficiently obtained in closed form. The updated soft-replica and the MSE of each variable node are then transmitted back to all factor nodes for the next iteration of the GaBP MP algorithm. After a given number of GaBP iterations to refine the soft-estimates, a belief consensus is taken at each variable node across the soft-replicas to obtain a single estimate ṽ_k, with consensus mean μ^v_k and variance Ψ^v_k, which is consequently used to yield the final estimate. The above MP equations (8) to (20) fully describe the linear GaBP module to estimate the voxel environment v, which is summarized as a pseudocode in Algorithm 1. It is important to note that the signal matrix X is assumed given, i.e., X is not estimated by Algorithm 1, and is therefore kept constant throughout the iterations, as seen by the pre-computation of the effective signals c_{k,t}. The complementary block dedicated to the estimation of X given v will be described subsequently, leading to an alternating approach as previously mentioned. The algorithm also contains a damping update mechanism [67] using a damping factor η ∈ [0, 1] to prevent early convergence to a local optimum.
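To make the Bernoulli-prior update concrete, the sketch below combines a complex-Gaussian extrinsic belief (mean mu, variance Psi) with a binary {0, 1} occupancy prior to produce the updated soft-replica and its MSE; the function and variable names are ours, and the expression is the standard scalar Bayesian denoiser implied by the text rather than a verbatim transcription of the article's equations.

import numpy as np

def bernoulli_denoiser(mu, Psi, rho):
    # Posterior mean (soft-replica) and variance (MSE) of a {0, 1}-valued
    # voxel coefficient, given a complex-Gaussian extrinsic belief with
    # mean mu and variance Psi, and occupancy prior P(v_k = 1) = rho.
    w1 = rho * np.exp(-np.abs(mu - 1.0) ** 2 / Psi)    # hypothesis v_k = 1
    w0 = (1.0 - rho) * np.exp(-np.abs(mu) ** 2 / Psi)  # hypothesis v_k = 0
    p1 = w1 / (w1 + w0)      # posterior occupancy probability
    v_hat = p1               # posterior mean of a {0, 1} variable
    mse = p1 * (1.0 - p1)    # posterior variance
    return v_hat, mse

# Example: an extrinsic belief leaning towards "occupied"
v_hat, mse = bernoulli_denoiser(mu=0.8 + 0.1j, Psi=0.2, rho=0.05)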
In order to derive the linear GaBP module to estimate the signal matrix X given v, the system model in (6) is first reduced to the one in (4), with the effective channel G ≜ H + A diag(v) B, and with the corresponding factor graph as illustrated in Fig. 6, where each element x_{n,t} of the unknown signal matrix X, with n ∈ {1, · · · , N_U}, corresponds to a variable node (circular nodes).
Notice that in this case, since the variable X is two-dimensional, the linear system results in a factor graph that is separated into "pages", such that the variable nodes and factor nodes corresponding to different t ∈ {1, · · · , N_T} are independent and messages are only exchanged by nodes with the same time index t. Other than this separation of factor graphs, the derivation of the MP rules is similar to that of Algorithm 1.
Fig. 6: The separated factor graphs of the linear system for X.
The soft-replica of the transmit signal matrix element x_{n,t} at the (m, t)-th factor node is denoted by x̂_{n,t:m,t}, with a corresponding MSE ψ^x_{n,t:m,t}. The soft-replicas and MSEs are used in the soft-IC of the received signals for x_{n,t}, with the resulting conditional PDF modeled as a complex Normal distribution with conditional variance ν^x_{n,t:m,t}. The conditional PDFs are combined with self-interference cancellation at the variable nodes to yield the extrinsic beliefs ℓ^x_{n,t:m,t}, with extrinsic mean µ^x_{n,t:m,t} and variance Ψ^x_{n,t:m,t}. In turn, the symbols have a uniformly discrete prior over the symbol constellation X. For the particular case of M-ary quadrature amplitude modulation (M-QAM) with M = 4, the soft-replica and MSE computations reduce to efficient closed-form expressions involving the average symbol power of the constellation X and the hyperbolic tangent function tanh(·).
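For the 4-QAM (QPSK) case just mentioned, the closed-form update reduces to per-dimension hyperbolic-tangent expressions; the sketch below is a minimal illustration in our own notation, assuming a Gray-mapped QPSK constellation with average symbol power Es, and is not a verbatim transcription of the article's equations.

import numpy as np

def qpsk_denoiser(mu, Psi, Es=1.0):
    # Soft-replica (posterior mean) and MSE of a Gray-mapped QPSK symbol
    # with average power Es, given a complex-Gaussian extrinsic belief with
    # mean mu and variance Psi, under a uniform prior over the constellation.
    a = np.sqrt(Es / 2.0)  # per-dimension amplitude
    x_hat = a * (np.tanh(2.0 * a * np.real(mu) / Psi)
                 + 1j * np.tanh(2.0 * a * np.imag(mu) / Psi))
    mse = Es - np.abs(x_hat) ** 2  # posterior variance
    return x_hat, mse

x_hat, mse = qpsk_denoiser(mu=0.6 - 0.4j, Psi=0.3)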
Finally, the consensus PDF, which is computed after the iterations, is a complex Normal distribution with consensus mean μ^x_{n,t} and variance Ψ^x_{n,t}. Equations (21) to (33), collected in the form of a pseudocode in Algorithm 2, fully describe the linear GaBP module for the estimation of the signal matrix X given the environment vector v, which together with the previously described module for the estimation of v given X completes the proposed AL-ISAC scheme. All that remains is to describe the alternating procedure used to estimate both unknown variables, which is addressed in the sequel.
Algorithm 2: Linear GaBP Estimator for Signal Matrix X. Inputs: Received signal matrix Y, channel matrices H, A, and B, environment vector v, noise variance N_0, and prior distribution of transmit symbols P_{x_{n,t}}(x_{n,t}). Outputs: Estimated transmit signal matrix X̃. (The termination criterion can be set in accordance with Section IV-C, as described in Algorithm 1.)
3) Combined Alternating Modular Structure:
With the two estimation modules given by Algorithm 1 and Algorithm 2, either of the two variables v or X may be estimated, assuming full information of the other variable. However, the inherent problem of ISAC in equation (6) is that neither of the variables is fully known, such that the linear modules may not be directly applied for estimation.
To address the problem, the proposed AL-ISAC algorithm successively applies the two linear GaBP modules to estimate the two sets of variables. To enable this, the received signal is separated into the blocks corresponding to the pilot phase and the data phase, as

Y_P = (H + A diag(v) B) X_P + W_P,   (34a)
Y_D = (H + A diag(v) B) X_D + W_D,   (34b)

where Y_P ∈ C^{N_A N_R × N_P} and Y_D ∈ C^{N_A N_R × N_D} are the pilot-phase and data-phase blocks of Y, and W_P and W_D are the corresponding blocks of the noise matrix W. First, by using only the pilot phase of the system (34a), Algorithm 1 is applied to estimate the initial environment vector ṽ_init with the pilot block X_P as the known input signal matrix.
Next, by using the data phase of the system (34b), Algorithm 2 is applied to estimate the unknown data block X̃_D using the initial environment estimate ṽ_init as the known input environment vector. Finally, the environment vector is refined by using Algorithm 1 again, but with the initial environment estimate ṽ_init as the initialization value of the soft-replicas at all factor nodes, and [X_P X̃_D] as the input transmit signal matrix. The described AL-ISAC algorithm is illustrated in a schematic form in Fig. 7, and summarized as pseudocode in Algorithm 3.
Algorithm 3: Proposed Alternating Linear ISAC (AL-ISAC) Algorithm. † Inputs: Received signal matrix Y, channel matrices H, A, and B, pilot matrix X_P, noise variance N_0, prior distributions of the environment and transmit symbols P_{v_k}(v_k) and P_{x_{n,t}}(x_{n,t}). Outputs: Estimated environment vector ṽ and estimated data signal matrix X̃_D.
Using only the block corresponding to the pilot sequence, i.e., t = {1, • • • , N P }: 1: Estimate initial environment ṽinit via Algorithm 1, using pilot X P as known signal input.
Using only the block corresponding to the data sequence, i.e., t = {N P + 1, • • • , N P + N D }: 2: Estimate data signal XD via Algorithm 2, using ṽinit as known environment input.
Using both the pilot and data blocks, i.e., t = {1, · · · , N_P + N_D}: 3: Refine the environment estimate ṽ via Algorithm 1, using [X_P X̃_D] as the known signal input and ṽ_init as the initialization of the soft-replicas.
4: Output ṽ as the estimated environment vector, and X̃_D as the estimated data signal matrix. † Although our simulations indicate that a single iteration (as shown in Fig. 7) is sufficient, the linear GaBP modules in steps 2 and 3 can be iterated multiple times, with feedback at the modular level and possibly adaptive MP denoising between feedback loops. These, and other potential improvements, remain open points for a follow-up work.
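A high-level sketch of the alternating procedure is given below; estimate_v and estimate_X are hypothetical placeholders standing in for the linear GaBP modules of Algorithms 1 and 2, and the interface shown is an assumption made for illustration only.

import numpy as np

def al_isac(Y, H, A, B, X_P, N_0, prior_v, estimate_v, estimate_X):
    # Alternating procedure of Algorithm 3 with placeholder GaBP modules
    N_P = X_P.shape[1]
    Y_P, Y_D = Y[:, :N_P], Y[:, N_P:]

    # Step 1: initial environment estimate from the pilot block only
    v_init = estimate_v(Y_P, H, A, B, X_P, N_0, prior_v)

    # Step 2: data detection on the data block, using v_init as the environment
    X_D_hat = estimate_X(Y_D, H, A, B, v_init, N_0)

    # Step 3: refined environment estimate from pilots plus detected data
    X_full = np.concatenate([X_P, X_D_hat], axis=1)
    v_hat = estimate_v(Y, H, A, B, X_full, N_0, prior_v, v_init=v_init)

    return v_hat, X_D_hat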
Despite having a potential complexity advantage, especially for scenarios with large numbers of UEs, as shown later in Subsection IV-A, the alternating approach of the AL-ISAC algorithm has the drawback of a heavy dependence on the length of the pilot sequence X_P, which, as shall be shown in Section IV, affects the performance of both environment and data signal estimation, in addition to the obvious trade-off with the total communication throughput. Aiming to circumvent this deficiency, in the next subsection we propose another ISAC method in which both v and X_D are estimated simultaneously via a bilinear inference method.
B. Proposed Bilinear ISAC Algorithm (Bi-ISAC)
In this section, we develop a new ISAC algorithm in which the sensing and communication variables v and X D are estimated in parallel, by using a bilinear message passing technique which incorporates the uncertainty of both estimates at each iteration, thus requiring only a single estimation module to acquire both variables, as illustrated in Fig. 8.
We start by observing that the unique asymmetric bilinear relationship of v and X_D, as per equation (7), prevents the application of recently discovered bilinear estimators, such as the bilinear generalized approximate message passing (BiGAMP) [61], which operates only on symmetric systems described by equations in the form Y = VX + W for the joint estimation of the unknowns V and X; or the parametric BiGAMP [64], [65], which works on systems with a known parametric structure. In contrast to these two examples, the problem dealt with here is, as described by equation (6), in neither of the aforementioned forms, nor can it be transformed to fit general bilinear forms, which implies that new, purpose-built bilinear Gaussian belief propagation (BiGaBP) [62], [63], [66] MP rules must be derived for its solution. Therefore, the BiGaBP message passing is performed on a tripartite factor graph as illustrated in Fig. 9, where the factor nodes (square nodes) are the received symbols, and the two sets of variable nodes (circular nodes) correspond to the environment vector v and the signal matrix X, respectively. An important distinction is made between the two types of variable nodes, which is that a data variable node receives messages only from the N_A N_R factor nodes corresponding to the same time instance t, while an environment variable node receives messages from all N_A N_R N_T factor nodes.
Fig. 9: The tripartite factor graph of the bilinear system.
We highlight the higher complexity of the factor graph in Fig. 9 compared to those of the linear GaBP schemes shown in Figs. 5 and 6, due to the asymmetric and embedded system structure of equation (6) in relation to both variables together, as opposed to those of linear systems where one variable is considered at a time. Other than that, the messages transferred over the graph edges carry the same information as in the linear GaBP of Section III-A, i.e., the soft-replicas, MSEs, and conditional PDFs of the variables of interest. However, since neither of the unknowns is known, except through soft-replicas, the corresponding calculation of the messages must incorporate the uncertainties in both variables, in the form of the respective MSEs.
In light of the above, similarly to Section III-A, the exchanged messages are constructed on the basis of soft-replicas of the variables. Since the entries of X corresponding to t ∈ {1, · · · , N_P} are pilot symbols X_P, the corresponding soft-replicas are set to their known values, i.e., x̂_{n,t:m,t} = (x_P)_{n,t} ∀t ∈ {1, · · · , N_P}, with the corresponding MSE values set to 0. The remaining soft-replicas and MSEs for t ∈ {N_P + 1, · · · , N_T} are as given in equations (8) and (21).
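A minimal sketch of this initialization is shown below; the function name, the uninformative zero-mean initialization of the data entries, and the use of the average symbol power Es as their initial MSE are illustrative assumptions.

import numpy as np

def init_soft_replicas(X_P, N_D, Es=1.0):
    # Initial soft-replicas and MSEs of the transmit symbols for Bi-ISAC:
    # pilot entries are pinned to their known values with zero MSE, while
    # data entries start from an uninformative prior.
    N_U, N_P = X_P.shape
    x_hat = np.concatenate([X_P, np.zeros((N_U, N_D), dtype=complex)], axis=1)
    mse = np.concatenate([np.zeros((N_U, N_P)), Es * np.ones((N_U, N_D))], axis=1)
    return x_hat, mse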
The complete description of the proposed bilinear ISAC (Bi-ISAC) algorithm is summarized in the form of a pseudocode in Algorithm 4, and the corresponding equations of the message passing rules are elaborated in the following.
In contrast to the proposed methods, the SotA scheme of [47] assumes a fully known intelligent reflective surface (IRS) in the ROI, and strongly relies on sparsity in the received signal, offered by means of an SCMA interface between the UEs and the AP, while our contributions are not limited to such conditions. Instead, the proposed methods operate over fully dense received signals and with no support of IRSs.
Due to this distinction in system set-up, an adequate system parametrization must be considered for a fair comparison of our proposed algorithms against the SotA method of [47]. Specifically, in the SCMA scheme of [47], each single-antenna UE transmits an M-bit code by utilizing d_f out of R orthogonal frequency bands, which is received by a single AP with N_R receive antennas. Since the considered system for our proposed algorithms is the single-frequency model (4), the diversity gain between each UE and the CPU is mimicked by setting the number of APs as N_A = d_f, such that N_A N_R = d_f · N_R and the number of nodes and edges of the final factor graph is the same in both systems. Fig. 11 illustrates the sensing and communication performances of the two proposed ISAC algorithms in comparison to the SotA ISAC algorithm of [47], in terms of the MSE and SER, respectively. First, in Fig. 11a, the sensing performance, i.e., the MSE of the voxel occupancy coefficients, is evaluated. Under equivalent system parameters, in particular with a pilot ratio of ρ = 0.5, the MSE of the proposed Bi-ISAC algorithm is found to significantly outperform that of the SotA method at all signal-to-noise ratio (SNR) values, while the AL-ISAC is found to be slightly outperformed by the latter in the high-SNR regime. The value ρ = 0.5 is taken from [47] in order to enable direct comparison.
Then, in Fig. 11b, the communication performances of the ISAC systems are evaluated in terms of the SER of the estimated symbols.It can be seen that both proposed algorithms exhibit superior symbol estimation performances compared to the SotA, with the Bi-ISAC exhibiting the additionally desirable feature that no error floors are observed even at higher SNRs.
All in all, the results corroborate the claim that both proposed methods generally outperform the SotA method of [47] in both sensing and communication functionalities.
C. Convergence Behavior of the Proposed Algorithms
In view of the superior performance of the two proposed ISAC algorithms in comparison with the SotA [47], we proceed in this section to further analyze the two proposed ISAC algorithms in detail, aiming also to clarify the advantages of each relative to the other. To that end, we first study the convergence behavior of the proposed ISAC algorithms so as to obtain insight on the appropriate MP termination criteria and damping parameters to be utilized. It can be observed in Fig. 12a that while different values of η_v all result in convergence of the MSE, the convergent value is sensitive to the parameterization of η_v, especially for the Bi-ISAC, which suffers from high MSE with low damping, as opposed to the AL-ISAC, which is less prone to such errors in both the initial (white circles) and the final (black circles) estimation.
Similar and consequent trends in the SER are observed in Fig. 12b, where convergence is also achieved in all cases, but the convergent value is largely affected by the MSE performance and η_v, while the effect of η_x on the final result is not as prominent as that of η_v.
In light of the above results, the parameterization η_x = 0.5 and η_v = 0.9 and the convergence criterion of λ = 100 iterations are selected in the following simulations to ensure a reliable convergence behavior.
D. Robustness Analysis of the Proposed Algorithms
To that end, we shall utilize performance metrics for the sensing and communication functions of ISAC systems other than the MSE of the estimated voxel coefficients and the SER of the estimated communication symbols, which were used in [47]- [49] and the previous section.
For the sensing function in particular, we remark that metrics used for radar-based ISAC cannot be used directly, due to the unique voxelated occupancy grid-based approach followed here. It is therefore sensible to instead introduce a new metric, referred to as the voxel-occupancy-error-rate (VOER), which measures the rate of false-positive (FP) and false-negative (FN) elements, defined as the incorrect estimation of an occupied voxel element in the presence of an empty ground-truth, and the incorrect estimation of an empty voxel element in the presence of an occupied ground-truth, respectively. Mathematically, the VOER is therefore defined as VOER ≜ E[‖v − ṽ‖_0]/N_V, where v is the ground truth, ṽ is the estimate vector, and ‖ · ‖_0 denotes the ℓ_0-norm of a vector.
Notice that for the trivial all-empty (or "blind") estimator, which returns ṽ = 0_{N_V × 1}, the VOER reduces to E_v ≜ E[‖v‖_0]/N_V = VOER_empty, which is the average sparsity of the environment. We can therefore utilize this figure as an absolute reference of performance, in the sense that VOER ≪ VOER_empty indicates good sensing performance by a given ISAC method.
Finally, instead of the SER often used in related literature, we opt to evaluate the communication performance of the proposed ISAC schemes in terms of the more descriptive BER, defined as BER ≜ E[B_e]/B, where B_e denotes the number of erroneously detected data bits of X_D, and B is the total number of bits conveyed in X_D.
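Both figures of merit can be computed directly from the estimates, as in the sketch below; the hard-decision threshold for the voxel coefficients and the Gray-mapped QPSK bit mapping are illustrative assumptions.

import numpy as np

def voer(v_true, v_hat, threshold=0.5):
    # Voxel-occupancy-error-rate: fraction of voxels whose (thresholded)
    # estimated occupancy differs from the binary ground truth
    return np.mean((v_hat > threshold) != (v_true > threshold))

def qpsk_ber(X_D_true, X_D_hat):
    # Bit-error-rate for Gray-mapped QPSK: one bit per I and Q dimension
    bits_true = np.concatenate([np.real(X_D_true) > 0, np.imag(X_D_true) > 0], axis=None)
    bits_hat = np.concatenate([np.real(X_D_hat) > 0, np.imag(X_D_hat) > 0], axis=None)
    return np.mean(bits_true != bits_hat)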
Fig. 1 :
Fig. 1: Illustration of a voxelated grid map-based model of the environment of a given ROI.
P3 > 3 $
at h u n av ai la b le AP UE LOS UE-to-AP NLOS voxel-to-AP NLOS UE-to-voxel (a) Path unavailability due to obtuse scattering.
P
at h u n av ai la b le AP R e. ec te d w av e UE LOS UE-to-AP NLOS voxel-to-AP NLOS UE-to-voxel (b) Path unavailability due to skewed surface.
Fig. 3 :
Fig. 3: Illustration of physical phenomena leading to the unavailability of propagation paths.
and where γ is a weighing factor and the quantities a 1 , a 2 , b 1 , b 2 are shape parameters optimised to match the empirical data obtained by evaluating equation (3), with c U , c A and c V taken randomly within the voxelated grid.
Figure 12 depicts the convergence behavior of the proposed ISAC algorithms in terms of the MSE of the voxel coefficient soft-replicas v̂_{k:m,t} and the SER of the data symbol soft-replicas x̂_{n,t:m,t}, for three selected combinations of their respective damping parameters η_x ∈ [0, 1] and η_v ∈ [0, 1], following the damped update rule given in [67], in which the soft-replica x̂^(τ)_{n,t:m,t} at iteration τ is formed as a convex combination of the newly computed value and the previous iterate x̂^(τ−1)_{n,t:m,t}, weighted by the damping parameter.
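The short Python sketch below shows one common form of such a damped soft-replica update, in which the damping factor weights the newly computed value; the function name, the array interface, and this particular weighting convention are assumptions for illustration rather than the authors' exact implementation.

```python
import numpy as np

def damped_update(x_new, x_prev, eta):
    """Damped message-passing update of a soft-replica.

    eta = 1 reproduces the undamped update; smaller eta slows the update
    down, which typically stabilises the convergence of the iterations.
    """
    if not 0.0 <= eta <= 1.0:
        raise ValueError("damping factor must lie in [0, 1]")
    return eta * np.asarray(x_new) + (1.0 - eta) * np.asarray(x_prev)

# e.g. data-symbol soft-replicas at iteration tau:
# x_soft = damped_update(x_soft_candidate, x_soft, eta_x)
```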
Fig. 12: Convergence behavior of the proposed AL-ISAC and Bi-ISAC algorithms with varying damping factors η_x and η_v, with ρ = 0.1, E_v = 1.5%, SNR = 15 dB, as a function of MP iterations in a system with N_U = 4, N_A N_R = 12, N_V = 512, and N_T = 100.
Algorithm fragment (until the termination criterion is satisfied, do):
1: Compute the effective channel matrix G ≜ H + A diag(v)B;
2: Initialize the soft-replicas at all variable nodes as x̂_{n,t:m,t} = E_{x_{n,t}}[x_{n,t}]; ▷ ∀m, n, t
3: Initialize the MSEs at all variable nodes as ψ^x_{n,t:m,t} via (21); ▷ ∀m, n, t
11: Project x̂_{n,t} onto the symbol constellation X; ▷ ∀n, t
12: Output the projected x̂_{n,t} as the final hard estimate; | 15,964.2 | 2023-08-21T00:00:00.000 | [
"Engineering",
"Computer Science",
"Physics"
] |
FUNCTIONAL EVOLUTION EQUATIONS WITH NONCONVEX LOWER SEMICONTINUOUS MULTIVALUED PERTURBATIONS
In this paper we prove some existence theorems concerning the solutions and integral solutions of functional (delay) evolution equations with nonconvex lower semicontinuous multivalued perturbations.
Problem (P), for which E is reflexive, A(t, ·) is an m-accretive multivalued operator and F is a Lipschitz single-valued function, cf. Tanaka [2]; Problems (P) and (Q) without delay, cf. Cichon [3], [4], Ibrahim [5] and the references therein. 2. NOTATIONS AND DEFINITIONS. Let E* be the dual of E and E_o the Banach space E endowed with the weak topology σ(E, E*). If B is a multivalued operator from E to 2^E, then B is said to be accretive if for each λ > 0, x_1, x_2 ∈ D(B) (the domain of B), y_1 ∈ B(x_1) and y_2 ∈ B(x_2), we have ||x_1 − x_2|| ≤ ||x_1 − x_2 + λ(y_1 − y_2)||. We say that B is m-accretive if B is accretive and if there exists λ > 0 such that R(I + λB) = E, where I is the identity map. It is known that if B is m-accretive, then for every λ > 0 the resolvent J_λ = (I + λB)^{-1} and the Yosida approximation of B, B_λ = (I − J_λ)/λ, are defined everywhere. The generalized domain of B is defined by D*(B) = {x ∈ E : |B(x)| = lim_{λ→0} ||B_λ x|| < ∞}.
lira If E is metrizable then lower semicontinuity and lower semicontinuity in the Kuratowski sense are equivalent (cf [8], [9]) The following known result will be used in the sequel LEMMA .1 [6].For every I, let A(t, .)be an m-accretive multivalued operator from E to 2 E {} satisfying the following condition: (C1) There exist A0 > 0, a continuous function h I E and a nondecreasing continuous function L-[0, oo)--+ [0, oo)such that for all A $ (0, A0)and for almost , s I, Then D* (A(t, .))and D(A(t, .))are independent oft So if A is as in Lemma 2.1 we may write D'(A):= D'(A(,.)) and D(A):= D(A(, .));I respectively LEMMA .[10].Let E be a Banach space and M a compact metric space If T is a lower semicontinuous multivalued function on M and with nonempty closed decomposable values in L](I), then T has a continuous selection.on I--r, 0] and u(t) b(O) + f(s)ds; f E K is nonempty and convex, where K z {f LIE(I) If(t)[ _< 5 a e on I} If E is reflexive then X s compact subset of C6o ([ r, T]) If, in addition, E is separable then X is metrizable PROOF.It is obvious that X is nonempty, convex and equicontinuous and that the set {u(t) u X}; t I, is bounded So, ifE is reflexive then, X is relatively compact in CEo([r,T]) by Ascoli's theorem Let us verify that X is closed in CE ([-r, T]) Let (u,) be a sequence in X converging to u CEo ([-r, T]) Then u on [r, 0] and for each n > 1 there exists f,, K such that un(t)-b(0)+ f f(s)ds; I Since E is reflexive, K; is weakly compact in LE(I) Hence, the sequence (f,) has a subsequence, denoted again by (fn), converging weakly to f Ko Let G be a multivalued function from Eo to the nonempty closed subsets of E such that G is lower semicontinuous in the Kuratowski sense.If (xn) is a sequence converging to z in Eo, then for every z E, PROOF.Let y lim,_,ooinfG(x) Then there exists a sequence (y,) such that y G(xn)'n >_ and y, y as n c For any z E E we have which proves the first inequality The second inequality follows from the lower semicontinuity of G TttEOREM 3.1.Let E be a reflexive separable Banach space Let A(t,.)" I be an m-accretive multivalued operator from E to 26-{} satisfying condition (C) together with the following conditions (C2) There exist z > 0 such that for all x E E, the function w (I + A(t, .))-1belongs to L2E(I) (C3) For all r > 0 there exists 6(r) > 0 such that for all A > 0 and all x (A) with [[x[[ < r, IIJA(O,) 11 < ().
Let F be a measurable multivalued function from I × C_E([−r, 0]) to P(E) satisfying the following conditions: (F1) There exists a > 0 such that sup{||y|| : y ∈ F(t, u)} ≤ a, for all (t, u) ∈ I × C_E([−r, 0]).
(F2) For all t ∈ I, F(t, ·) is lower semicontinuous in the sense of Kuratowski from C_E([−r, 0]) to E. (F3) For all u ∈ C_E([−r, 0]), the multivalued function t ↦ F(t, s_t u) admits a measurable selection. Then for every ψ ∈ C_E([−r, 0]) with ψ(0) ∈ D*(A), the problem (P) has a solution.
PROOF.We split the proof into the following three steps (1) Let f E Ko { LE(I) II(t)ll _< aa.e on I}.Since A satisfies conditions (C), (C) and (C3), then by Theorem 4 of [5], there exists a unique absolutely continuous function uf :I E such that (i) u'l(t A(t, u(t)) + f(t) a e. on 1, u,(0) (0), (ii) Iluz(t)l _< 1 (O -[-1)T --[--L(r)suptezllh(t)[[ + 5(r),Vt I, where r (x + L(I](0)II)) + [A(0, x0)l, A G IBRAHIM (iii) the function f u/is continuous from Ko to CE (I) (2) Set X1 (U CE([-r, T]), u -= on [-r, 0] and u(t) (0) + ff(s)ds, f K s } By Lemma 3 1, X1 is a compact subset of Co([-r, T]) and is metrizable.Define a multivalued function T on X1 by Tl(u) {f Ka f(t) F(t, stu) a e on I} In this step we prove that T1 has a continuous selection V'X Ko For this purpose, we show that T satisfies the conditions of Lemma 2 2 Condition (F3) assures that the values of T1 are nonempty Moreover, if D is a measurable subset of I and gx, g2 Tx (u) for some u X1, then the function g Nogl + Nt-Dg2 belongs to T1 (u), where N is the characteristic function.Then the values of T1 are decomposable It remains to prove that T1 is lower semicontinuous Since X1 is compact metrizable in CEo([-r, T]), it suffices to show that T is lower semicontinuous in the Kuratowski sense So, let (u,) be a sequence in X converging to u X l, with respect to the topology on Co([r, T]) and let g Tl(U) Since F is measurable, then for all n > 1 the multivalued function B.(t)= {z F(t, stun)'llg(t)-zll-d(g(t),F(t, stu.))} has a measurable selection g I E. Thus, by Lemma 3 2, for all I, lira Ilg(t) g(t.)ll < lim supd(g(t),F(t, stu.)) <_ d(g(t),lirninf F(t, stu)) This means that T1 is lower semicontinuous and hence there exists a continuous function V X Ko such that V (x) T(x), V x X (3) Define a function O'Xl--X by O(z)= ul,f V(x) By (iii) of the first step, 0 s continuous Hence, by Tichonoffs fixed point theorem, there exists u X such that u u,f Vx(u) T(u) This means that u'(t) A(t,u(t)) + f(t) and f(t) F(t, stu) ae on I The theorem is thus proved.TItEOREM 3.2.Let H be a Hilbert space and F be a measurable multivalued function from 1 x CH([r, 0]) to P(H) satisfying conditions (F), (F) and (F3) Let F be a multivalued funcuon from I to the family of nonempty closed convex subsets of H, with compact graph G and satisfies the following conditions.
(F1) There exists -), > 0 such that IIx-projr(t)xll <_ ( t) for all (t,x) G and all z I, (t < -) (F2) The function (t,x) 5(x,F(t)) sup{(x,y) "y F(t)} is lower semicontinuous on I x Bo, where Bo is the relative weak topology Then for all CE([r, 0]) with b(0) F(0), the problem (Q) has a solution PROOF.We split the proof into the following three steps (1) Let f Ko Since F has a compact graph and satisfies conditions (F) and (F) then by Theorem 3 11 ], there exists a unique absolutely continuous function u/ I H such that (i) u(t) Nr(tl(u(t)) + f(t) a.e. on I, (ii) u/(0) (0), ul(t) r(t), v e z, (iii) Iluy(t)ll <_ T(7 + a),'t I and the function f ul is continuous from Ko tOCH (2) Set X2 {u CH([-r, T])" u on [-r, 0] and u(t) (0) + ff(s)ds, f K& and define a multivalued function T2 on X2 by T(u) {f Ko f(t) F(t, stu) a.e on I} As in the second step of the proof of Theorem 3.1 we can show that T2 has a continuous selection (3) Define the function 0 X2 X by 0(x) ul, f V2(x) As in the third step ofthe proof of Theorem 3.1, we can show that there exists a unique u X2 such that u u I, f T2(u) Clearly u is a solution of (Q) where f LI(I) By an integral solution of (P') we mean a continuous function u I D(A) with u(0) x0 such that Itu(t)-zll _< Ilu(s)-zll + [u(r)-z, f(r)-y]+dr, for each z D(A), y A(z) and 0 <_ s <_ < T, where It is known that [7] if A is an m-accretive operator then for each (xo, f) D(A) x LE(I), the problem (P*) has a umque integral solution uf, such that the function f u I is continuous In this section we are concerned with the existence of integral solutions of the functional evolution equation where F is a multivalued function from I CE([-r, 0]) to 2 E {), q; > 0 is the operator of translation defined in section and is a given function, belongs to CE([r, 0]) with b(0) D(A) By an integral solution of (P**) we mean a continuous function u-[-r, T] E with u on [-r,]0, such that u is an integral solution of the evolution equation u' (t) -A (u / f(t), u (0) (0), where f LE(I) and f(t) F(t, su), a e on I We say that the operator A E 2 E {} has the (M)-property ( [7], [12]) if for each xo D (A) and each uniformly integrable subset Q of LIE(I), the set {u s g Q} is a relatively compact subset of CE(I) where u s is the unique integral solution of the evolution equation u'() -A(u(t)) + g(t) a e on I; u(0) x0.It is well known that ( [7], [12]) if the proper operator -A generates a compact semigroup (via Crandall-Liggett's exponential formula [3], 13 ]), then A has the property (M) TiIEOREM 4.1.Let E be a Banach space and A an m-accretive multivalued operator from E to 2 E {} having the (M)-property.Let F be a measurable multivalued function from I x CE([-r, 0]) to the non-empty closed subsets of E satisfying the condition (Fs) together with the following conditions (F4) There exists a function h L (I) such that sup{llzll z e F(t, )} < h(t), V (t, ,) e C([-, 0]).
(Fh) For all I, F(t, .)CE([ r, 0]) E is lower semicontinuous in the Kuratowski sense Then for all CE([r, 0]) with q.,(0) D(A), the problem (P**) has an integral solution PROOF.Consider the set Q {f LE(I) [If(t)[[ _< h(t) a e. on I} One can easily show that Q is nonempty and uniformly integrable subset of LE(I) As mentioned above, for each f Q there exists a unique continuous function u I I D(A) such that u I is the unique integral solution of the evolution equation u'(t) A(u(t)) + f(t), u(0) q.,(0) and the function f uf is continuous from Q to CE(I).Let X" ={u,}CE([-r,T])'fQ}, where u)= on I-r, 0] and u" l=uI on I Since a has the property (M), X" is compact in the metric space CE([-r,T]) Now, define a multivalued function T on X:" by T(x) {f LE(I) f(t) F(t, sx) a e on I} As in the second step of the proof of Theorem 3 1, we can show that T has a continuous selection V X* LE(I) Also, define a function # "X* X*, #(z) u), f V(z) The function # is clearly continuous and hence has a fixed point a: E X* It is obvious that z is the desired solution 5. EXAMPLES In this section we give some examples illustrating the scope of the results developed in sections 3 and EXAMPLE 1.Let for all I, A(t) B-h(t) where h :I E is integrable and B is an m- accretive operator on E Clearly A(t) is m-accretive for all I Let .k> 0, s, I and z E Then 1 A(t,x) A(s,x)[] <_ -JA(t,x) JA(s,x)[ <_ [Ih(t) h(s) [.Hence condition (C1) of Lemma 2.1 holds EXAMPLE 2. In [6] there are several examples for operators A such that for every I, A(t) is m-accretive and satisfies condition (C1) EXAMPLE 3. Let H be a real Hilbert space with inner product (., .)and let #:H H be a proper lower semicontinuous convex function.The set i)#(x) {z H :#(x) <_ #(y) + (x-y, z} for each y H} is called the subdifferential of # at the point x We recall that D(0#)= {x H 0#(x) is nonempty}.Now if we define an operator A:D(A) DO(#) 2 H by A(x) O#(x), then A is m-accretive and the following conditions are equivalent [7] (i) For each ,k > 0, the resolvent JA is a compact operator (ii) The function # is of compact type (iii) The semigroup generated by the operator A is compact EXAMPLE 4. Take E L([0, zr]) and let us define A" D(A) C_ E E by Au u()(t) for each u D(A) where D(a) {u E u (/ E E, u(0) u(Tr) 0} The operator A is m-accretive and the semigroup {S(t) > 0} generated by -A(S(t) limn-oo (I+-A) -n) is compact [7]
3. EXISTENCE OF SOLUTIONS FOR THE PROBLEMS (P) AND (Q). To prove our results we need the following lemmas. LEMMA 3.1. Let ψ be an element of C_E([−r, 0]) and β a positive real number, and consider the set X = {u ∈ C_E([−r, T]) : u = ψ on [−r, 0] and u(t) = ψ(0) + ∫_0^t f(s) ds, f ∈ K_β}.
4. EXISTENCE OF INTEGRAL SOLUTIONS FOR THE PROBLEM (P) WHEN THE OPERATOR A IS INDEPENDENT OF TIME. In this section A denotes a multivalued operator from E to 2^E \ {∅}. Consider the evolution equation (P'): u'(t) ∈ −A(u(t)) + f(t) a.e. on I, u(0) = x_0 ∈ D(A), | 3,292.8 | 1998-01-01T00:00:00.000 | [
"Mathematics"
] |
Bandwidth Enhancement of Millimeter-Wave Large-Scale Antenna Arrays Using X-Type Full-Corporate Waveguide Feed Networks
A one-time-reflection equivalent model of the full-corporate waveguide feed networks is investigated to propose a novel approach to bandwidth enhancement of millimeter-wave large-scale antenna arrays. Theoretical analysis reveals that the in-phase superposition of multiple small reflections at specific frequencies, caused by the topology of the feed network, is also a significant factor affecting the achievable bandwidth of large-scale arrays, apart from the bandwidth performance of the individual power dividers and radiators composing the array. In order to weaken the undesirable effects on bandwidth caused by the small reflections, a full-corporate waveguide feed network with an X-type topology is then presented. Air-filled waveguide X-junctions and waveguide-fed horn sub-arrays are designed to fulfill new three-dimensional (3D) printed V-band antenna arrays. Excellent performance, including an improved bandwidth of about 40%, a gain of up to 27.8 dBi, and stable unidirectional radiation patterns with cross polarization of less than −32 dB, is confirmed experimentally by an 8 × 8 prototype. The theoretical model and the bandwidth enhancement scheme in this paper are valuable for realizing high-gain wideband antenna arrays for emerging millimeter-wave applications.
I. INTRODUCTION
MILLIMETER-WAVE communications are of great importance to various emerging applications with the need for ultra-high data transmission rates, including virtual reality (VR), virtual assistants, augmented reality (AR), advanced mobile devices and so on [1], [2], [3], [4]. In order to guarantee the wireless link budget and to effectively use the plentiful spectrum, a great deal of attention has been dedicated to enhancing both the gain and bandwidth of millimeter-wave antenna arrays [5].
A high radiation efficiency is required for increasing the gain of arrays. Several methods, including the air substrate [6], the bandgap structures [7], and the backed cavities [8], have been introduced to the antenna elements to suppress the undesirable surface waves travelling along radiating apertures that affected the array efficiency. Meanwhile, it has been verified that for an array with a large scale, the loss of the feed network was the main reason for restricting the achievable gain of the array [9]. Therefore, in comparison with the feed networks consisting of substrate integrated transmission lines [10], [11], [12], [13], [14], the air-filled waveguide feed networks without the dielectric losses were a better choice for the array design with a gain of above 30 dBi [15], [16], [17], [18], [19], [20].
In respect of array bandwidth enhancement, wideband antenna elements and feed networks are two approaches that have been persistently investigated in the literature. Since early millimeter-wave slot [21], cavity [22] and microstrip patch [6] antennas usually suffered from narrow bandwidths, most efforts were focused on widening the bandwidth of the radiating elements [23], [24], [25]. With the assistance of different types of millimeter-wave antennas with broadened bandwidths of more than 20%, fed by traditional full-corporate feed networks, the array bandwidths with relatively stable radiation characteristics were increased from about 10% or less to around 20% [26], [27], [28], [29], [30].
In order to improve the array bandwidth in a further step, feed networks with wider bandwidths were necessary as well, and thus have also been studied. For the widely used full-corporate feed networks with an H-type topology [10], [15], T-junctions operating as cascaded power dividers played a crucial role in the bandwidth improvement. Additional metallic pins [31], [32] and irises [20], [33], [34] were employed in the design of H-plane substrate-integrated and air-filled waveguide T-junctions to extend their operating bandwidths. Meanwhile, stepped or tapered waveguide structures are another kind of scheme to improve the impedance matching of both E-plane [16], [35] and H-plane [36] waveguide T-junctions. Benefiting from those wideband waveguide T-junctions, bandwidths of more than 30% have been achieved by several millimeter-wave arrays with promising radiation performance [16], [31], [32], [33], [35], [36], [37]. Unfortunately, a bandwidth of 40% that is desirable for the fifth generation (5G) millimeter-wave multi-band applications was still a challenge in the reported high-gain arrays with large scales. Recently, a Ka-band parallel-feed continuous transverse stub array was designed in [38], which realized a bandwidth of 40% and a gain close to 30 dBi. However, the relatively complex geometry was not flexible for the array design with a larger scale. On the other hand, the bandwidth of a parallel feed network consisting of microstrip lines was enhanced to more than 40% by considering the whole network as a continuous impedance transition device [39]. Moreover, a 4 × 4 microstrip line fed magnetoelectric (ME) dipole array on LTCC with a bandwidth of 45% was reported in [40]. Nevertheless, as aforementioned, those feed networks based on the TEM-mode substrate-integrated transmission lines were not promising for the high-gain requirements. The continuous impedance transformation in the entire air-filled waveguide feed network was not easy to realize as well.
For the purpose of further enhancing the bandwidth of the millimeter-wave antenna arrays with air-filled feed networks, a study based on the one-time-reflection circuit model [41] is implemented to analyze the impedance matching features of the full-corporate feed network in this paper. It reveals that apart from the reflections of the separated power dividers and the radiators, the in-phase superposition of the small reflections in the full-corporate feed network is another factor restricting the array bandwidth, especially for the large-scale arrays. To overcome the issue, a revised full-corporate feed network topology constructed by air-filled X-junctions rather than T-junctions is then presented, investigated, and used for designing a 60-GHz high-gain wideband horn array that is fabricated by employing the metallic three-dimensional (3D) printing technology [42], [43], [44]. The results and the proposed array in this work are valuable to the bandwidth enhancement of millimeter-wave large-scale antenna arrays.
The paper is organized as follows. Section II depicts the theoretical analysis on the full-corporate feed networks. Section III designs the wideband horn antenna array in the V-band. Measured results and discussions are given in Section IV. Eventually, a brief conclusion is presented in Section V.
II. THEORETICAL ANALYSIS ON BANDWIDTH ENHANCEMENT
Millimeter-wave antenna arrays with full-corporate waveguide feed networks are usually composed of radiating elements, feeding cavities acting as sub-array feed networks, and the major full-corporate feed networks, which are located in the upper, middle and lower layers of the designs respectively [9], [15], [17]. In this section, a simplified one-time-reflection circuit model of the full-corporate feed network is developed based on the theory of small reflections [41], which provides an effective means to analyze the reflection characteristics of the full-corporate feed networks with arbitrary sizes. With the help of the model, a method of enhancing the bandwidth of large-scale waveguide feed networks is then discussed.
A. ONE-TIME-REFLECTION MODEL OF THE FULL-CORPORATE FEED NETWORK
The topology of the conventional H-type full-corporate feed network consisting of a series of T-junctions is shown in Fig. 1 (a). The output ports of the feed network are connected with 2 × 2 sub-arrays that are considered as the loads of the feed network in this model. As the transmitting paths from the input port to all output ports are parallel to each other, the feed network can be represented by an equivalent cascade network as depicted in Fig. 1 (b), where the load Z_L represents the sub-array. Clearly, a feed network with a size of 2^N × 2^N is used for exciting an array with a size of 2^(N+1) × 2^(N+1), while the number of T-junctions constructing the cascade network is 2N. Meanwhile, the path length l_i between two neighboring T-junctions can be expressed as in (1), where d is the element spacing of the antenna array. As illustrated in Fig. 1 (b), due to the discontinuities caused by the T-junctions, small reflections exist in the feed network. In this study, only the one-time reflections from the T-junctions are considered and the loss of the feed network is omitted. The promising accuracy of the simplified model will be confirmed later. Therefore, the total reflection coefficient Γ_in at the input port of the feed network can be calculated as in (2), where Γ_i and Tr_i are the reflection and transmission coefficients of the T-junction T_i, and Γ_L is the reflection coefficient of the load. For the one-time-reflection model, Tr_i can be calculated accordingly. Then θ_i = βL_i, where β is the propagation constant of the feeding waveguide, and L_i is the total path length from the input port of the feed network to the T-junction T_{i+1}, which can be expressed as in (4), with L_0 = 0.
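The Python sketch below illustrates how such a one-time-reflection estimate can be evaluated numerically. It is an illustrative reconstruction of the described model rather than the authors' exact formulation: the junction coefficients, the load reflection, the propagation constant, the inter-junction path lengths, and the squared transmission factor (forward and return passage through each junction) are all assumptions of this sketch.

```python
import numpy as np

def one_time_reflection(gamma_j, tr_j, gamma_load, beta, lengths):
    """One-time-reflection estimate of the input reflection coefficient
    of a cascaded feed network.

    gamma_j    : complex reflection coefficients of the junctions T_1..T_K
    tr_j       : complex transmission coefficients of the junctions T_1..T_K
    gamma_load : reflection coefficient of the terminating load (sub-array)
    beta       : propagation constant of the feeding waveguide [rad/m]
    lengths    : path lengths l_1..l_K between neighbouring nodes [m]
    """
    gamma_in = 0.0 + 0.0j
    path = 0.0            # accumulated one-way path length to the current node
    through = 1.0 + 0.0j  # product of transmission coefficients traversed so far
    for g, t, l in zip(gamma_j, tr_j, lengths):
        # reflection of this junction seen at the input (round-trip phase 2*beta*path)
        gamma_in += through**2 * g * np.exp(-2j * beta * path)
        path += l
        through *= t
    # contribution of the load seen through the whole chain
    gamma_in += through**2 * gamma_load * np.exp(-2j * beta * path)
    return gamma_in
```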
The H-plane air-filled waveguide T-junctions used for the feed networks have the same geometry as Design A in [44]. The full-wave electromagnetic solver Ansys HFSS [45] is employed to determine the simulated reflection and transmission coefficients of the T-junctions, which are then introduced in the model as Γ_i and Tr_i to calculate Γ_in of the feed networks. Besides, Γ_L is set to 0 in this section. The simulated Γ_i of the T-junctions and the calculated |Γ_in| of the feed networks are plotted in Figs. 2 (a) and (b). On the other hand, the overall geometry of the two feed networks is modeled and simulated in the full-wave electromagnetic solver as well, whose simulated |Γ_in| results are provided in Fig. 2 (b) for comparison. It is noted that only the frequency range supporting the single TE_10 mode in the rectangular waveguide, namely f/f_c varying from 1 to 2, is considered here. Clearly, a good agreement between the calculated and simulated reflection coefficients of the feed networks can be observed in Fig. 2 (b), which demonstrates the promising accuracy of the presented one-time-reflection model for evaluating the reflection characteristics of the full-corporate feed networks.
B. REFLECTION CHARACTERISTICS OF THE H-TYPE FEED NETWORK
With the use of the one-time-reflection model proposed in the above section, the reflection characteristics of the H-type feed network are investigated in a further step in this section. For better revealing the influence of the feed network topology on the reflection features, the magnitudes and phases of the reflection coefficients of the T-junctions and the loads are set to the fixed values of −20 dB and 180° over the considered frequency range. In addition, d still equals 0.73λ_c. More null points can be observed on the |Γ_in| curve for the feed network with a larger size. Meanwhile, the peak values of |Γ_in| also increase with the size of the feed network and appear at fixed frequencies. Specifically, the frequencies with the largest values of |Γ_in| are marked as M1, M2 and M3, while the frequencies with the second largest values of |Γ_in| are marked as S1, S2 and S3 in Fig. 3 (a). Therefore, it is found that, apart from the individual T-junctions, the overall topology of the full-corporate feed networks is another factor affecting their impedance matching. Due to the increased peak values of |Γ_in| discussed above, bandwidth enhancement is a tougher task for a feed network with a larger size.
For the purpose of exploring the reason for the increased |Γ_in| at specific frequencies, the calculated phase delay θ_i from the input port of the feed network to each T-junction is depicted in Fig. 3 (b). At the normalized frequency of 1, marked in Fig. 3 (a), θ_i equals 0 for all the T-junctions. As a result, the small reflections from the T-junctions are in-phase at the input of the feed network, which leads to the maximum reflections. Moreover, the in-phase superposition of the small reflections can be achieved at the normalized frequencies of 1.06, 1.43 and 1.98, namely S1, S2 and S3 in Fig. 3 (a), for all the T-junctions save for one with an out-of-phase reflection, which results in the second largest values of |Γ_in| over the frequency range of interest. According to (1) and (2), for the H-type feed networks, the frequencies satisfying the in-phase superposition condition of the small reflections are determined by the element spacing d and the propagation constant β of the waveguide. However, it is found in this study that, with practical waveguide T-junctions, the equivalent transmission path length for each T-junction is slightly shorter than the physical length calculated by using (1) and (4); this has been considered in the calculation in Fig. 2 to obtain better accuracy, but is ignored in the analysis in Fig. 3 for simplification. Therefore, the frequencies with the peak values of |Γ_in| for the feed networks discussed in this and the last section, having the same d and β, are slightly different from each other. The detailed frequency values are listed in Table 1 for comparison.
Based on the aforementioned analysis, it is confirmed that, apart from designing wideband T-junctions, which has been widely addressed in the literature, the influence of the topology of the full-corporate feed network is also worth considering for enhancing the bandwidth of arrays. In fact, it may be even more important for widening the bandwidth of large-scale arrays, given the impedance matching features achievable with individual T-junctions.
C. BANDWIDTH ENHANCEMENT METHOD
By employing the idea revealed in the last section for the bandwidth enhancement of the arrays, two full-corporate feed networks with revised topologies, illustrated in Figs. 4 (a) and (b), are investigated in this section. Both d and β are kept the same as in the last section.
Different from the conventional H-type feed network, half of the T-junctions indicated in Fig. 1 (a) are replaced with the 1-to-4 power dividers shown in Fig. 4 (a). The other half of the T-junctions are separated into two right-angle transmission lines. The X-shaped junctions are then introduced in the feed network depicted in Fig. 4 (b). By this means, half of the T-junctions in the H-type feed network can be saved, and thus the number of nodes with small reflections in the one-time-reflection model of the modified feed network with a size of 2^N × 2^N is N, as shown in Fig. 4 (c). Consequently, the total reflection coefficient Γ_in at the input port of the revised feed networks is calculated with the summation taken over these N reflection nodes. Meanwhile, the path length l_i between two neighboring junctions is extended, and can be expressed accordingly for the feed network in Fig. 4 (a) and for the feed network in Fig. 4 (b). Three different kinds of full-corporate feed networks with the same size of 4 × 4, including the conventional H-type, the H-type composed of 1-to-4 power dividers, and the X-type, are analyzed, whose calculated |Γ_in| results based on the one-time-reflection models are compared in Fig. 5 (a). First, in comparison with the results of the traditional H-type design, the H-type feed network consisting of the 1-to-4 power dividers has a lower maximum value of |Γ_in|, because the reduced number of junctions weakens the in-phase superposition of the small reflections for a feed network with a fixed size. However, due to the same transmission paths for the small reflections, the frequencies with the peak values of |Γ_in| are not changed for the two H-type feed networks. Second, by replacing the 1-to-4 power dividers constructing the H-type feed network with the X-junctions, the path lengths between junctions are shortened, which makes the peak values of |Γ_in| move to higher frequencies, as exhibited in Fig. 5 (a). The calculated θ_i results for the X-type feed network are given in Fig. 5 (b), which confirm that the in-phase superposition occurs at the normalized frequencies of 1, 1.11, 1.39 and 1.76. Therefore, it is seen that the reduction in the path lengths of the small reflections is helpful to decrease the number of points with peak values of |Γ_in| within a fixed frequency range.
The effects of the reflection coefficients Γ_i and Γ_L on the impedance matching of the X-type full-corporate feed network are then considered in Fig. 6. It is noted that when Γ_i or Γ_L varies, the other one is fixed to −20 dB. As shown in Fig. 6, although different values of Γ_i or Γ_L do not shift the frequencies with the peak values of |Γ_in| for the feed network, |Γ_in| around those frequencies varies significantly with Γ_i and Γ_L. Clearly, Γ_i and Γ_L, which are determined by the specific designs of the power dividers and sub-arrays in practice, are not constant within the frequency range of interest. Therefore, smaller Γ_i and Γ_L around the frequencies with in-phase superposition of small reflections are desirable for guaranteeing an acceptable |Γ_in| throughout a wide band. Otherwise, the bandwidth of the feed network is difficult to enhance due to the existence of the strong reflection points. Fig. 7 provides the relation between |Γ_i| and |Γ_L| under the condition of |Γ_in| ≤ −10 dB for the H-type and X-type feed networks with different sizes over the entire frequency range supporting the single transmission mode. Due to the larger number of small-reflection nodes existing in the feed networks with a larger size, lower |Γ_i| and |Γ_L| are required for realizing |Γ_in| ≤ −10 dB. More importantly, taking advantage of the decreased maximum values of |Γ_in| discussed in the last section, the required |Γ_i| and |Γ_L| for achieving |Γ_in| ≤ −10 dB are easier to fulfill for the X-type feed network compared with the H-type one, as shown in Fig. 7. According to the aforementioned discussions, it can be summarized that the X-type full-corporate feed network topology, which weakens the in-phase superposition effect due to small reflections in the feed network, provides a new means to realize millimeter-wave wideband arrays with large sizes.
III. WIDEBAND LARGE-SCALE ARRAY DESIGN
Based on the bandwidth enhancement scheme investigated in Section II, a novel V-band wideband horn array fed by a multi-layered full-corporate feed network consisting of air-filled waveguide X-junctions is implemented in this section.
A. ARRAY GEOMETRY
An overall hollow configuration of the proposed antenna array is depicted in Fig. 8, which is composed of horn elements, feed cavities and a multi-layered feed network. Four horn elements are combined with a feed cavity to realize a 2 × 2 sub-array, which is seen as the load in the bandwidth enhancement scheme. Moreover, a set of air-filled rectangular waveguide X-junctions with varied sizes are assigned in different layers to prevent geometric overlap in the X-type full-corporate feed network. By using a series of short vertical waveguides to connect the X-junctions and sub-arrays located in adjacent layers, the input power from a standard WR-15 waveguide port can be transmitted into all radiators successfully with equal-amplitude in-phase excitations. It is noted that the overall thickness of the design increases with the array scale due to the use of the multi-layered X-type feed network. In addition, the minimum printable dimension of about 0.5 mm for the commercial 3D printing facility used is considered in the design process.
In order to achieve a wide operating band accompanied by promising radiation features, three considerations are addressed in the array design. First, the spacing between neighboring radiation elements is set to 4 mm, corresponding to one free-space wavelength at 75 GHz. Therefore, stable radiation patterns can be maintained by the array within the V-band. Second, the broad wall sizes of the waveguides in the feed network are set to around 3.4 mm, which leads to a single-mode frequency range between 44.1 and 88.2 GHz for the TE_10 mode. Third, based on the selected element spacing and broad wall sizes, the reflection characteristics of the whole X-type feed network can be assessed by using the model discussed in Section II. It is found in this study that the additional influence of the vertical waveguides on the total transmission path in the feed network should be considered for promising accuracy. The calculated in-phase superposition of the reflections appears at about 51 and 65.5 GHz for the feed network operating in the V-band. Consequently, sub-arrays and X-junctions with small reflection coefficients in the vicinities of these two frequencies are helpful to realize the wideband antenna array.
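As a quick sanity check on the first two design choices, the short Python snippet below (illustrative only, with hard-coded values taken from the text) verifies that a 4 mm spacing equals one free-space wavelength at 75 GHz and that a 3.4 mm broad wall gives TE10/TE20 cutoffs of roughly 44.1 and 88.2 GHz, i.e., the quoted single-mode range.

```python
c = 299_792_458.0          # speed of light in vacuum [m/s]

spacing = 4.0e-3           # element spacing [m]
a = 3.4e-3                 # broad wall width of the feeding waveguide [m]

wavelength_75ghz = c / 75e9     # free-space wavelength at 75 GHz, ~4.0 mm
f_c_te10 = c / (2 * a)          # TE10 cutoff, ~44.1 GHz
f_c_te20 = c / a                # TE20 cutoff, ~88.2 GHz (end of single-mode range)

print(f"lambda(75 GHz) = {wavelength_75ghz * 1e3:.2f} mm (element spacing {spacing * 1e3:.1f} mm)")
print(f"single-mode range: {f_c_te10 / 1e9:.1f} - {f_c_te20 / 1e9:.1f} GHz")
```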
B. 2 × 2 SUB-ARRAY
The detailed geometry of the 2 × 2 sub-array is shown in Fig. 9. The four E-plane horn elements have a radiation aperture size of l horn × l horn , which are loaded with short sections of square waveguides with a height of h 1 . The input ports of the horns are linked with vertical waveguides that act as the output ports of the feed cavity. As indicated in Fig. 9 (b), two triangular irises are cut in the cavity for impedance matching. Besides, the input waveguide port is in the center of the bottom surface of the cavity. In comparison with the sub-array designs reported previously in [44] and [46], two modifications are implemented in this work for achieving a better bandwidth performance. First, a portion of the feed cavity is bended upward to link with the vertical waveguides as shown in Fig. 9 (c), which makes it like a Y-shaped power divider in the xoz-plane with improved impedance matching features. Second, an inverted pyramidal slit with a height of h sp and a width of l sp is cut at the center of the sub-array aperture. By this means, a portion of the common metallic walls for the four horn elements is removed. It is found that the existence of the slit is helpful to adjust the impedance matching characteristics of the sub-array, due to the reduction of the mutual coupling between the neighboring horn elements in the H-plane. The final dimensions of the proposed 2 × 2 sub-array are listed in Table 2. Fig. 10 presents the simulated |S 11 | of the feed cavity whose output ports are connected with loads directly. It is seen that the height of the cavity h 5 is of importance to its matching characteristics. A simulated wide bandwidth of 43% for |S 11 | of less than −15 dB (from 51.5 to 79.9 GHz) is achieved by the cavity with h 5 = 0.85 mm. Furthermore, as illustrated in Fig. 11, the wideband features are not affected significantly when the matching loads are replaced with horn elements. The gain varies from 13.3 to 16.8 dBi throughout the operating band. Additionally, the simulated radiation patterns of the proposed sub-array drawn in Fig. 12 are symmetric and stable in both the E-and H-planes, while the cross polarization is less than −40 dB.
C. AIR-FILLED WAVEGUIDE X-JUNCTION
The air-filled waveguide X-junction constructing the X-type full-corporate feed network is shown in Fig. 13. The major portion of the X-junction is similar to the feed cavity used for the 2 × 2 sub-arrays in [36] and [44], in which the input power can be transmitted to four waveguides with equal amplitude. The four waveguides are extended along the ±45° directions and then connected with the vertical waveguides acting as the output ports. In order to get the required in-phase outputs, the upper two and the lower two vertical waveguides are located at opposite sides of the planar waveguides. Besides, four irises are added at the shorted ends of the planar waveguides for impedance matching. Detailed dimensions of the three types of X-junctions in the feed network are listed in Table 3.
The simulated S-parameters of the three X-junctions are illustrated in Fig. 14. An overlapped impedance bandwidth of 37% for a simulated |S_11| of less than −10 dB (from 50 to 72.5 GHz) is obtained. By adopting the analysis process illustrated in Fig. 7, the combination of the designed sub-arrays and the X-junctions is able to guarantee a reflection coefficient of less than −10 dB in the vicinities of 51 and 65.5 GHz for the proposed array. Moreover, the simulated magnitude and phase differences among the outputs are less than 0.2 dB and 4° in the frequency range between 50 and 73 GHz for the three X-junctions.
D. SIMULATED ARRAY PERFORMANCE
By combining the sub-arrays and the X-junctions presented above, the 8 × 8 and 16 × 16 horn antenna arrays fed by the X-type full-corporate feed networks are designed in the V-band, whose simulated |S 11 | and gain results are given in Fig. 15. Simulated impedance bandwidths of 40% (from 50.5 to 75.4 GHz) and 42% (from 50.8 to 77.7 GHz) for |S 11 | of lower than −10 dB are obtained by the 8 × 8 and 16 × 16 horn antenna arrays, respectively, which can almost cover the entire V-band. The simulated |S 11 | of the 16 × 16 array is still less than −10 dB at about 73.5 GHz even if |S 11 | of the 1 st X-junction is slightly higher than −10 dB, which can be attributed to the partial reflection cancellation at the input port of the array caused by other two junctions. The wide bandwidths demonstrate the effectiveness of the proposed bandwidth enhancement scheme based on the X-type feed network. Furthermore, the simulated gain gradually increases with the frequency and is stable across the whole operating band for both of the arrays. The maximum gain results of the arrays are 28.0 and 33.6 dBi separately, while the variation is less than 4.5 dB throughout the operating band. The additional loss caused by the printed surface roughness of about 10 μm [35] has been considered in the simulation.
IV. MEASUREMENT AND DISCUSSION
For experimentally verifying the operating characteristics, a prototype of the designed 8 × 8 horn antenna array shown in Fig. 16 was fabricated by employing a commercial 3D printing facility developed by EOS GmbH Company. The printing material is aluminum alloy AlSi10Mg powder. The resolution of the printing process is between 50 and 100 μm.
The overall size of the array is 34.5 × 34.5 × 21.8 mm 3 . The input port of the array is a standard WR-15 waveguide located on the bottom surface. The reflection coefficient of the array was measured by using a Keysight PNA-X network analyzer N5247B with a V-band frequency extender N5293AX03. In addition, the radiation performance was measured in a far-field anechoic chamber. The gain of the array was calculated by comparing with a V-band standard gain horn.
A. S-PARAMETERS
The measured and simulated |S 11 | of the 8 × 8 horn antenna array are given in Fig. 17. The measured impedance bandwidth of less than −10 dB is 38% (from 50.5 to 73.9 GHz), which is in reasonable agreement with the simulated one of 40% (from 50.5 to 75.4 GHz). The satisfying bandwidth not only confirms the advantages of the proposed design method, but also demonstrates the feasibility of using commercial 3D printing technology for realizing V-band antenna arrays. The minor difference between measured and simulated results is mainly caused by the printing tolerance. Besides, it should be mentioned that the measured |S 11 | is slightly higher than −10 dB at 72.9 GHz.
B. RADIATION PATTERNS
A comparison between the measured and simulated radiation patterns of the 8 × 8 horn antenna array in the E-and H-planes at 55, 65 and 74 GHz are depicted in Fig. 18. The radiation patterns are symmetrical and stable in the two planes orthogonal to each other over the operating band. The first sidelobe level is around −13 dB, which is close to the theoretical value for the array with a uniform aperture distribution.
Moreover, it can be seen that there are some discrepancies between the measured and simulated sidelobe levels for angles larger than 45°. The possible reasons are explained below. By comparing the fabricated prototype with the designed model, it is found that the printed horn aperture size is slightly smaller than the simulated one, which would affect the aperture field distribution and thus vary the sidelobes. Besides, the measurement setup close to the antenna under test may also influence the radiation pattern in the large-angle directions. In addition, the measured cross polarization of the proposed horn array is less than −32 dB. Fig. 19 shows the measured and simulated gain curves and the simulated directivity of the array. The measured and simulated gains of the printed 8 × 8 horn antenna array are up to 27.8 and 27.9 dBi with variations of 3.6 and 3.7 dB over the operating band, respectively. An extra metallic loss of around 0.56 dB caused by the surface roughness can be observed by employing the Hall-Huray model in the simulation [47]. In addition, by comparing the measured gain with the simulated directivity, the estimated radiation efficiency of the array is about 89%, which is comparable with the previous results of the 3D printed array in the Ka-band [44]. Besides, by comparing the simulated directivity of the array with the maximum achievable directivity, the aperture efficiency of the array is larger than 80% over the operating band.
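The efficiency figure quoted above follows from the ratio of measured gain to simulated directivity; the small Python sketch below makes that conversion explicit. The directivity value used here is illustrative only, since the exact simulated directivity curve is given in Fig. 19 rather than in the text.

```python
def radiation_efficiency(gain_dbi, directivity_dbi):
    """Radiation efficiency as the linear ratio of gain to directivity."""
    return 10 ** ((gain_dbi - directivity_dbi) / 10)

# Illustrative only: a ~0.5 dB gap between gain and directivity corresponds
# to an efficiency of about 89%, consistent with the value reported above.
print(radiation_efficiency(27.8, 28.3))  # ~0.89
```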
D. COMPARISON AND DISCUSSION
Geometrical and operating characteristics of the proposed and reported millimeter-wave wideband high-gain antenna arrays with air-filled full-corporate feed networks are summarized in Table 4. It is seen that the H-type full-corporate feed networks were adopted by the reported arrays. The bandwidths of the arrays in [20] and [32] are similar with those of antenna elements and power dividers composing the arrays. However, even though the bandwidths of the antenna elements and power dividers are improved to more than 50% in [36], the bandwidth of the array is still about 30%. Furthermore, by employing the proposed bandwidth enhancement scheme based on the X-type feed network, the bandwidth of the millimeter-wave high-gain array can be promoted significantly to around 40%. Meanwhile, benefitted from the X-junctions having a simple geometry, the tiny irises for the impedance matching of the wideband T-junctions in [36] are saved such that the requirement for the minimum printable dimension in the 3D printing process can be met in the V-band. The satisfying measured results of the printed prototype confirm the feasibility of realizing wideband high-gain antenna arrays operating at 60-GHz by using the commercial 3D printing technology.
In terms of the gain and radiation efficiency, because of a shorter transmission path in the presented X-type feed network compared with the counterpart in the H-type one with the same size, the metallic loss of the feed network can be reduced as well. Hence, the radiation efficiencies of the proposed arrays operating at higher frequencies are still comparable with the results of the reported Ka-band array in [36].
V. CONCLUSION
A bandwidth enhancement scheme for millimeter-wave high-gain antenna arrays with large sizes has been investigated by using a one-time-reflection model of the antenna array with full-corporate feed networks. It has been shown that, benefitting from the weaker in-phase reflection superposition and the shorter transmission path length, a wider bandwidth can be achieved by the proposed X-type full-corporate feed network. Based on the analysis, novel 60-GHz 3D printed horn arrays composed of wideband sub-arrays and air-filled X-junctions have been designed. Wide bandwidths of about 40%, stable radiation patterns over the operating bands and high-gain performance have been achieved by the arrays with sizes of 8 × 8 and 16 × 16. Taking advantage of the simple geometry that can be printed in a whole piece, the improved bandwidth, and the promising radiation characteristics, the presented design method and antenna array are attractive for advanced millimeter-wave wideband communications.
"Engineering",
"Physics"
] |
HIV and Solid Organ Transplantation: Where Are we Now
Purpose of Review We review the international evolution of HIV and solid organ transplantation over 30 years. We emphasise recent developments in solid organ transplantation from HIV-infected to HIV-uninfected individuals, and their implications. Recent Findings In 2017, Johannesburg, South Africa, a life-saving partial liver transplant from an HIV-infected mother to her HIV-uninfected child was performed. This procedure laid the foundation not only for consideration of HIV-infected individuals as living donors, but also for the possibility that HIV-uninfected individuals could receive organs from HIV-infected donors. Summary Recent advances in this field are inclusion of HIV-infected individuals as living organ donors and the possibility of offering HIV-uninfected individuals organs from HIV-infected donors who are well-controlled on combination antiretroviral therapy (cART). The large number of HIV-infected individuals on cART is an unutilised source of otherwise eligible living organ donors. HIV-positive-to-HIV-negative organ transplantation has become a reality, providing possible new therapeutic options to address extreme organ shortages.
Introduction
Solid organ transplantation is the best therapeutic option for those with end-stage organ failure, most commonly of the kidneys, liver or heart [1][2][3]. Worldwide, the pool of human donors (living or deceased) falls far short of the increasing demand for organs, resulting in the deaths of many patients waitlisted for transplant. Efforts to increase the pool of available donor organs include donation after cardiac death (DCD), increased use of extended criteria or marginal organs, living donor programmes for kidney and liver donation, and split liver transplantation [4][5][6][7]. Currently, xenotransplantation, whilst appealing, has failed to overcome barriers to implementation and is not a viable option [8]. As organ shortages persist, the option of utilising living or deceased donor organs from people with chronic viral infections such as HIV and hepatitis C virus (HCV) has become a therapeutic reality. These advances have been facilitated through the availability of improved treatment options, such as direct-acting antivirals (DAAs) for HCV, and triple-combination antiretroviral therapy (ART) for HIV [9][10][11]. Presently, most organs from HIV-positive deceased donors have been implanted into HIV-positive adult recipients. There is scant literature on transplantation in the paediatric HIV-positive population [12][13][14].
HIV is a complex condition, not only due to its pathogenesis and symptoms, but also because of the social milieu surrounding it. It is associated with systemic stigma, which persists even in countries like South Africa (SA) that have robust HIV management and prevention programmes, as well as a vocal and committed activist community. In other countries, HIV-related stigma seems more prolific, and HIV is still often negatively associated with homosexuality, intravenous drug use and promiscuity. It is this stigmatised framework that complicates the field of HIV and solid organ transplantation, because all decisions need to be considered in terms of the much broader and potentially harmful social implications for those involved, not to mention the medical ramifications.
The field of solid organ transplantation and HIV is rapidly evolving. Where we are now is that in 2017, our team at Wits Donald Gordon Medical Centre (WDGMC), part of the University of the Witwatersrand medical teaching complex in Johannesburg, SA, performed the first living donor liver transplant from an ART-suppressed HIV-infected donor mother to her HIV-uninfected child [15••]. This transplant is notable for being an intentional, controlled transplant of an HIV-positive donor organ in order to save the life of the recipient, something that had not been previously attempted. What makes this transplant particularly unique, however, is that we assumed HIV transmission to our recipient was a fait accompli, but this might not have occurred. This was the first report of a known HIV-positive person intentionally accepted as a living donor for any organ, worldwide.
In this fast-moving field, the utility of HIV-positive living donors is now being pushed further, and an HIV-positive living person recently donated a kidney to an HIV-positive recipient in the United States of America (USA) (https://www.scientificamerican.com/article/worlds-first-hiv-to-hiv-kidneytransplant-with-living-donor-performed-successfully/). With each bold move, the transplant community opens up new options for expanding the donor pool and enabling transplantation.
A recent excellent review has focused on the history and progress made in HIV-positive-to-HIV-positive transplantation, where pioneering work has been done [16•]. The current review extends the field to consider specifically HIV-serodiscordant organ transplantation from HIV-infected donors to HIV-uninfected recipients. In this paper, we attempt to answer the question of 'where are we', currently, in solid organ transplantation and HIV. We review the legislative process governing HIV and solid organ transplantation over time, and how it has evolved. We then explore the potential for HIV-positive donation and outcomes for recipients who have received HIV-positive donor organs. Finally, we consider some of the new possibilities in HIV and solid organ transplantation for the future, especially in terms of diagnostic challenges that we now face.
The History of Solid Organ Transplantation in HIV-Infected Individuals
Figure 1 depicts the key events that highlight progress in the field of solid organ transplantation and HIV. Prior to the emergence of triple-combination ART in 1996, survival of HIV-positive patients receiving an organ transplant was inferior to that of their HIV-negative counterparts [17,18], and the procedure was not widely performed. Since the advent of relatively widespread access to ART, HIV-infected patients are now accepted recipients of both HIV-infected and HIV-uninfected donor organs in specialised transplant centres worldwide. Studies suggest that HIV-infected recipients of HIV-uninfected solid organs have similar survival rates to those of HIV-uninfected recipients [19]. Transplant eligibility criteria for HIV-infected patients have also evolved, and HIV-positive status is no longer a contraindication for organ transplantation. HIV-infected transplant candidates are required to fulfil the same eligibility criteria as their HIV-uninfected counterparts, with some additional HIV-related stipulations [20].
Even with the advent of ART, however, the transplant community has viewed HIV in solid organ transplantation with scepticism. Some countries have adopted a more progressive approach through legislation than others. For instance, the Swiss Federal Act for Transplantation of Organs, Tissues and Cells has allowed transplantation of HIV-infected organs to HIV-positive recipients since 2007 [21].
The USA has a more complex history. In 1988, the use of organs from donors "infected with the etiologic agent for AIDS" was banned in the USA through the National Organ Transplant Act [19]. Recognising the growing need for transplantation in HIV-positive individuals, the HIV Organ Policy Equity (HOPE) Act was passed in the USA in 2013 and was largely a reaction to research taking place in South Africa [9,22]. HOPE reversed the federal ban and mandated criteria for conducting research involving the transplantation of organs from HIV-positive donors, both living and deceased, to HIV-positive recipients. The intention was to increase the number of organs available to HIV-positive recipients. With this change, numerous new considerations need to be taken into account [9,14,19].
At Johns Hopkins University Medical Centre in the USA, the "HOPE in Action" clinical trial (https://clinicaltrials.gov/ct2/show/NCT03500315)-that evaluates HIV-positive-to-HIV-positive kidney and liver transplantation-is currently taking place. The results of this study will further explore the feasibility of utilising HIV-positive organ donors and how this could expand the donor pool. This would be to the benefit of both HIV-positive and HIV-negative individuals waiting for transplants.
South Africa: Pioneering Through Necessity
It is noteworthy that the most comprehensive legislative measure in HIV transplantation, the HOPE Act of 2013 in the USA [23•], was primarily based on work done in Cape Town, SA, from 2008 onwards [24••]. Motivated by poor organ supply and lack of access to dialysis, Muller et al. [22] performed the first kidney transplants from HIV-infected donors to HIV-infected recipients in this setting. The use of organs from HIV-infected deceased donors was shown to be a safe, feasible alternative to dialysis [9].
South Africa has 7.2 million people living with HIV, 19% of the global burden, with 4.4 million receiving ART [25]. There seem to be two drivers of South Africa's influence on international HIV-positive organ donor frameworks. The first is likely the unique nature of our HIV pandemic-which is of such scope as to warrant the consideration of HIV-infected people as donors in the face of extreme donor shortages. Moreover, the two-pronged National Strategy of PMTCT and ART for all infected with HIV in South Africa has called for specific and situational consideration of HIV-positive people as organ donors [26]. Secondly, South Africa's transplant legislation-and accompanying health law-defers to the over-arching legal principle that informed consent is imperative for all such procedures. This applies to all medical and surgical interventions in this country regardless of their novelty [27]. In undertaking the first intentional transplantation of a liver segment from an HIV-positive living-donor mother to her HIV-negative child in 2017, our team had little local legal precedent from which to draw. The HOPE Act was relatively helpful in guiding donor selection; however, it does not address issues in selecting HIV-negative recipients. It is essential that national guidelines are formulated for HIV-positive donors and HIV-positive and HIV-negative solid organ recipients, and this process has commenced in South Africa.
HIV-Associated Organ Disease
Broadly speaking, solid organ-specific manifestations of HIV can be divided into pathogenic effects from (i) HIV infection of the organ; (ii) opportunistic pathogens infecting the organ, particularly in the absence of ART and with low CD4 counts; and (iii) consequences of ART. These effects, summarised in Table 1, are seen in the solid organs most commonly transplanted into HIV-positive recipients-the liver, kidney and heart. They are far less frequent in combined organ transplants such as kidney-pancreas and liver-kidney [28]. Early initiation of ART with widespread access to treatment and long-term retention in care has largely mitigated the afore-mentioned effects of HIV and the ensuing opportunistic infections on solid organs. Therefore, the "post-ART" era comprises a pool of potential living donors with well-controlled HIV whose survival is equivalent to those without HIV, despite the deleterious effects of some ART regimens. This is particularly relevant for countries with high HIV infection rates, good treatment access programmes and severe organ shortages, like SA.
HIV-Infected Individuals as Living Donors
In living donation, wellbeing of the donor is paramount and either a single kidney or a segment of the liver may be utilised. Good evidence from healthy HIV-negative donors confirms that one can live a normal life with one kidney and the liver's capacity to regenerate ensures adequate liver function after donation.
Specifically relating to the kidney, theoretical concerns have been raised that even with well-controlled HIV on ART, HIV-associated kidney disease (Table 1) may compromise the donor's remaining kidney function. However, there are no data to confirm or refute this. It could be argued that if there was no evidence of HIV-associated kidney disease prior to transplant and donors remain on ART after donation, future HIV-related compromise of kidney function is unlikely. Recently, the first HIV-positive living donor kidney transplant was successfully performed in the USA. Longitudinal follow-up will begin to provide some answers.
In relation to liver donation, living donor liver transplants from HIV-positive donors in the USA have not been performed due to concerns of a possible increased risk for surgical complications. However, this is contrary to a substantial body of evidence confirming equivalent outcomes [28]. Inferior outcomes of HIV-positive recipients after liver transplant in the subgroup with HIV and HCV co-infection have also been flagged; however, these outcomes were prior to the advent of DAAs for treatment of HCV [28,29]. Other concerns pertain to potential risks of hepatic injury from HCV co-infection and/or ART-related hepatotoxicity that might compromise a living donor after transplant, placing the donor at higher risk for end-stage liver disease [19,30].
Transplantation Outcomes in HIV-Positive Recipients
US and European studies have demonstrated favourable outcomes in HIV-infected recipients of uninfected kidneys and livers. The relative risk of rejection and graft failure is higher than in HIV-negative patients, but the HIV itself does not seem to increase overall mortality [10,31].
Stock et al. [32] reported that 150 HIV-positive patients who received organs from HIV-negative donors demonstrated good survival and that HIV-positive patients could safely undergo organ transplantation. Muller et al. [9] described reasonable outcomes in the transplantation of 27 HIV-infected recipients with HIV-infected deceased-donor kidneys. Patient survival was 84% at both 1 and 3 years, while 5-year survival was 74%. Graft survival at 1, 3 and 5 years was 93%, 84% and 84%, respectively. These findings were similar to those in HIV-infected recipients receiving HIV non-infected organs. Whilst rejection rates were slightly higher in this cohort compared to Stock et al. [32], patient and graft survival was comparable at 1 and 5 years, respectively. A large, multicentre Italian study described long-term transplant outcomes of liver, kidney, heart, lung and combined kidney-pancreas transplants in HIV-infected recipients [14]. Twenty-nine transplant centres participated, and 257 qualifying solid organ transplants were performed during the study period. Kidney and liver transplants were most common. The inclusion criteria for transplantation in HIV-positive recipients were the same as those in HIV-negative recipients, with additional criteria being a CD4 count greater than 200 cells/mm3, viral suppression on ART and the absence of AIDS-defining events. The primary cause of death post-transplant was HCV co-infection, particularly in liver and kidney recipients. Outcomes in HCV/HIV co-infected patients were reported to be poorer than those in HIV mono-infected patients. HCV infection was also more aggressive in the HIV/HCV co-infected group than it was in the non-HIV infected group. Specifically, survival in liver transplant patients with HCV co-infection was only 50%, while kidney transplant patients showed better outcomes with 95% survival. The poorer outcome in liver patients appeared to be associated with chronic HCV, and to a lesser extent HBV infection. With the advent of DAAs, these figures are expected to improve in the future.
Data from the USA appears to demonstrate better survival than data from Italy [33]. One multicentre study compared data on HIV-infected transplant recipients to age- and race-matched data in HIV-uninfected recipients obtained from the United Network for Organ Sharing (UNOS) [34]. There was no statistical difference in the cumulative survival rate at 1, 2 and 3 years in HIV-infected patients (87%, 73% and 73%) as compared to the HIV-uninfected group (87%, 82%, 78%). In the HIV-infected group, poorer outcome was associated with HCV co-infection, CD4 counts of less than 200 cells/mm3 prior to transplantation and poor tolerability of ART post-transplantation.
Unintentional Organ-Associated HIV Transmission
There are few reported examples of inadvertent HIV transmission events, where deceased organ donors were HIV-infected, but this was not detected on screening tests pre-transplant [35-40]. One article cited laboratory error as the cause of unintentional transplants from an HIV-positive donor to five HIV-negative recipients [41]. In this case, the heart, liver, both kidneys and a single lung were implanted. All recipients received ART within 48 h post-transplant. At 4 years post-transplant, all recipients were still on ART and demonstrated undetectable HIV viral load and CD4 > 200 cells/mm3. No opportunistic infections were reported. Graft and patient survival was 100%.
Since the inception of our paediatric liver transplant programme at WDGMC in 2005, we have been involved in one case of inadvertent HIV transmission. We observed seroconversion in a paediatric patient who received a liver graft from a deceased donor in the window period for HIV infection at time of death (unpublished). The recipient of the HIV-infected donor liver at our centre is doing well and stable on ART.
Intentional HIV-Positive-to-HIV-Negative Transplantation
HIV-positive-to-HIV-negative transplantation currently presents an ethical dilemma, because the primary objective should be procuring a disease-free organ for transplantation. The notion of implanting an infected organ, especially in the context of HIV, is met with shock and disbelief. However, the context is vital. Worldwide, but especially in developing countries like SA, there is a very limited pool of donor organs available for patients with end-stage organ failure. Children are particularly vulnerable in this regard because the need to size-match organs requires paediatric donors, which are few and far-between. Furthermore, with the success of the PMTCT programme, HIV-negative children are being born to HIV-positive mothers. Should these children require liver transplantation, their best chance of success is a related living donor. Many children succumb to organ failure before they have been transplanted. At WDGMC, 15-20% of children listed for liver transplantation die on the waiting list. For example, 72 children were listed for liver transplantation at the beginning of 2018. By the end of 2018, 22 remained on the list. During 2018, we transplanted 42 children, 12 children died and the remainder were removed from the waiting list because they recovered, became un-transplantable or presented with nutritional challenges that needed to be managed before re-listing. This scenario grounded the decision taken by our team to undertake the world-first living donor liver transplant from an HIV-positive mother to her HIV-negative child. We faced numerous ethical issues, which we carefully considered. Ultimately, it was concluded that it was in the best interests of the child to live with HIV, rather than to face certain death resulting from complications of end-stage liver failure due to biliary atresia [42•].
The transplant was undertaken as part of a research study, and Institutional Review Board (IRB) approval from the Wits Human Research Ethics Committee (Medical) [Clearance number M170290] was obtained prior to the procedure. Engagement with the IRB, and experts across numerous relevant disciplines, allowed us to consider the ethical implications from many different angles. A separate article detailing the ethics of the procedure has been published [42•]. The primary ethical issues we faced involved protecting the autonomy of the mother as donor, as she had on several occasions expressed a wish to donate despite her HIV status and with full knowledge of the transmission risk to her child. In this context, it was vital that the mother was carefully informed and counselled about the additional risks and the many unknown variables we faced. To this end, our Independent Donor Advocate (IDA) was invaluable in promoting the mother's autonomy and communicating with the transplant team on her behalf.
Weighted against the mother's autonomy were the best interests of her child, too young to give consent to the procedure, but ultimately the one who would live with the physical and psychological consequences. At the time of the transplant, it was unanimously agreed that saving the child's life, whilst potentially transmitting HIV simultaneously, was in the child's best interests. However, we did not expect the ambiguous nature of our HIV test results, which have raised new ethical issues such as the merits of an Antiretroviral Treatment Interruption and factors surrounding disclosure. In the face of these uncertainties, and as we navigate this landscape, we continue to consult widely in order to best manage these ethics quandaries, bearing in mind that the best interests of the child are paramount in our decision-making process.
Whilst not a donor, we invited the father-as well as the mother-to ultimately consent to the procedure, as both parents are responsible for caring for the child into the future. At this stage, that future is uncertain with regard to HIV status and determining the nature of ongoing diagnostic investigations and management.
Because HIV infection can be effectively controlled with ART, related virally suppressed living donors could potentially donate their organs as readily as HIV-uninfected individuals do. If HIV infection emanating from the transplanted organ can be controlled by ongoing provision of ART with good adherence, recipients may remain virally suppressed and thus have a similar quality of life to HIV-uninfected patients. Importantly, it is now well established that HIV infection is not accelerated by immunosuppressive therapy post-transplantation, if ART is appropriately initiated [43]. HCV-positive-to-HCV-negative organ transplants in the context of the provision of DAAs are now conducted in many US centres, highlighting how quickly transplantation options evolve. In our case, the mother was virally suppressed on ART for at least 6 months prior to donation with a CD4 count > 200 cells/mm3, and the recipient was started on ART the evening before transplantation to reduce the risk of acquisition of HIV infection. Both remain on ART.
Several landmark studies suggest that successful ART translates to zero risk of sexual transmission, supporting the U = U (undetectable = untransmittable) dictum [44, 45, 46••]. A donated organ as the source of HIV exposure, albeit from a virally suppressed patient, presents a very different scenario, and we cannot assume U = U in this context. To inform how one might reduce the risk of transfer of HIV with solid organ transplantation, more studies are required to understand HIV persistence and compartmentalisation and the immune environment in different organs of ART-suppressed individuals. Whether a longer course of preventative ART given to the recipient might further reduce the risk of infection is also not known. Furthermore, it is not known how the very different environment once grafted into the HIV-negative recipient host will impact on these same features. Given the diagnostic challenges in this setting (described below), answering these questions requires knowledge of the HIV infection status, which can only be unequivocally determined off ART.
Diagnostic Challenges and Future Research Needs: Controlled HIV-Positive-to-HIV-Negative Organ Transplants
Standard diagnostic tests for determining HIV infection measure HIV-specific antibodies, HIV antigens, plasma HIV RNA or HIV cell-associated DNA. The WDGMC recipient was seropositive at 49 days post-transplant, which was the first time point tested after surgery [15••]. Although these responses have waned subsequently, HIV-specific antibodies remain detectable to date. However, we propose that this response likely represents a maternal memory response to HIV rather than a de novo response produced by the child in response to HIV. Mechanistically, it is possible that recipient responses could be sensitised by passive transfer of HIV antibodies bound to Fcγ receptors on the mother's immune cells, or actively produced by mature maternal liver-resident B cells or B cells transferred with the graft. As antibodies can engage multiple arms of the immune system through their Fc receptors and antigen presentation functions [47], any induction of recipient responses would be expected to depend on a sufficient presence of HIV antigens and sufficient levels of maternal antibodies. In our WDGMC case, it can be argued that both are present in quantities too small to be a major factor in this regard.
Several studies support the transfer of donor-specific responses to recipients. Cases of donor-to-recipient transfer of peanut allergy support donor-specific transfer of allergic responses [48-50]. A study of HCV-negative patients who received kidney allografts from donors who were HCV-antibody positive but nucleic acid negative (Ab+/NAT) found 14 of the 32 patients (44%) seroconverted following transplantation but were HCV RNA negative [51], suggesting the transfer of organism-specific immune responses that are not necessarily associated with acquisition of infection in the recipient. Another recent study highlights the early emergence and donor origin of anti-HCV antibody responses in recipients of HCV-infected organs [52]. Further complexities in interpreting the recipient serological responses have been highlighted by Nel et al. [53] and include possible decreased and delayed response rates and seroreversion as evidenced in response to vaccines and other infections [54-58]. The latter effects are all likely consequences of immunosuppressive treatment.
The development of detectable HIV-specific antibodies upon HIV encounter, ordinarily driven by a sustained period of viremia following acquisition of HIV infection, is compromised by early administration of ART. In adult studies of very early ART (Fiebig 1), the associated lack of detection of virus and of HIV-specific antibodies did not preclude rapid viral rebound when ART was stopped in 8 Thai patients [59], or rebound despite remission of 7.4 months in a single case with a very low peak HIV RNA of 22 copies/ml [60]. A substantial proportion of early-treated HIV-infected children are found seronegative when tested years later [61-63].
These collective findings highlight that one cannot convincingly determine the HIV status of an uninfected recipient receiving an organ allograft from a virally suppressed HIV-positive donor. With very early ART, it is unlikely that the recipient would develop a detectable HIV antibody response. Likewise, lack of detection of HIV RNA or cell-associated HIV DNA with highly sensitive tests cannot guarantee lack of HIV-1 infection, as small numbers of latently infected cells below the level of detection of assays may be present and fuel viral rebound if ART is stopped. How immunosuppressive drugs might impact on viral rebound if the recipient is in fact infected is also unknown and potentially a concern. With ongoing advances in the field of immunomodulation and solid organ transplantation in attempts to overcome complications of immunosuppression and to enhance graft survival [64], such innovations may also be of benefit in the HIV solid organ transplant arena.
The setting of the WDGMC case, and future cases the team plans to undertake as part of an IRB-approved research project (Clearance number M171035) [15••], provides us with an informative and unusual (parenteral) mother-to-child transmission model that can be utilised to address very early events in the HIV exposure-infection continuum in children. This experience has raised many questions concerning the likelihood of acquisition of HIV in the recipient when the organ donor is HIV-infected and virally suppressed, and ART is given to the recipient prior to transplantation. As things currently stand, the only way to definitively establish HIV infection in the recipients of HIV-positive-to-HIV-negative transplantations would be to stop antiretroviral therapy with close monitoring for possible recrudescent infection. However, much remains to be understood concerning this unique setting of HIV exposure/infection and infection risk. The risk and benefit of any such intervention would need to be weighed up very carefully and on a case-by-case basis.
Conclusions
In this review, we have highlighted recent advances in the field of solid organ transplantation and HIV. One of the most notable findings is that controlled solid organ transplant from HIV-positive donors to HIV-negative recipients has been interrogated to a very limited extent. The HOPE Act does not address this type of transplantation and, if amended, may open up additional therapeutic options for transplant in the USA. Clearly, HIV is still highly stigmatised internationally, and this may prevent lawmakers from considering serodiscordant transplants as an option. Once again, South Africa has taken the lead, out of necessity but possibly also because we have been desensitised to HIV to a greater extent than the rest of the world. We make every effort to view HIV as a chronic, manageable disease, and we aim to assist HIV-positive people in accessing healthcare services regardless of their status. It is hoped that this poses a challenge to established preconceptions of HIV which still linger, and which may prevent policy-makers from fully exploring such options. In the context of extreme organ scarcity, this position is not without its merits, because it allows us to save the lives of children who would otherwise die awaiting an organ transplant. As this field expands globally, increasing utilisation of virally suppressed HIV-infected donors could expand the options for patients who are in desperate need of organs.
Compliance with Ethical Standards
Conflict of Interest All authors declare that they have no conflict of interest.
Human and Animal Rights and Informed Consent All reported studies with human subjects performed by the authors have been previously published and complied with all applicable ethical standards (including the Helsinki declaration and its amendments, institutional/national research committee standards, and international/national/institutional guidelines). | 6,332.2 | 2019-09-04T00:00:00.000 | [
"Medicine",
"Biology"
] |
Physical, thermal, morphological, and tensile properties of cornstarch-based films as affected by different plasticizers
ABSTRACT The current research was designed to determine the effect of various concentrations (0%, 25%, 40%, and 55%) of fructose, sorbitol and urea plasticizers in cornstarch-based films, with the aim of obtaining a new polymer for biodegradable material applications. The casting technique was used to prepare the films. The physical, morphological, thermal, and mechanical properties of the produced films were evaluated. The results showed that thickness, moisture content, and water solubility increased with increasing plasticizer concentration, while the glass transition temperatures were only insignificantly affected by high plasticizer content. Regardless of plasticizer type, the tensile stress and Young's modulus of the plasticized films decreased as the plasticizer concentration was raised beyond 25%. Likewise, the relative crystallinity decreased as the plasticizer content increased from 0% to 25%, but began to grow once the concentration rose above 25%. The fructose-plasticized films presented consistent and more coherent surfaces compared to their sorbitol and urea counterparts, which showed less homogeneous surfaces with microcracks. In summary, plasticizer type and concentration significantly affect the properties and performance of cornstarch-based films. The film plasticized with 25% fructose exhibited the finest set of features and achieved the highest mechanical performance among the plasticizers used in this study.
Introduction
The concerns over current environmental issues have forced scientists and engineers to find solutions that ensure a sustainable green environment. Unlike biodegradable wastes, non-biodegradable materials cannot be disposed of easily. Non-biodegradable wastes are those that cannot be decayed or decomposed by natural agents; they last for many years without any degradation. [1] A prominent example is synthetic plastic, a common source of environmental contamination used in almost every area of life. It is estimated that worldwide production of synthetic plastic is about 140 million tons per year, increasing at a rate of 2.2% per year. [2] In this regard, the production of environmentally friendly materials as alternatives to non-biodegradable ones is inevitable. [3-5] Thus, biopolymers such as thermoplastic starches derived from natural sources are among the most promising alternatives to mitigate the above-mentioned issues. Starches are carbohydrate polymers consisting mainly of a combination of the polysaccharide amylose and the branched polysaccharide amylopectin, in ratios that depend on the botanical source of the starch. [6] Starch is the most widely used biopolymer for producing films with high biodegradability because of its ability to act as a linking matrix between fillers. Moreover, it is an available, renewable, and cost-effective source. [7] Corn is the predominant source of commercially available native starches; more than 85% of the world's starch production is extracted from corn. The other minor plant sources of native starch are rice, cassava, potato, and wheat. [8] The individual corn granule contains 70% semi-crystalline starch, and the rest is crude oil, protein, sugar, and ash. [6,9] Despite their multiple advantages such as biodegradability, availability, recyclability and low cost, starch-based materials are also known to have several disadvantages, such as high water sensitivity (hydrophilic character) and lower mechanical performance compared to traditional industrial polymers. [10] Therefore, it is necessary to incorporate reinforcement materials such as plasticizers to enhance the mechanical performance of these biomasses. The primary role of plasticizers is to reduce the strong hydrogen-bond attraction within the amylose and amylopectin molecules of the starch network and to facilitate the mobility of the polymer macromolecular chains; this, in turn, lowers the glass transition temperature and improves the flexibility of starch-based plasticized materials. [11,12] Recently, a considerable literature has grown up on the use of plasticizers to reinforce biopolymer products. For instance, urea [13,14], glycerol [15,16], sorbitol [17,18], xylitol [19], fructose, glucose and sucrose [20,21], as well as tri-ethanolamine and glycol [14,22], have been used as plasticizers of biodegradable films. This research, therefore, sets out to assess the effect of different plasticizers, namely fructose (F), sorbitol (S) and urea (U), at concentrations of 0%, 25%, 40%, and 55%, on the physical, morphological, and thermal characteristics of cornstarch (CS)-based films, as well as on their mechanical performance.
Materials
CS was isolated from granules of fresh corn ear based on the method described by [23]. The CS chemical composition was analyzed in dry form following the procedures reported in [24]. The CS contained 0.26% ash, 7.13% lipid, and 7.7% crude protein. The amylose and amylopectin content of the starch was found to be 24.64 g/100g and 75.36 g/100g, respectively, as measured using the standard in-house method (food analytical chemistry) described in previous work. [25] The particles of CS varied from polyhedral to spherical in shape, and the majority of the distributed particles (89%) had sizes less than 40 μm, whereas the starch moisture content was 10.45 g/100g and the density was 1.4029 g/cm3. The fructose, sorbitol, and urea plasticizers were supplied by Evergreen Engineering & Resources SDN-BHD, Malaysia. According to previous work, [26] almost all plasticizers are compatible with polymers. In practice, the type of plasticizer to be used with polymers is usually selected experimentally through trial and error. Selection criteria depend on the required thermal and mechanical properties as well as the barrier and rheological properties. However, the rationale for using fructose, sorbitol, and urea for plasticizing starch-based films was indicated in the introduction section.
Preparation of films
The cornstarch-based films were manufactured by a conventional casting procedure. An aqueous solution of 100 ml distilled water containing 5 g of pure CS was heated to 85 ± 2°C for 20 min with constant stirring in a thermal water bath. The objective of this step was to provide a homogeneous film-forming suspension. Afterward, the three types of plasticizers were added individually to the forming solution at concentrations of 0%, 25%, 40%, or 55% (w/w, dry starch basis). Heating of the solution continued for an additional 20 min at the same temperature. After that, the slurry was left to cool before being cast in a thermal casting dish. The cast mixture was then dehydrated in an air-circulation oven at 45°C for 24 h. The dehydrated films were peeled off from the casting dishes and kept in plastic bags at room temperature for a week before the characterization processes. Films produced according to plasticizer type and concentration were coded as follows: F25%, F40%, F55% for fructose; S25%, S40%, S55% for sorbitol; U25%, U40% and U55% for urea; and CS for the non-plasticized CS film (control).
Characterization
Film thickness and density
An electronic caliper (Mitutoyo-Co, Japan) with ± 0.001-inch accuracy was used to measure the film thickness. The average value of five random measurements for each film was taken as the film thickness. The density of the films was determined directly from their volume (v) and weight (m), where the volume of each film was computed from the suggested film dimensions (10 mm x 30 mm) multiplied by the thickness obtained in the previous step. Therefore, the film density (ρ) was calculated via the following equation: ρ = m/v.
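As a minimal illustration of this calculation, the sketch below (in Python, not part of the original work) computes the density of a rectangular film strip from its mass, measured thickness, and the 10 mm x 30 mm strip dimensions; the function name and the sample values are hypothetical and only reproduce the arithmetic described above.

```python
def film_density(mass_g, thickness_mm, width_mm=10.0, length_mm=30.0):
    """Density (g/cm^3) of a rectangular film strip from its mass and dimensions."""
    volume_cm3 = (thickness_mm / 10.0) * (width_mm / 10.0) * (length_mm / 10.0)
    return mass_g / volume_cm3

# Hypothetical example: a 0.25 mm thick strip weighing 0.105 g
print(round(film_density(0.105, 0.25), 3))  # ~1.4 g/cm^3
```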
Film moisture content (MC)
The moisture content of a material is defined as the amount of water that can be removed from the material without changing its chemical composition, expressed in relation to the weight of the material. [27] The MC of the films was measured according to the method introduced by [28]. A film of known weight was kept in the oven for 24 h at 105ºC. The weight difference before (w1) and after (w2) heating was used to obtain the MC of each sample, expressed as a percentage or in gram/100g.
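A minimal sketch of the gravimetric moisture-content calculation is given below, assuming the common loss-on-drying definition relative to the initial sample weight; the cited method [28] may differ in detail, so this is illustrative only and the sample weights are hypothetical.

```python
def moisture_content(w1_before_g, w2_after_g):
    """Moisture content (%): weight lost on drying at 105 C relative to the initial weight."""
    return (w1_before_g - w2_after_g) / w1_before_g * 100.0

# Hypothetical example: a 2.000 g film sample drying to 1.791 g
print(round(moisture_content(2.000, 1.791), 2))  # 10.45 %
```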
Film water solubility (WS)
WS is the amount of a substance that can dissolve in water at a certain temperature, preventing the loss of plasticizer. [15] Three samples of each film (1 x 3 mm2) were placed in a lab oven for 24 h at 105°C and then weighed as dry matter (w_initial). The dried samples were soaked in a sealed beaker containing 50 ml distilled water for 24 h at ambient temperature. Afterward, the residues of the film were removed from the beaker, dried for 24 h at 105°C and then reweighed as dry matter (w_final). The WS of each film was measured by: WS (%) = [(w_initial - w_final)/w_initial] x 100.
Thermal gravimetric analyzer (TGA)
An analyzer (Q500 V20.13 Build 39) was used to perform the TGA and DTG tests. A film of 10 mm2 was placed in platinum crucibles under a nitrogen atmosphere and heated from ambient temperature to 450°C at a constant rate of 10°C/min. This method of thermal analysis evaluates the thermal stability of the samples and measures the mass loss over time as a function of temperature.
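Because the DTG signal is simply the temperature derivative of the TGA mass curve, the short sketch below shows how such a derivative could be obtained from raw thermobalance data; the data values are hypothetical and the numerical-differentiation approach is an assumption, not the instrument's built-in routine.

```python
import numpy as np

def dtg_curve(temperature_C, mass_mg):
    """Derivative thermogravimetric (DTG) signal: d(mass)/dT from TGA data."""
    temperature_C = np.asarray(temperature_C, dtype=float)
    mass_mg = np.asarray(mass_mg, dtype=float)
    return np.gradient(mass_mg, temperature_C)

# Hypothetical three-point check around a degradation step (negative values = mass loss)
print(dtg_curve([280.0, 290.0, 300.0], [8.0, 6.5, 5.5]))
```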
Scanning electron microscopy (SEM)
A Hitachi S-3400N instrument was used to investigate the surface morphological structure of the samples. Each sample was coated with a thin gold layer before applying an acceleration voltage of 20 kV under high vacuum. High-resolution images at different magnification factors were obtained from this test.
Fourier transform infrared spectroscopy (FTIR)
The FT-IR test was performed with an infrared spectrometer (Bruker Vector 22). The spectral range was set between 4000 and 400 cm−1 at a spectral resolution of 4 cm−1. The specimens were prepared by the KBr-disk technique.
X-ray diffraction (XRD)
The X-ray diffraction analysis was conducted using a 2500 X-ray diffractometer (Rigaku, Tokyo, Japan). The device was operated at a scanning speed of 0.02 (θ) s−1 within the 5° to 60° (2θ) angular range, under an operating voltage and current of 40 kV and 35 mA, respectively. The crystallinity index (Ci) was calculated from the crystalline area (Ac) and the amorphous area (Aa) in the diffractogram using the equation: Ci (%) = Ac/(Ac + Aa) x 100.
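A hedged sketch of how this crystallinity index could be evaluated numerically from a diffractogram follows; it assumes the crystalline and amorphous contributions have already been separated (for example by baseline/peak fitting, which is not detailed in the text) and simply integrates the two areas with the trapezoidal rule.

```python
import numpy as np

def crystallinity_index(two_theta_deg, crystalline_intensity, amorphous_intensity):
    """Ci (%) = Ac / (Ac + Aa) * 100, with both areas integrated over the 2-theta range."""
    ac = np.trapz(crystalline_intensity, two_theta_deg)
    aa = np.trapz(amorphous_intensity, two_theta_deg)
    return ac / (ac + aa) * 100.0

# Toy check: equal crystalline and amorphous areas give Ci = 50 %
tt = np.linspace(5, 60, 5)
print(round(crystallinity_index(tt, np.ones(5), np.ones(5)), 1))
```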
Tensile properties
The mechanical performance of the films was obtained using a 5 kN INSTRON tensile machine following the ASTM D882 (2002) standard. The preparation of the film samples and the settings of the tensile machine were carried out according to the methods reported in [29,30]. A film strip of 10 × 70 mm was fixed between the tensile grips and then pulled at a crosshead speed of 2 mm/min with a grip separation of 30 mm. Variation of the crosshead speed between 0.5 and 10 mm/min does not appear to affect the tensile properties of composite materials. [31] Five replicates were tested for each specimen to determine the tensile strength and elastic modulus.
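To make the reported quantities concrete, the sketch below shows how tensile stress and Young's modulus could be derived from raw force-extension data for a 10 mm wide strip at 30 mm grip separation; the film thickness, the linear-fit window and the numerical values are assumptions chosen only for illustration.

```python
import numpy as np

def tensile_stress_MPa(force_N, width_mm=10.0, thickness_mm=0.25):
    """Engineering stress: force divided by the initial cross-sectional area (N/mm^2 = MPa)."""
    return np.asarray(force_N) / (width_mm * thickness_mm)

def youngs_modulus_MPa(stress_MPa, strain, linear_limit=0.01):
    """Slope of the initial (assumed linear) region of the stress-strain curve."""
    stress_MPa, strain = map(np.asarray, (stress_MPa, strain))
    mask = strain <= linear_limit
    slope, _ = np.polyfit(strain[mask], stress_MPa[mask], 1)
    return slope

# Hypothetical example: 17 N at yield on a 10 mm x 0.25 mm strip -> 6.8 MPa
print(tensile_stress_MPa(17.0))
```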
Statistical analyses
The statistical analyses of the experimental data were carried out using Microsoft Excel 2016, and the resulting curves and graphs were plotted using Tecplot 9.0 software.
Thickness and density
From the data in Table 1, it was found that the thickness of the CS-based films increased insignificantly with increasing plasticizer concentration. Hence, the films containing 40% and 55% plasticizer were thicker than films containing 25% plasticizer. These observations may relate to the plasticizer's role in restructuring the molecular chains of the polymer film, where a higher plasticizer content generates more free space, resulting in a thicker film. A similar explanation of the effect of plasticizers on film thickness was reported in [15,32-35]. Regarding film density, the type and concentration of plasticizer had different influences. Increasing the fructose concentration decreased the density of the starch film from 1.559 to 0.988 g/cm3, which is consistent with the results of Edhirej et al. [29], who studied the effect of fructose on cassava starch films. Meanwhile, increasing the sorbitol and urea concentrations significantly increased the density of the films. The film plasticized with F55% recorded the lowest density (0.988 g/cm3), while the highest density (5.454 g/cm3) was obtained for the U55%-plasticized film.
Moisture content and water solubility
Regardless of plasticizer type, increasing the plasticizer concentration from 25% to 40% and then to 55% increases, to some extent, the water retention of the plasticized films, which depends on the water-holding capacity of the plasticizers. [36] However, the effect of urea on the moisture content of the CS-films was more noticeable than that of the fructose- and sorbitol-plasticized films. The difference in water retention of the plasticized films was attributed to the degree of resemblance between the plasticizer's molecular structure and glucose units, where a lower resemblance causes a weaker molecular interaction between the plasticizer and the biopolymer chains. Consequently, the chance for the plasticizer to interact with water molecules becomes greater. [37] The water solubility of a starch-based film is a key property for applications that may require water insolubility to enhance water resistance as well as product integrity. [38-40] Due to the hydrophilic nature of the plasticizers, both the type and concentration of plasticizer showed a strong influence on the water solubility of the CS-films. The addition of plasticizers reduces the interactions between polymer molecules and creates greater free space in the polymer chain structure; this, in turn, allows water fragments to penetrate the film matrix, thus maximizing the solubility of the plasticized films. [29,37]
Thermal gravimetric analyzer (TGA)
CS-plasticized films were examined thermogravimetrically to compare their thermal stability and decomposition characteristics at different concentrations. TGA is an essential analysis used to investigate the degradation and thermal behavior of starch-based materials from both industrial and scientific perspectives for biocomposite development. [41-43] Figure 1 presents the thermogravimetric (TGA) and derivative thermogravimetric (DTG) curves of the plasticized samples. All curves displayed a similar trend, showing that the thermal degradation and mass loss of the CS-based films occurred in three distinct phases, each phase associated with a peak in the DTG curve that represents a particular event during heating. The first thermal event, at temperatures below 100°C, caused the initial mass loss; this was mainly due to the elimination of moisture and water fragments by evaporation. [44,45] Accordingly, the F and U films showed a significant weight loss compared to the S films. This observation is likely attributable to the substantial moisture content of the F and U films, as shown in Table 1. Thus, the higher the moisture content, the higher the mass loss in this phase. Further heating led to the second weight loss at temperatures between 150°C and 250°C. This loss in mass was mostly ascribed to the volatilization of plasticizer molecules together with remaining water. Similar findings were previously reported in [16,46]; the authors stated that most plasticizers begin to evaporate at 150°C. The last mass loss of the film degradation occurred at temperatures higher than 270°C. The weight loss in this phase was related to the depolymerization and degradation of the carbon chains in the starch structure. It is well known that the thermal reactions of starches usually begin at about 300°C. [47,48]
From the data in Table 2, all films exhibited similar decomposition temperatures, varying between 277°C and 296°C, close to the degradation temperature of the native starch film (control), which recorded its maximum decomposition at 287°C. This is evidence that the incorporation of plasticizers did not change the thermal stability of the CS-based films. Such results are consistent with the data obtained in [49], where the authors also studied the effect of different plasticizers (glycerol, sorbitol, and polyvinyl alcohol) on cornstarch-based films and reported that the degradation temperature of the films ranged from 290.9°C to 295.4°C. However, the plasticizer concentration significantly affected the thermal degradation of the films, which is reflected in the amount of residue after the last decomposition, as revealed in Table 2. For instance, the percentage of mass residue decreased with the addition of sorbitol as well as with the addition of urea, while it increased with the addition of fructose as compared to the control film (24%). This confirms that the F-plasticized films are more thermally stable than the S- and U-plasticized films at temperatures above 300°C. These observations are in line with those of a previous study [29] in which the researchers investigated the effect of various plasticizers (fructose, urea, tri-ethylene glycol, and triethanolamine) on cassava-based films and stated that fructose films achieved the best thermal stability. Figure 2 provides the SEM images of the non-plasticized (control) film along with the plasticized films. The micrograph of the non-plasticized control film showed a uniform and relatively smooth surface with the presence of some non-dissolved starch particles, reflecting the morphological structure of CS. The addition of the different plasticizers at 25% to the pure starch film caused a little disturbance on the film surfaces, caused by the high temperature and continuous stirring during preparation. The addition of F25% gave a relatively smooth and coherent surface with no pores, while the surfaces of S25% and U25% were coarse and covered with some impurities and agglomerates of unmelted starch. The films plasticized with 40% and 55% of U and S exhibited less consistent surfaces with large porosity, and microcracks were observed in the structure, unlike the F-film counterparts, which revealed better surface integrity compared to the S and U films at the same concentration. However, an appropriate plasticizer concentration in starch-based films supports the dissolution of the whole starch molecules and improves the structural integrity and coherence of the film surface. [50] Accordingly, the highest proportion of plasticizer used in the manufacturing of starch films is 60% (w/w dry basis); when this limit is exceeded, the produced film appears weak, incoherent and hard to peel off from the casting container. In contrast, films prepared with less than 15% (w/w dry basis) plasticizer content appear brittle, sticky, and difficult to remove from the casting container. Consequently, the evaluation of their properties becomes impossible. These observations are in good accord with those mentioned in previous literature. [15,29] For all the plasticizer concentrations in the current study, the F-plasticized films proved to be rather smooth, coherent and more homogeneous.
Fourier transform infrared (FT-IR) spectroscopy
To investigate the potential interactions between CS and the various plasticizers, the FT-IR spectrum was divided into four main regions as follows: the first region below 800 cm−1, the second region between 800 and 1500 cm−1, the third region from 2800 to 3000 cm−1, and the last region above 3000 cm−1. [51] As seen in Figure 3, the FTIR spectra were similar because the elemental composition of the plasticized films was based on the starch structure. The broad overlapping peaks in the first and second regions at 758.02 cm−1 and 933.92 cm−1 for the CS-control film were attributed to the vibrations of the glucose pyranose units and the C-O vibrational stretching of the glucose unit, respectively. [51] Likewise, the C-O-H group in the CS structure gave rise to the peak at 1076.33 cm−1, whereas the coupling of the C-C and C-O stretching modes caused the appearance of the sharp band at 1149.01 cm−1. In the same area, the bending mode of CH2 produced a peak at 1367.04 cm−1. [52] The oscillations of water fragments led to the emergence of the wide infrared band at 1636.44 cm−1. [53,54] Similarly, C-H vibrational stretching resulted in a sharp peak at 2925.27 cm−1 in the region between 2800 and 3000 cm−1. Last, in the fourth region, the O-H group vibrational stretching generated an intense band at 3266.20 cm−1. [55] As a result of plasticizer addition, it was noticed that the FT-IR spectra of the F and S films presented peaks identical to those of the control film. The only difference was that the sharp peaks resulting from stretching of the O-H group were shifted slightly to lower wavenumbers for both the F and S films, while in the case of the U films these peaks shifted to higher wavenumbers than in the control film. The authors of earlier studies [56,57] have attributed this behavior of U films to their substantial moisture content, which affected the hydroxyl groups in the starch. Furthermore, the U films showed additional double peaks in the high-wavenumber area rather than the single peak seen for the CS film and the F and S films. The prominence of these peaks was ascribed to the stability of hydrogen bonds with the O atoms of both the O-C glucose ring and the C-O-H group within the starch structure. [4] The results mentioned above demonstrated that all films displayed absorption peaks within the same regions regardless of plasticizer type and content; this indicates that the plasticizers have similar functional groups and that they are all categorized as polyols. [37] Moreover, the FTIR analysis showed that the addition of plasticizers to the CS-films did not alter the chemical composition of CS, indicating that no chemical reaction occurred and that the chemical structure of the resulting CS-films was stable.
X-ray diffraction (XRD)
The X-ray diffraction patterns of the CS-based samples are presented in Figure 4. The results indicated that the majority of the CS particles had been gelatinized and retrograded, and thus the crystalline framework of the starch had collapsed. The retrograded CS-film showed diffraction peaks located at 15.14°, 17.4°, 18.6°, 20.11°, and 22.8°; such peaks are in good agreement with those presented by [58]. However, the plasticizer concentration, regardless of the plasticizer type, affected the crystalline structure of the films. The F-plasticized films showed a pattern similar to the control film, except that the intensity of the diffraction peak at 9.5° increased gradually with increasing fructose concentration from 0% to 55%. Meanwhile, the X-ray diffraction pattern of the S-plasticized films revealed a slight increase in the intensity of the peak at 9.4° when the concentration was raised from 0% to 25%, but the intensities of the peaks at 14.4°, 17.4°, and 25.4° became more pronounced on increasing the sorbitol concentration from 25% to 55%. Besides, the degree of crystallinity of the CS-films is strongly affected by the sorbitol concentration (0-55%), which is reflected in their sharp and clearly defined peaks associated with the crystalline regions. Based on that, the S40% and S55%-plasticized films were classified as highly crystalline films compared to their S25% counterpart, which displays lesser crystallinity. For the U-plasticized films, the XRD pattern shows sharp peaks between 2θ diffraction angles of 20° and 30°, which do not exist in the control film. The appearance of these peaks was attributed to the typical C-type diffraction pattern. [59] In addition, Table 3 presents the degree of crystallinity of the samples. A significant reduction in the relative crystallinity of all films was noticed once the plasticizer concentration rose to 25%, while a further increase in the concentration (25% to 55%) led to an increasing degree of crystallinity. However, the comparison of the plasticized films demonstrated that the S-films possessed a higher degree of crystallinity than their U- and F-film counterparts. According to [60-62], the increase in crystallinity of starch-based films is strongly associated with a reduced moisture content of the film. Therefore, the increase in crystallinity of the S-plasticized films observed here (Table 3) is compatible with the low moisture content of the S-films obtained in this study.
Mechanical properties
This test was conducted to measure, in particular, the tensile stress at yield and Young's modulus of the CS-based films plasticized with different plasticizer types and concentrations. From the findings, it is clear that the tensile strength of the tested films decreased as the plasticizer concentration increased from 25% to 55%, irrespective of plasticizer type. This conclusion is entirely consistent with previous studies. [12,29] The film with 25% fructose recorded the highest tensile stress (6.8 MPa), which is higher than that recorded for its 25% sorbitol (4.52 MPa) and 25% urea (0.62 MPa) counterparts. The expected interpretation of the high tensile stress at low plasticizer content relates to the hydrogen bonds formed between starch and plasticizer molecules; these bonds strongly dominate at lower plasticizer content and become weaker as the content increases. [63,64] Therefore, the tensile strength of the F-plasticized films was reduced from 6.8 to 3.8 MPa, and that of the S-plasticized films decreased from 4.52 to 3.04 MPa, as the plasticizer percentage increased to 40% and then to 55%. Meanwhile, the U-plasticized films recorded the lowest values of tensile stress; the observed reduction was from 0.62 to 0.0448 MPa over the same range of plasticizer percentage. Many authors have observed a reduction in the tensile strength of starch-based films due to increased plasticizer concentration. [63,65-69] The tensile strength of the fructose films in the current study was higher than that found in [29], where the researchers fabricated plasticized films by mixing cassava starch with 30% fructose and achieved 4.7 MPa. Similarly, the same authors incorporated cassava starch with 30% urea and obtained 0.68 MPa, which is close to the current study's finding. In general, the tensile strength values for the CS-fructose-plasticized films were higher than the previously reported values for cornstarch films plasticized with glycerol [70], cornstarch with stearic acid and glycerol [71], and cornstarch with xylitol and glycerol. [72] On the other hand, the tensile strength values for the CS films in this study were lower than those of films made of oxidized and acetylated cornstarch and glycerol [73], and of plasticized cornstarch in the presence of glycerol, sorbitol, or PVA. [49] The elastic modulus (Young's modulus) determines the stiffness of materials, with a higher elastic modulus indicating greater stiffness. From Figure 5, it can be noticed that the effect of plasticizer content (25%-55%) on Young's modulus of the CS-plasticized films follows the same behavior as the corresponding tensile stress. Increasing the plasticizer concentration from 25% to 55% produced a noticeable decrease in film stiffness: from 61.15 to 29.91 MPa for the F-plasticized films, from 32.49 to 15.91 MPa for the S-plasticized films and from 1.67 to 0.363 MPa for the U-plasticized films. This behavior can be explained through the role of plasticizers in modifying the structure of the starch network. When plasticizers integrate into the starch chains, they promote the development of hydrogen bonds between the molecules and weaken the solid intramolecular attraction within the starch matrix. Thus, Young's modulus of the CS-plasticized films is reduced, and the films become less rigid.
[12,29] In short, the mechanical performance of biopolymers based on thermoplastic starches is strongly affected by several parameters such as the botanical origin of starch (amylose/amylopectin ratio), the ambient circumstances (temperature and humidity), processing method as well as plasticizer type and concentration. [74,75]
Conclusion
The effect of different plasticizer types and concentrations on the physical, morphological, mechanical and thermal characteristics of cornstarch-based films was investigated. The results showed that the gradual loading of plasticizer (25%, 40%, 55%) affected the performance of the films according to the plasticizer type. Increasing the plasticizer concentration increased the thickness, moisture content and water solubility of all films, irrespective of the plasticizer type. However, the sorbitol-plasticized films showed lower moisture content than the fructose- and urea-plasticized films, which allowed the sorbitol films to achieve the highest degree of relative crystallinity. On the other hand, the fructose films recorded lower film thickness and density compared to the S- and U-plasticized films. Therefore, the fructose-plasticized films presented the best performance in terms of physical properties. The study of the morphological structure showed that the fructose-plasticized films provided a homogeneous structure without porosity, which is considerable evidence of their network integrity. Based on that, the fructose films achieved the best tensile strength and elastic modulus, recording 6.8 and 61.15 MPa, respectively, for the 25% fructose film. In terms of thermal stability, the transition temperatures of all films decreased with increasing plasticizer concentration and moisture content. The mass residue above 350°C increased with fructose addition from 24.02% to 29.63%, while it decreased from 24.02% to 16.06% with the addition of sorbitol and from 24.02% to 20.33% with the addition of urea. Hence, the fructose plasticizer evidently enhanced the films' thermal stability. Overall, the influence of the selected plasticizers at different concentrations on CS-based films was verified. The F-plasticized films, particularly the 25% fructose film, offered the best combination of characteristics, giving them the potential to be used for the application and development of biopolymer films.
"Materials Science"
] |
Fabrication of Erbium-Doped Upconversion Nanoparticles and Carbon Quantum Dots for Efficient Perovskite Solar Cells
Upconversion nanoparticles (UCNPs) and carbon quantum dots (CQDs) have emerged as promising candidates for enhancing both the stability and efficiency of perovskite solar cells (PSCs). Their rising prominence is attributed to their dual capabilities: they effectively passivate the surfaces of perovskite-sensitive materials while simultaneously serving as efficient spectrum converters for sunlight. In this work, we synthesized UCNPs doped with erbium ions as down/upconverting ions for ultraviolet (UV) and near-infrared (NIR) light harvesting. Various percentages of the synthesized UCNPs were integrated into the mesoporous layers of PSCs. The best photovoltaic performance was achieved by a PSC device with 30% UCNPs doped in the mesoporous layer, with PCE = 16.22% and a fill factor (FF) of 74%. In addition, the champion PSCs doped with 30% UCNPs were then passivated with carbon quantum dots at different spin coating speeds to improve their photovoltaic performance. When compared to the pristine PSCs, a fabricated PSC device with 30% UCNPs passivated with CQDs at a spin coating speed of 3000 rpm showed improved power conversion efficiency (PCE), from 16.65% to 18.15%; a higher photocurrent, from 20.44 mA/cm2 to 22.25 mA/cm2; and a superior fill factor (FF) of 76%. Furthermore, the PSCs integrated with UCNPs and CQDs showed better stability than the pristine devices. These findings clear the way for the development of effective PSCs for use in renewable energy applications.
Introduction
Solar power is solar energy that has been captured, transformed, and used for electrical and thermal energy. The two major ways that solar energy is used are photovoltaic and photothermal. One of these, photovoltaic, directly generates electricity from solar energy using solar cells [1,2]. Crystalline silicon photovoltaics account for 95% of the market share and are the most widely used variety. Mono-Si and Multi-Si exhibit minimal toxicity and have profound industrial applications that were built along with the development of the electronic industry, while c-Si solar cells have a high operating efficiency of about 26.7% [3]. The chemistry required to purify, reduce, and crystallize pure silicon from sand, which is extremely energy-demanding and environmentally damaging, is the only thing holding crystalline Si back from becoming the perfect PV material [4]. GaAs solar cells, which were developed from second-generation solar cells, are an extremely efficient technology. Their efficiency has exceeded 30%, but they are far too costly for use in large-area terrestrial applications [5].
The cost reduction of modules from first- to second-generation solar cells is a positive move. Still, prices have not been sufficiently affordable for major commercial acceptance by clients due to the inherent difficulties of such devices. Due to their cost-competitiveness and streamlined manufacturing processes, third-generation solar cells have gained increased attention. Among the top contenders, with a significant increase in efficiency, are the recently researched perovskite materials, which appear to be appealing alternatives because of their relatively low prices and high efficiency [6]. These perovskites' important characteristics are their simplicity of manufacturing, high solar absorption, and low nonradiative carrier recombination rates for such easily synthesized materials [7,8]. Because of the exceptional properties of perovskite materials, solar cells made from perovskite (PSCs) have achieved spectacular, unparalleled breakthroughs, achieving over 26% power conversion efficiency in the last ten years [9].
Organic-inorganic halide perovskites, with the chemical formula ABX3, consist of Cs or organic ammonium ions (A), such as methylammonium (CH3NH3+) or formamidinium (HC(NH2)2+); divalent metal cations (B), like Pb2+ or Sn2+; and halide ions (X), including I−, Br−, or Cl−. The combination of the A-site Cs or organic ammonium ions and X-site halogens forms the foundation of the most stable and influential halide perovskite solar absorbers, resulting in high power conversion efficiency (PCE) due to a suitable bandgap for significant photo-harvesting capabilities, extended charge carrier lifetime (τ), and high mobility (µ) [6,7,9-11]. Perovskite solar cells are exceptional in terms of performance; yet, they have several disadvantages that have delayed their commercialization, owing to their sensitivity to oxygen, moisture, temperature, and UV light. Due to PSCs' narrow absorption cross-section, only the visible part of the solar spectrum can be efficiently absorbed, which limits the efficiency of the solar cells [6].
Studies have been conducted to increase the PCE and stability of PSCs, either by improving the perovskite active layer itself through advanced techniques such as laser-assisted crystallization [12,13] or by adding light-gathering materials to PSCs in order to make the most of the available sunlight [14,15]. For example, upconversion nanoparticles (UCNPs) are one of the most straightforward ways to include NIR-active elements in PSCs. Because of their capacity to create a single high-energy photon from two or more low-energy photons, UCNPs are a natural choice for harvesting the NIR region. When exposed to NIR light, erbium (Er3+)-doped UCNPs emit at green and red wavelengths that are efficiently absorbed by the photoactive perovskite material [16-18]. Furthermore, UCNPs acting as scattering centers can lengthen the optical path and promote the growth of larger, less defective perovskite grains, which helps to improve photovoltaic performance [15,19]. Additionally, several studies have doped the electron/hole transport layer with a variety of substances in order to enhance its electrical properties [20-22]. Giordano et al. demonstrated that Li doping speeds up electron transport inside mesoporous TiO2 electrodes and showed that PSCs built on such electrodes exhibit superior performance compared to undoped electrodes, increasing PCEs from 17.0% to 19.3% [22].
Recent studies have indicated that passivating the grain boundaries of the perovskite layer with CQDs leads to notable enhancements in PSC efficiency and environmental stability. Studies have shown a remarkable increase in the photon conversion efficiency of PSCs when incorporating CQDs into the appropriate layers of the devices. For instance, the efficiency of PSCs based on CQD-modified perovskite films improved from 17.59% to 19.38% [23-25]. This enhancement is attributed, in part, to the hydrophobic nature of CQD molecules, which effectively shield the perovskite layer from moisture intrusion. Remarkably, even after four months of storage in a non-humidity-controlled environment, a CQD-modified perovskite retained its original black hue, underscoring its exceptional long-term stability [23].
In this study, we introduce a straightforward and highly efficient method for synthesizing lanthanide-doped lithium yttrium fluoride (LiYF4) crystals and seamlessly integrating them into the mesoporous layers of perovskite solar cells (PSCs). To examine the impact of incorporating upconversion nanoparticles (UCNPs) into PSCs, we assembled fully functional PSC devices and carefully evaluated their performance. The photovoltaic performance of PSCs incorporating the synthesized UCNPs exhibited an excellent improvement. Compared to control PSCs without UCNPs, they showed a notable increase in power conversion efficiency (PCE), to 16.22%, and a fill factor (FF) of 74%.
Furthermore, we explored the optimization of PSCs doped with 30% UCNPs by passivating them with carbon quantum dots (CQDs) at varying spin coating speeds. This optimization strategy led to significant enhancements in photovoltaic performance. For instance, compared to pristine PSCs, a fabricated device incorporating 30% UCNPs that was passivated with CQDs at a spin coating speed of 3000 rpm demonstrated an improved PCE, from 16.65% to 18.15%; a higher photocurrent, from 20.44 mA/cm2 to 22.25 mA/cm2; and an elevated fill factor (FF) of 76%. These results underscore the potential of a straightforward and effective approach utilizing UCNPs to augment the photovoltaic performance of PSCs, promising substantial advancements in renewable energy technologies.
Results and Discussion
Experimentally, UCNPs with core and core-shell structures were hydrothermally synthesized following a procedure previously reported in [26] and detailed in the Materials and Methods. To visualize the size and shape of the produced core and core-shell UCNPs, a few drops of each sample were deposited on a carbon-coated copper grid and imaged with a transmission electron microscope (TEM). Figure 1a shows well-dispersed core UCNPs with an average size of 18 nm. The particle size of the synthesized core UCNPs was then confirmed to be 18 nm using dynamic light scattering (DLS), as illustrated in Figure 1b. Figure 1c shows the uniform structure of the synthesized core-shell UCNP nanocrystals, with an average size of 25 nm. The average size of the core-shell UCNPs was also confirmed with DLS to be 25 nm, as shown in Figure 1d.
The primary objective of employing core-shell-structured upconversion nanoparticles (UCNPs) in this research was to mitigate surface quenching effects. These effects arose from the interaction between the solvent ligands, specifically hydroxyl (OH) groups, and the upconverting ion, erbium (Er3+), used in this study. This interaction led to an undesirable energy transfer from the intermediate state of Er3+ to the solvent ligands, which could significantly dampen luminescence efficiency [26-28]. By strategically designing an inert shell around the core of the UCNPs, it was anticipated that there would be a substantial enhancement of visible upconversion (UC) luminescence when these nanoparticles were excited by infrared light. This enhancement was crucial for improving the optical performance of the perovskite solar cells. The addition of the inert shell effectively isolated the upconverting ions from the external environment, thereby maximizing the emission intensity and stability of the UCNPs.
Next, the synthesized core-shell UCNPs were incorporated into the mesoporous layers of the perovskite solar cells (PSCs) using various mixing ratios with titanium dioxide (TiO2), as illustrated in Figure 2a. This step was crucial for converting the near-infrared (NIR) solar spectrum into visible light, which the perovskite active layer was capable of absorbing. Achieving an optimal alignment between the emission spectrum of the upconverting rare-earth ion and the light-harvesting absorption band of the perovskite was essential for this process. Erbium (Er3+), in particular, has shown considerable promise due to its ability to emit intense red and green light, as recorded from the synthesized core-only UCNPs (18 nm). These emissions effectively matched the visible absorption spectrum of the perovskite active layer, enhancing the overall light absorption capabilities of the PSCs, as depicted in Figure 2b. This alignment was pivotal for maximizing the efficiency of the light conversion within the solar cells [29]. The overlap of the PSC absorption bands with the UCNP emissions (Figure 2b) demonstrates how the absorption bands of the PSCs align with the emission peaks of the erbium-doped UCNPs (this UCNP emission was recorded from the core-only UCNPs (18 nm) and plotted for illustration purposes, while the optical emission from the core-shell UCNPs used in PSC fabrication will be shown later in this study). Specifically, the green emission peak at 550 nm and the red emission peak from 650 to around 680 nm from the UCNPs correspond closely with the spectral sensitivity regions of the perovskite layers in the solar cells. This alignment ensures that the light emitted by the UCNPs is effectively absorbed by the PSCs, optimizing the overall conversion of solar energy to electricity.
To optically investigate the upconverted light emission from the UCNPs and the emission of the perovskite light-harvesting layer, we fabricated a PSC device without gold electrodes to allow for light transmission. In this device, UCNPs with a core-shell structure and an average size of 25 nm were mixed with TiO2 nanoparticles in the mesoporous layer at a ratio of 30%:70% (volume ratio), as this mixing ratio was found to give the highest PSC photovoltaic performance, which will be discussed later in this study. To optically characterize the fabricated layers, we designed and built a custom-made confocal laser-scanning microscope equipped with continuous-wave (CW) 532 nm (green) and 980 nm (NIR) lasers, an optical spectrometer, and a single-photon counter, as illustrated in Figure 3a. The PSC device was placed on the optical setup and irradiated with the 980 nm laser on both sides. As shown in Figure 3b, the optical emission of the synthesized core-shell UCNPs (with an average size of 25 nm) was collected by the optical spectrometer and consisted of two bands in the green region and one band in the red region. Optical luminescence in UCNPs is a consequence of several consecutive transfers of energy between the activator (Er3+) and the sensitizer (Yb3+). First, Yb3+ absorbs an NIR photon and is excited to its 2F5/2 state, since Yb3+ exhibits a significant absorption cross-section at 950-1000 nm; the energy of the excited Yb3+ is then transferred to Er3+, promoting it to the semi-resonant metastable 4I11/2 level. A second NIR photon absorbed by Yb3+ boosts Er3+ to higher excited states, because the semi-resonant metastable 4I11/2 level of Er3+ has a long, millisecond-scale lifetime. Relaxation from the 2H11/2, 4S3/2, and 4F9/2 excited states of Er3+ to the 4I15/2 ground state, after several nonradiative relaxations, generates strong and sharp emission lines with peaks at 527 nm, 553 nm, and 650-680 nm [29,30].
The optical emission from the perovskite material was investigated under green (532 nm) excitation. The photoluminescence of the perovskite film peaked at 780 nm with 30% doped UCNPs within the mesoporous layer, as shown in Figure 3c. Optical emission at 780 nm from perovskite materials occurs through a series of processes starting with the absorption of photons, which excite electrons from the valence band to the conduction band, creating excitons. These excitons undergo radiative recombination, releasing photons if the material's band gap aligns with the energy corresponding to a wavelength of 780 nm. By tailoring the composition of the perovskite, such as adjusting the halide components or metal cations, the band gap can be optimized to specifically enhance emission at this near-infrared wavelength. This capability allows for precise control over the emission properties, making perovskites highly suitable for applications requiring emissions at specific wavelengths, like optical devices and advanced solar cells.
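As a quick cross-check of the wavelengths quoted above, the photon energies involved in the upconversion step and in the perovskite photoluminescence can be computed with the standard E = hc/λ relation. The short Python sketch below uses only values stated in the text; the 665 nm entry is simply the mid-point of the reported 650-680 nm red band, taken here for illustration.

```python
# Photon-energy bookkeeping for the reported wavelengths (E = hc/lambda).
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Return the photon energy in eV for a vacuum wavelength given in nm."""
    return H_C_EV_NM / wavelength_nm

nir_pump_nm = 980.0                       # Yb3+ absorption / NIR excitation laser
uc_emissions_nm = [527.0, 553.0, 665.0]   # Er3+ green bands and mid-point of the red band
perovskite_pl_nm = 780.0                  # perovskite photoluminescence peak

e_pump = photon_energy_ev(nir_pump_nm)
print(f"one 980 nm photon: {e_pump:.2f} eV; two photons: {2 * e_pump:.2f} eV")
for wl in uc_emissions_nm:
    print(f"UC emission at {wl:.0f} nm -> {photon_energy_ev(wl):.2f} eV")
print(f"perovskite PL at {perovskite_pl_nm:.0f} nm -> "
      f"{photon_energy_ev(perovskite_pl_nm):.2f} eV")
```

Each visible emission photon (about 1.9-2.4 eV) carries less energy than two 980 nm pump photons combined (about 2.5 eV), consistent with the two-photon upconversion pathway described above, and the 780 nm photoluminescence corresponds to an optical band gap of roughly 1.6 eV.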
To investigate the photovoltaic performance of the PSCs with and without core-shell UCNPs (with an average size of 25 nm), we manufactured several PSC devices with different ratios of UCNPs to TiO2 in the mesoporous layer. The fabricated PSC devices were named the pristine device, the device with 20% UCNPs, the device with 30% UCNPs, and the device with 50% UCNPs. To ensure reproducibility, we fabricated five devices under each set of conditions and calculated the average photovoltaic performance values. The photovoltaic measurements of the fabricated PSC devices were performed under AM 1.5 G, one-sun illumination. The photovoltaic performances of the fabricated PSCs with and without UCNPs are summarized in Table 1 and shown in Figure 4. The J-V results presented in Table 1 and illustrated in Figure 4a-c show intriguing trends in the performance metrics of the fabricated perovskite solar cells. Notably, the solar cell incorporating a 30% concentration of UCNPs exhibited the highest open-circuit voltage (Voc), FF, and PCE values among the devices tested. The enhancement of PCE was particularly remarkable, with a substantial relative increase of about 5%. Furthermore, we observed a discernible correlation between the increase in the UCNP concentration up to 30% and the increases in FF and open-circuit voltage (Voc).
The improvement in performance observed for the device with 30% UCNPs can be attributed to the direct conversion of NIR light into additional photocurrent. This phenomenon was facilitated by the presence of integrated UCNPs within the mesoporous layer, effectively transforming a considerable portion of the NIR light into visible light that could be absorbed by the perovskite layer. Moreover, the fill factor experienced a notable increase from 71.6% to 74.3% as the UCNP concentration increased (0-30%). This improvement cannot be solely attributed to the light-harvesting ability of the UCNPs; rather, the lithium dopant present in the UCNP crystals also contributed to enhancing electron transport within the TiO2 layer by reducing the density of deep electron traps. This reduction in deep traps enhanced the fill factor and open-circuit voltage. Furthermore, the enhanced performance of the perovskite solar cells can be attributed to the unique optical properties of the core-shell UCNPs. These nanoparticles exhibited superior scattering effects and specialized upconversion luminescence, which augmented the absorption of the perovskite. Additionally, they facilitated the production of larger perovskite grains with fewer imperfections, further enhancing device performance.
We found that the photovoltaic performance of the PSC device doped with 50% UCNPs dropped significantly, which could have been a consequence of excessive light back-scattering, which lowered absorption by reflecting a significant part of the incident light out of the solar cell. Furthermore, a higher UCNP loading reduced the conductivity of the electron transport layer, as demonstrated by the decrease in fill factor values in Table 1 and Figure 4. The results presented in Table 1 and Figure 4 are not the highest achieved in this study; however, they adequately establish the optimal concentration of UCNPs for use in CQD passivation, which is discussed in the next experiment.
Next, the champion PSCs doped with 30% UCNPs were passivated with carbon quantum dots at different spin coating speeds to improve their photovoltaic performance, as illustrated in Figure 5a. For this, 100 µL of CQDs were added onto the perovskite layer, and in the mesoporous layer the UCNP-to-TiO2 ratio was fixed at 30:70. The thickness of the CQD layers was controlled by varying the revolutions per minute of the spin coating procedure. The devices were named as follows: PSC device 30% UCNPs, PSC device 30% UCNPs/3000 rpm CQDs, PSC device 30% UCNPs/5000 rpm CQDs, and PSC device 30% UCNPs/6000 rpm CQDs. To ensure reproducibility, we fabricated five devices under each condition and calculated the average photovoltaic performance values reported. The CQD layer deposited using spin coating was influenced by various factors such as the solution concentration, viscosity, spin time, and ambient conditions (temperature and humidity). To improve reproducibility, we conducted these experiments at several specified speeds. The amount of non-electroactive passivation layer material on the perovskite films was kept minimal to avoid blocking charge carriers. Typically, the passivation layer thickness (5-10 nm) is not visible by SEM and is not easy to detect; it is therefore included within the error of the repeated experiments.
Table 2 and Figure 5b-d show the photovoltaic performances of the manufactured PSCs. The device named PSC device 30% UCNPs/3000 rpm CQDs showed the best overall performance and revealed the best photocurrent density (Jsc) and PCE, with an increase in the Jsc value from 20.44 to 22.25 mA/cm2 and a rise in PCE from 16.65% to 18.15%, an approximately 9% relative increase when compared to the reference device (PSC device 30% UCNPs). The photovoltaic performances of the fabricated perovskite solar cells (PSCs) integrated with UCNPs and CQDs were significantly improved. This enhancement was primarily due to the ability of the UCNPs to convert low-energy incident photons into high-energy photons (UV and visible light). These high-energy photons were then absorbed and converted to electrons by the active layers of the PSCs, generating more electrons per photon compared to PSCs without UCNPs, thereby boosting the overall photovoltaic performance [15,19]. Additionally, incorporating CQDs contributed to the stability and efficiency of the PSCs by passivating the perovskite grain boundaries. The CQDs were also expected to broaden the absorption spectrum of the PSCs by converting high-energy photons, which could potentially damage the PSC structure, into lower-energy photons that were more readily absorbed by the perovskite active layer. This resulted in the enhanced power conversion efficiency of the PSCs [24,25]. Figure 5c,d present the J-V characteristic curves of the PSC devices with different rotational speeds of CQD insertion. The PSC device 30% UCNPs/3000 rpm CQDs exhibited the best J-V characteristic curve, as it showed higher Jsc, FF, and maximum power values. This enhancement was caused by the increased number of photons absorbed by the perovskite layer. The open-circuit voltage (Voc) was found to be highest for the device passivated with CQDs at a spin coating speed of 5000 rpm (PSC device 30% UCNPs/5000 rpm CQDs). This improvement can be attributed to the enhanced surface passivation provided by the CQDs, resulting in fewer surface defects and reduced charge recombination. Additionally, the optimized optical properties and improved charge extraction efficiency achieved through CQD passivation contributed to the higher Voc. The stability of the perovskite layer also may have been enhanced at this passivation speed, further supporting the observed increase in Voc.
Furthermore, the fill factor (FF) of the PSC device with 30% UCNPs and CQDs spin-coated at 3000 rpm exhibited a peak value of 76%, surpassing that of the pristine device (Figure 5d). This outcome suggests that optimal doping of UCNPs within the mesoporous layer, coupled with the addition of 100 µL of CQDs at a spin coating speed of 3000 rpm, facilitated enhanced electron extraction from the perovskite film. Consequently, this improved the overall conductivity of the electron transport layer, leading to significant increases in both the power conversion efficiency (PCE) and fill factor (FF) values.
This study underscores the enhanced performance of perovskite solar cells (PSCs) through the integration of UCNPs and CQDs. Control devices without any doping served as a baseline, achieving a power conversion efficiency (PCE) of 15.5%. Incorporating UCNPs alone led to notable improvements, with the 30% UCNP device demonstrating the highest efficiency at 16.22%, which was attributed to improved light absorption and electron transport. Comparatively, devices with only CQDs, especially those processed at a 3000 rpm spin coating speed, also showed significant gains, achieving a PCE of 16.80%. This highlights the effectiveness of CQDs in enhancing photovoltaic performance by passivating perovskite grain boundaries and improving charge extraction.
When examining the combined usage of UCNPs and CQDs, the results were even more promising. The device with 30% UCNPs and CQDs added at 3000 rpm exhibited the highest performance, with a Jsc of 22.25 mA/cm2, an FF of 76.0%, a Voc of 1.075 V, and a PCE of 18.15%, representing substantial improvements over both the pristine device and those with individual doping. This combination outperformed the 30% UCNP device (PCE of 16.22%) and the 3000 rpm CQD device (PCE of 16.80%), demonstrating the synergistic effects of integrating both additives. The comparative analysis revealed that while the UCNPs enhanced the PSCs' performance by converting near-infrared light to visible light and improving electron transport [15], the CQDs contributed by passivating defects and expanding light absorption [24,25]. The optimal doping of 30% UCNPs combined with 3000 rpm CQDs maximized these benefits, leading to the highest overall efficiency. This study highlights the potential to significantly improve PSC performance through strategic material integration, surpassing the gains achievable using UCNPs or CQDs alone [15,24,25].
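The headline numbers above can be checked against the standard relation PCE = Jsc × Voc × FF / Pin. The sketch below uses the device parameters quoted in the text; the incident power density Pin = 100 mW/cm2 is the usual one-sun AM 1.5 G assumption and is not stated explicitly in this excerpt.

```python
# Consistency check of the reported champion-device parameters.
def pce_percent(jsc_ma_cm2: float, voc_v: float, ff: float, p_in_mw_cm2: float = 100.0) -> float:
    """PCE in % from Jsc (mA/cm^2), Voc (V) and fill factor (fraction)."""
    return jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2 * 100.0

def relative_gain_percent(new: float, old: float) -> float:
    """Relative improvement of `new` over `old`, in %."""
    return (new / old - 1.0) * 100.0

champion = pce_percent(jsc_ma_cm2=22.25, voc_v=1.075, ff=0.76)
print(f"PCE from Jsc*Voc*FF: {champion:.2f} %  (reported: 18.15 %)")

print(f"gain vs. 30% UCNP-only device (16.22 %): {relative_gain_percent(18.15, 16.22):.1f} %")
print(f"gain vs. 3000 rpm CQD-only device (16.80 %): {relative_gain_percent(18.15, 16.80):.1f} %")
print(f"gain vs. 16.65 % reference quoted earlier: {relative_gain_percent(18.15, 16.65):.1f} %")
```

The product of the quoted Jsc, Voc, and FF gives about 18.2%, in agreement with the reported 18.15% within rounding, and the relative gains reproduce the ~9% and 11.9% figures cited in the surrounding text.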
The reported efficiency improvement in the perovskite solar cells (PSCs), from 16.22% to 18.15%, is notable, representing an 11.9% relative increase. However, this should be benchmarked against the current highest laboratory efficiencies, which have reached around 25.7% [31] under standard testing conditions (1000 W/m2 irradiance, 25 °C, and the AM 1.5 G spectrum) for small-area cells. While these record efficiencies showcase the potential of PSCs, achieving similar performance in larger, commercially viable modules that maintain stability over time remains a significant challenge. Ongoing research is essential to bridge the efficiency gap and enhance the scalability and real-world applicability of PSC technology.
Finally, to assess the stability of both UCNP- and CQD-modified perovskite solar cells (PSCs) under conditions with unregulated humidity, a thorough investigation was conducted over several days, as shown in Figure 6. This study compared the stability of three types of fabricated devices: PSC devices containing 30% UCNPs and 3000 rpm CQDs, PSC devices with 30% UCNPs only, and pristine devices. Performance was monitored over time, revealing notable differences. Remarkably, devices integrated with both UCNPs and CQDs exhibited superior stability compared to their pristine counterparts. Specifically, the PCE of the PSC devices with 30% UCNPs and CQD passivation at 3000 rpm maintained 92.5% of its initial value, while the pristine devices saw a decrease to 66% efficiency. This enhanced stability is attributed to the UCNPs' ability to convert near-infrared light to visible light, improving electron transport, and the CQDs' role in passivating grain boundaries, reducing defect densities in the perovskite film [15,24,25]. Additionally, CQDs interacted with uncoordinated lead ions at grain boundaries, mitigating moisture-induced degradation.
Materials and Methods
Nanoparticles: Core Preparation
In a two-neck flask, a mixture of 1.0 mmol of LnCl3 (Ln = Y (80.0 wt.%), Yb (18.0 wt.%), and Er (2.0 wt.%)), 10.5 mL of 1-octadecene, and 10.5 mL of oleic acid was heated to 150 °C for 40 min at atmospheric pressure under argon flow until it became a clear yellow solution. The solution was cooled down to 50 °C. Then, 2.5 mmol of LiOH·H2O in 5.0 mL of methanol and 4.0 mmol of NH4F in 10.0 mL of methanol were mixed and gradually introduced. The mixture was then vigorously stirred for 40 min while the temperature was kept at 50 °C. To remove the methanol and residual water, we raised the temperature to 150 °C for 20 min. The generated LiYF4:Yb,Er UCNPs were then collected, rinsed three times with ethanol, and redispersed in 10 mL of chloroform after the solution had cooled to room temperature.
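For orientation, the nominal precursor ratios of this core synthesis can be tallied against the ideal 1:1:4 Li:Ln:F stoichiometry of LiYF4. The amounts below are taken from the procedure above; the comparison itself is our own bookkeeping, and the 80/18/2 split, which the text quotes as wt.%, is treated here as a simple fraction purely for illustration.

```python
# Nominal mole balance for the LiYF4:Yb,Er core synthesis described above.
ln_total_mmol = 1.0                               # total LnCl3 (Y + Yb + Er)
dopant_fractions = {"Y": 0.80, "Yb": 0.18, "Er": 0.02}  # quoted as wt.% in the text
lioh_mmol = 2.5                                   # LiOH.H2O
nh4f_mmol = 4.0                                   # NH4F

for ion, frac in dopant_fractions.items():
    print(f"{ion}: {frac * ln_total_mmol:.2f} mmol ({frac:.0%} of the lanthanide content)")

print(f"Li : Ln = {lioh_mmol / ln_total_mmol:.1f} : 1  (ideal 1 : 1 for LiYF4)")
print(f"F  : Ln = {nh4f_mmol / ln_total_mmol:.1f} : 1  (ideal 4 : 1 for LiYF4)")
```

Both the lithium and fluoride sources are supplied at or above the ideal LiYF4 ratio, which is common practice to drive the fluoride crystallization to completion.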
Nanoparticles: Core-Shell Preparation
In a two-neck flask, a mixture of 1.0 mmol of YCl3, 10.5 mL of 1-octadecene, and 10.5 mL of oleic acid was heated to 150 °C for 40 min at atmospheric pressure under argon flow until it became a clear yellow solution. The solution was cooled down to 50 °C. Then, 2.5 mmol of LiOH·H2O in 5.0 mL of methanol, 4.0 mmol of NH4F in 10.0 mL of methanol, and 10 mL of the upconversion nanoparticle core solution were gradually introduced. The mixture was then vigorously stirred for 40 min while the temperature was kept at 50 °C. To remove the methanol and residual water, we raised the temperature to 150 °C for 20 min. The generated LiYF4:Yb,Er core-shell (CS) UCNPs were then collected, rinsed three times with ethanol, and redispersed in 10 mL of chloroform after the solution had cooled to room temperature.
Preparation of Ligand-Free UCNPs
We prepared acidic ethanol with a pH of 1 by adding 2.5 µL of hydrochloric acid to 40 mL of ethanol. Then, 1 mL of the UCNP dispersion was added to the acidic ethanol and sonicated for 1 h to remove the oleate ligands. After that, the ligand-free UCNPs were collected by centrifugation and washed three times with ethanol. The oleate-free Ln-UCNPs were then redispersed in pure ethanol for further use.
Carbon Quantum Dot-Based Glucose Preparation
Carbon quantum dots (CQDs) were derived from glucose by dissolving 2 g of glucose in 15 mL of distilled water, followed by the addition of 6 mL of a 25% aqueous ammonia solution. This mixture was subjected to hydrothermal synthesis at 180 °C under a pressure of 3 MPa for 1 h, ensuring optimal reaction conditions. After synthesis, the resulting brown suspension underwent purification using dialysis bags with a molecular weight cutoff of 3000 kDa for 12 h, effectively removing impurities. Subsequently, the purified solution was filtered to eliminate any remaining large particles. Low-speed centrifugation at 6000 rpm for 10 min concentrated the CQDs in the solution. Finally, the concentrated solution was carefully stored in isopropanol for further experimentation or application.
Preparation of Perovskite Solar Cell
The substrates used to fabricate the PSCs had dimensions of 1.6 cm × 2.45 cm (FTO glass; fluorine-doped tin oxide was purchased from Sigma-Aldrich, St. Louis, MO, USA). Using zinc powder and 4 M hydrochloric acid (HCl) (Sigma-Aldrich), we etched the FTO layer 0.5 cm from the top side of the substrates to separate the cathode from the anode and create an open circuit. All cleaning was carried out using an ultrasonic bath. The glass substrates were cleaned by sonication in distilled water and Hellmanex soap (Ossila, Sheffield, UK) for 30 min, then distilled water only for 10 min, ethanol (Fisher, Waltham, MA, USA) for 15 min, and acetone for 10 min. Lastly, the substrates were dried with air to evaporate the acetone and placed in a UV-ozone cleaner for 20 min.
Preparation of Compact Layer by Spray Pyrolysis
In a vial, we prepared a compact-layer solution by adding 600 µL of titanium diisopropoxide bis(acetylacetonate) (Sigma-Aldrich), 400 µL of acetylacetone (Sigma-Aldrich), and 900 µL of ethanol (Fisher). All of the FTO substrates were prepared by placing them on a hot plate at 450 °C for 30 min while covering the anode area. The thin compact TiO2 layer was sprayed 3 times using spray pyrolysis. After completing the spray pyrolysis, the substrates were maintained at 450 °C for 30 min.
Deposition of Mesoporous TiO 2 and UCNP-Doped Mesoporous TiO 2 Using Spin Coating
A mesoporous solution was prepared by mixing 30 NR-D titanium dioxide (TiO2) paste (Greatcell, Queanbeyan, Australia) and ethanol at a ratio of 1:6 (v/v). Four concentrations of the previously prepared lithium-based UCNPs were used: 0%, 20%, 30%, and 50%. Adhesive tape was placed on only one side (the anode tip) to protect the anode area from the mesoporous TiO2 layer. The UCNP-doped mesoporous TiO2 was deposited by spin coating 50 µL of the different concentrations (program: 20 s, acceleration 2000 rpm/s, speed 4000 rpm). The substrates were then placed on the hot plate at 450 °C for 30 min.
Preparing the Perovskite and Spiro-OMeTAD Layers with Doped CQDs for Samples with CQDs
A perovskite solution with a composition of Cs0.05MA0.10FA0.85Pb(Br0.10I0.85)3 was prepared inside a nitrogen glove box along with spiro-OMeTAD. To prepare the perovskite solution, the following materials were dissolved: 18.89 mg of methylammonium bromide (MABr), 247.2 mg of formamidinium iodide (FAI), 722.4 mg of lead iodide (PbI2), 62.04 mg of lead bromide (PbBr2), and 21.54 mg of cesium iodide (CsI). Additionally, 960 µL of dimethyl sulfoxide (DMSO) and 2400 µL of dimethylformamide (DMF) were added. The solution was heated at 90 °C on a hot plate for 30 min. Subsequently, the solutions and substrates were transferred to a dry-air glove box with humidity lower than 2%. Using spin coating, 50 µL of the precursor solution was applied in a two-step method with a lower-rpm mode (acceleration: 200 rpm/s, velocity: 1000 rpm, time: 10 s) followed by a higher-rpm mode (acceleration: 2000 rpm/s, velocity: 6000 rpm, time: 30 s). At 18 s before the end of spinning, 200 µL of chlorobenzene was dripped onto the wet film to remove residual DMSO and DMF. The substrates were then heated to 100 °C for 45 min on a hot plate to create crystalline triple-cation perovskite layers [2,3]. After that, we deposited 100 µL of CQDs dispersed in isopropanol onto the middle of the perovskite films, which rotated at 3000, 5000, or 6000 rpm. The substrates were then placed on the hot plate at 100 °C for 5 min. Next, in the nitrogen glove box, we mixed 10.72 mg of spiro-OMeTAD, 1200 µL of chlorobenzene, 21.36 µL of a lithium salt solution, and 34.52 µL of 4-tert-butylpyridine to prepare the spiro-OMeTAD solution. After that, the perovskite layer was covered with a hole transport layer (HTL) by spin coating 50 µL of the spiro-OMeTAD solution for 20 s at 4000 rpm. Finally, a gold layer (80 nm) was deposited using thermal evaporation to form the metal contact electrodes.
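As a sanity check on the precursor recipe above, the weighed-in masses can be converted to moles and compared with the nominal Cs0.05MA0.10FA0.85 A-site composition. The masses are the ones quoted in the text; the molar masses are standard values, and rounding them slightly differently would not change the outcome.

```python
# Recover the A-site stoichiometry from the weighed precursor masses.
masses_mg = {"MABr": 18.89, "FAI": 247.2, "PbI2": 722.4, "PbBr2": 62.04, "CsI": 21.54}
molar_mass_g_mol = {"MABr": 111.97, "FAI": 171.97, "PbI2": 461.01, "PbBr2": 367.01, "CsI": 259.81}

mmol = {name: masses_mg[name] / molar_mass_g_mol[name] for name in masses_mg}

a_site = {"Cs": mmol["CsI"], "MA": mmol["MABr"], "FA": mmol["FAI"]}
a_total = sum(a_site.values())
for cation, n in a_site.items():
    print(f"{cation}: {n:.3f} mmol -> A-site fraction {n / a_total:.2f}")

pb_total = mmol["PbI2"] + mmol["PbBr2"]
print(f"Pb: {pb_total:.3f} mmol; A-site/Pb ratio = {a_total / pb_total:.2f}")
```

The fractions recovered from the masses come out at roughly Cs 0.05, MA 0.10, and FA 0.85, with an A-site/Pb ratio close to one, consistent with the nominal triple-cation composition stated above.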
Conclusions
In conclusion, this study focused on synthesizing upconversion nanoparticles (UCNPs) doped with erbium ions to facilitate the conversion of ultraviolet (UV) and near-infrared (NIR) light into usable energy. These UCNPs were successfully integrated into the mesoporous layers of perovskite solar cells (PSCs) at various concentrations. The highest photovoltaic performance was achieved by PSCs incorporating 30% UCNPs in their mesoporous layers, yielding a power conversion efficiency (PCE) of 16.22% and a fill factor (FF) of 74%. Subsequently, passivation of the champion PSCs doped with 30% UCNPs with carbon quantum dots (CQDs) at different spin coating speeds further improved their photovoltaic performance. Specifically, the PSC device passivated with CQDs at 3000 rpm exhibited a significantly enhanced PCE of 18.15%, a photocurrent increased from 20.44 mA/cm2 to 22.25 mA/cm2, and a superior fill factor (FF) of 76% compared to the pristine PSCs. Furthermore, the PSCs integrated with UCNPs and CQDs showed better stability than the pristine devices. These findings demonstrate that UCNPs effectively convert near-infrared light to visible light, enhancing electron transport, while CQDs play a crucial role in passivating grain boundaries, thereby reducing defect densities in the perovskite film. Moreover, CQDs interact with uncoordinated lead ions at grain boundaries, effectively mitigating moisture-induced degradation. These results will advance the development of efficient perovskite solar cells (PSCs) for various renewable energy applications.
Figure 1. Characterizations of the synthesized UCNPs. (a) TEM analysis displays the core UCNPs, showing small, well-dispersed nanoparticles with an average size of 18 nm. (b) DLS measurement provides the size distribution of the core UCNPs, confirming the uniformity seen using TEM. The inset presents a schematic illustration of the UCNP core structure. (c) TEM of core-shell UCNPs reveals well-dispersed core-shell nanoparticles averaging 25 nm in size. (d) DLS confirmation validates the 25 nm average size of the core-shell UCNPs, confirming a consistent core-shell formation. The inset presents a schematic illustration of the UCNP core-shell structure.
Figure 2. (a) Detailed schematic of the synthesized UCNPs integrated into perovskite solar cells (PSCs). The synthesized LiYF4:Yb,Er UCNPs absorb near-infrared (NIR) photons from sunlight and subsequently convert them into visible light. This conversion is crucial for enhancing the efficiency of the light-harvesting layer in the PSCs, enabling them to utilize a broader spectrum of solar radiation. (b) The overlap of the PSC absorption bands with UCNP emissions demonstrates how the absorption bands of the PSCs align with the emission peaks of the erbium-doped UCNPs (this UCNP emission was recorded from the core-only UCNPs (18 nm) and plotted for illustration purposes, while the optical emission from the core-shell UCNPs used in PSC fabrication is shown later in this study). Specifically, the green emission peak at 550 nm and the red emission peak from 650 to around 680 nm from the UCNPs correspond closely with the spectral sensitivity regions of the perovskite layers in the solar cells. This alignment ensures that the light emitted by the UCNPs is effectively absorbed by the PSCs, optimizing the overall conversion of solar energy to electricity.
Figure 3. (a) Schematic illustration of a home-made confocal microscope designed and equipped with a green 532 nm laser and a near-infrared 980 nm laser for photoluminescence (PL) measurement of the PSC layers integrated with UCNPs in a core-shell structure. The designed optical microscope is equipped with an imaging system, lasers, a photon counter, and a custom-made spectrometer. (b) The UCNP emission spectrum measured directly from the core-shell UCNP layer (the particles that were integrated into the PSC fabrication), showing green emission peaks centered at 527 nm and 550 nm as well as a weak red emission peak at 650-680 nm. (c) The optical emission from the perovskite material under green (532 nm) excitation.
Figure 4. Performance parameters of the fabricated PSCs integrated with UCNPs in a core-shell structure. (a) presents the current-voltage (J-V) characteristic curves of fabricated PSCs measured under AM 1.5 G solar simulation, comparing cells with varying amounts of UCNPs integrated into their mesoporous layers to those without UCNPs. The curves illustrate the impact of UCNPs on the electrical performance of the solar cells. (b,c) present key performance parameters as functions of the UCNP content. They display how the open-circuit voltage (Voc), short-circuit current density (Jsc), fill factor (FF), and power conversion efficiency (PCE) of the PSCs varied with different concentrations of UCNPs within their mesoporous layers.
Figure 5. Performance parameters of the fabricated PSCs integrated with 30% UCNPs and CQDs at different spin coating speeds. (a) presents an illustration of adding CQDs on top of the perovskite layer of the fabricated PSCs at different spin coating speeds. (b) presents the current-voltage (J-V) characteristic curves of fabricated PSCs measured under AM 1.5 G solar simulation, comparing cells with varying amounts of UCNPs integrated into their mesoporous layers to those without UCNPs. The curves illustrate the impact of UCNPs on the electrical performance of the solar cells. (c,d) present key performance parameters as functions of the UCNP content. They display how the open-circuit voltage (Voc), short-circuit current density (Jsc), fill factor (FF), and power conversion efficiency (PCE) of the PSCs varied with different concentrations of UCNPs within their mesoporous layers.
Figure 6. Power conversion efficiency (PCE) values of perovskite solar cell (PSC) devices incorporating upconversion nanoparticles (UCNPs) and carbon quantum dots (CQDs), compared to those without these additives. Measurements were taken over several days in ambient air, with no control over humidity levels. These data provide insight into the performance stability and degradation patterns of the PSC devices under real-world environmental conditions, highlighting the impacts of UCNPs and CQDs on device efficiency over time.
Table 1. Photovoltaic performances of the fabricated solar cells with different concentrations of UCNPs within the mesoporous layer.
Table 2. Summary of the performance of the 30% UCNP-doped mesoporous TiO2 PSCs (reference device and CQD deposition at 3000-6000 rpm) measured under AM 1.5 G solar simulation. | 11,002.6 | 2024-05-29T00:00:00.000 | [
"Materials Science",
"Physics",
"Engineering"
] |
Scattering Amplitudes -- Wilson Loops Duality for the First Non-planar Correction
We study the first non-planar correction to gluon scattering amplitudes in ${\cal N}=4$ SYM theory. The correction takes the form of a double trace partial amplitude and is suppressed by one power of $1/N$ with respect to the leading single trace contribution. We extend the duality between planar scattering amplitudes and null polygonal Wilson loops to the double trace amplitude. The new duality relates the amplitude to the correlation function of two infinite null polygonal Wilson lines that are subject to a quantum periodicity constraint. We test the duality perturbatively at one-loop order and demonstrate it for the dual string in AdS. The duality allows us to extend the notion of the loop integrand beyond the planar limit and to determine it using recursion relations. It also allows one to apply the integrability-based pentagon operator product expansion approach to the first non-planar order.
Introduction
The gluon S-matrix of an interacting gauge theory in four spacetime dimensions is an extremely rich and interesting object. A major simplification emerges in the large N 't Hooft limit, in which the amplitude is reorganized in terms of 2-dimensional topologies of the 't Hooft diagrams. The leading order of this expansion is known as the planar amplitude, for which the diagrams have the topology of a disk. In the past ten years there has been major progress in the computation of planar scattering amplitudes in general and in N = 4 SYM theory in particular, see [1] for a recent review.
This progress has been driven, to a large extent, by a surprising duality between planar scattering amplitudes and polygonal Wilson loops in N = 4 SYM theory [2][3][4][5]. This duality is directly tied to the Yangian symmetry of the amplitude [6,7] and has been useful in two major ways. First, the duality allows one to relate the momentum loop integrands of different Feynman diagrams in a physically meaningful way. The corresponding loop integrand can be recursively constructed at any loop order [8]. The second way is the application of nonperturbative integrability methods to calculating amplitudes. It turns out that integrability is most useful for computing the planar amplitudes using their dual representation in terms of polygonal Wilson loops [9,10]. In this approach the Wilson loop expectation value takes the form of a sum over two-dimensional excitations that propagate on top of the Gubser-Klebanov-Polyakov (GKP) flux tube [11,12]. As the dynamics of GKP excitations are well understood at finite coupling [13], the duality between the amplitudes and the Wilson loops opens the door for the finite coupling computation of the amplitude.
Evaluation of the full amplitude also requires being able to calculate the higher order corrections in the 1/N expansion, which, in turn, requires new ideas and techniques. One such idea is presented in this paper. It allows one to apply the techniques outlined above to the non-planar corrections.
Let us outline the main idea. One way of obtaining higher genus Riemann surfaces is to start with a disk and glue different segments of its boundary together. Going in the opposite direction, one may start with a Riemann surface and cut it open along all its cycles back into a disk. This is exactly the approach we will take, but instead of working in position space, we will be working in dual momentum space. 1 The crucial point that allows us to do it is the fact that the spacetime momentum that flows around any cycle of the 't Hooft diagram can be defined in a physically meaningful way. Fixing these cycle loop momenta has the effect of cutting the Riemann surface open. The integration over these momenta is related to the integration over the moduli space of a dual two-dimensional surface. The resulting sum of the disk diagrams has a dual description in terms of a Wilson lines correlator with certain periodicity constraints that correspond to gluing at the disk boundary. Once the amplitude has been mapped onto a disk at the expense of stripping away the cycles' loop integrations, all the planar techniques can be applied to it. This will be done in detail for the leading non-planar correction, for which the 't Hooft diagrams have a cylindrical topology. The generalization to higher orders in the 1/N expansion will be briefly described in the discussion section.
The paper is organized as follows. In section 2 the cylindrical duality is stated and the dashed lines indicate its various cylinder cuts, which correspond to the diagram with fixed momenta flowing around the cylinder, l. The red dashed line wraps around the cylinder once more than the blue one, which implies that the cylinder cut momenta, l, is only defined modulo a shift by the momentum flowing through the cylinder, l l + q.
conventions are set. Section 3 is dedicated to demonstrating how the duality comes about in a simple toy model -the double scaling limit of the γ-deformed N = 4 SYM theory. Next, in section 4 a T-duality transformation is performed on a string in AdS with the topology of a cylinder and the holographic dual is found. In section 5 an explicit check of the duality in N = 4 SYM is carried out at leading order in perturbation theory. In section 6 the BCFW recursion relation for the cylindrically cut amplitude is derived at the Born level. In section 7 the planar loop integrand is generalized for the case of the cylindrically cut amplitude and in section 8 a recursion relation is derived for it. In section 9 the role played by superconformal symmetry is clarified. Finally, in section 10 a discussion of future applications and extensions of the duality is presented.
The cylindrical duality
Here, we put forward a precise conjecture for a duality between the double trace amplitude and a correlation function of two infinite null polygonal Wilson lines subject to a quantum periodicity constraint. Let us first outline the reason why these two observables are related to each other. The string dual of the double trace amplitude has the topology of a cylinder. If one considers the universal cover of the cylinder, which is a strip, then the double trace amplitude appears as a single trace one with infinitely many external particles subject to a periodicity constraint. The standard duality map between single trace amplitudes and null polygonal Wilson loops is then applied to this object.
There are a few important subtleties to consider. The first one has to do with the relative ordering of the traces. We consider the double trace partial amplitude with n ordered particles in one trace and m in the other, denoted A n,m [16]. In the double line notation, the leading color Feynman diagrams that contribute to this partial amplitude have the topology of a cylinder. The two sets of ordered external momenta from the color traces, k 1 , k 2 , . . . , k n and k n+1 , k n+2 , . . . , k n+m , are inserted on the two boundaries of the cylinder and their orderings are correlated through the cylinder. The coefficient of Tr(T 1 T 2 . . . T n ) Tr(T n+1 T n+2 . . . T n+m ) in the color decomposition of the full amplitude is given by the sum of two different partial amplitudes that correspond to the two different relative orderings. The duality can be demonstrated independently for either relative ordering and therefore only one will be considered throughout this paper, see figure 2.1.
For example, the coefficient of Tr(T 1 T 2 T 3 T 4 ) Tr(T 5 T 6 T 7 ) in the color decomposition of the full amplitude A 4,3 is the sum of two partial amplitudes. In the first, A 4,3 (1, 2, 3, 4; 5, 6, 7), the color ordering 1 → 2 → 3 → 4 → 1 in the first trace is correlated through the cylinder with the color ordering 5 → 6 → 7 → 5 in the second trace. In the second partial amplitude, A 4,3 (1, 2, 3, 4; 7, 6, 5), it is correlated with the reversed color ordering 7 → 6 → 5 → 7. If one of the traces has only two particles in it, there is no way to define the relative ordering and, therefore, there is only one partial amplitude associated with this color structure.
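In formulas, the example above reads schematically as follows (the overall coupling and color normalization factors are left implicit here, since they are not specified in this excerpt):

$$
\mathcal{A}_{7}\Big|_{\operatorname{Tr}(T^{1}T^{2}T^{3}T^{4})\operatorname{Tr}(T^{5}T^{6}T^{7})}
\;\propto\;
A_{4,3}(1,2,3,4;5,6,7)\;+\;A_{4,3}(1,2,3,4;7,6,5)\,.
$$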
The second subtlety is that we do not establish the duality for the full double trace partial amplitude, but for its cylinder cut, which is defined as follows. At L loops, any Feynman diagram that contributes to the amplitude has L internal momenta, {l 1 , . . . , l L }. We consider a curve on the diagram, γ, that crosses a certain number of internal propagators, {1/P 2 γ(j) }, see the blue dashed line in figure 2.1. It starts between k 1 and k n in one trace and ends between k n+1 and k n+m in the other. 2 The unintegrated Feynman diagram G(l 1 , . . . , l L ) is then multiplied by the δ-function

δ^4 ( l − Σ_j P γ(j) ) ,    (1)

where the sign of P γ(j) is defined such that P γ(j) is the momentum that crosses the cut in the direction that coincides with the external particle ordering (1, 2, . . . , n), see figure 2.1. Here, l is interpreted as the momentum flow around the cylinder. The cylindrically cut amplitude, A^γ n,m (l), is obtained by stripping off the integral over l and a factor of λ/N, and summing over all Feynman diagrams. When integrated over l, it gives back the full amplitude,

A n,m = (λ/N) ∫ d^4 l A^γ n,m (l) .    (2)

The superscript γ of A indicates that it depends on the choice of the cylinder cut for any diagram. Because of this dependence A^γ n,m (l) is not yet a physical object. Due to momentum conservation at the interaction vertices, any continuous deformation of the cut of any diagram does not change the δ-function in (1) and hence does not affect A^γ n,m (l). For example, the green and blue curves in figure 2.1 result in the same A^γ n,m (l). However, the red curve in figure 2.1 winds around the cylinder and thus cannot be continuously deformed into γ, resulting in a shift of l by the total momentum flow through the cylinder, q. Similarly, a curve that winds around the cylinder a times more than γ does would correspond to a shift of the momentum flow around the cylinder, l → l + a q. In other words, the momentum flow around the cylinder l is only well defined modulo a shift by the total momentum flow through the cylinder, l ≃ l + q. This ambiguity in the definition of l is washed out by the integration in (2). To construct a cylindrically cut amplitude A n,m (l) that does not depend on γ, one has to sum over all possible shifts of l by an integer number of q's. This is equivalent to summing over all winding numbers of the curve γ around the cylinder,

A n,m (l) = Σ_{a ∈ Z} A^γ n,m (l + a q) ,    (5)

where the superscript γ was dropped because the sum is independent of the curve. One may view this sum as part of the l integration in (2). The sum converges because the amplitude is UV finite, the summand decaying at least as 1/a^4 at large a. Unlike A^γ n,m (l), the cylindrically cut amplitude A n,m (l) is unambiguously defined and hence is a physical quantity. Moreover, it consists only of planar diagrams. If one of the traces has only two particles in it, say n = 2 and m > 2, then there is no relative ordering of the two traces and, therefore, there is only one partial amplitude instead of two. Still, the definition of A remains the same and is unambiguous. In the special case where n = m = 2 there is no ordering at all and, therefore, no orientation on the cylinder. In this case every Feynman diagram is associated with the average over two cylinder cuts. These cylinder cuts are defined as above and are related to each other by turning the cylinder inside out. One way to derive this prescription is to start with an amplitude with more than two particles in one of the traces and take a soft limit.
An example of the n = m = 2 amplitude will be studied explicitly in section 5. Appendix A further expands on this point.
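As a purely illustrative picture of the image sum in (5), the following Python sketch sums a toy rational function (a stand-in for a single unsummed cylinder-cut diagram; the kinematics and the function itself are assumptions made only for this illustration) over integer windings, showing both the rapid convergence of the sum and its invariance under l → l + q:

import numpy as np

def cut_diagram(l):
    # Toy stand-in for one representative of the unsummed cut amplitude A^gamma(l):
    # a product of two regulated, Euclidean "propagators" in the cut momentum,
    # so that the summand below decays like 1/a^4 at large winding number a.
    p1 = np.array([0.3, -0.1, 0.7, 0.2])
    p2 = np.array([-0.5, 0.4, 0.1, -0.3])
    prop = lambda k, m2: 1.0 / (np.dot(k, k) + m2)
    return prop(l + p1, 1.0) * prop(l + p2, 2.0)

def cut_amplitude(l, q, a_max):
    # Truncated version of the image sum (5): A(l) ~ sum_a A^gamma(l + a q).
    return sum(cut_diagram(l + a * q) for a in range(-a_max, a_max + 1))

l = np.array([0.2, 0.1, -0.4, 0.3])   # momentum flowing around the cylinder (illustrative)
q = np.array([1.0, 0.0, 0.0, 0.0])    # total momentum through the cylinder (illustrative)
for a_max in (5, 20, 100, 500):
    print(a_max, cut_amplitude(l, q, a_max), cut_amplitude(l + q, q, a_max))
# The two columns converge to the same value, reflecting the periodicity l ~ l + q
# of the fully summed cylindrically cut amplitude.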
[Figure: the periodic Wilson lines configuration that is dual to the cylindrically cut double trace amplitude. Each line consists of the ordered gluon momenta in one of the two traces. Because the total momentum in each trace is non-zero, the dual line is not closed. Instead, it is repeated periodically and forms the boundary of the universal cover of the cylinder. The separation between the two Wilson lines is equal to the momentum that flows around the cylinder, l.]
The cylindrically cut amplitude A n,m (l) is sensitive to integration by parts in momentum space. That is, the cylinder cut of a given diagram is not the same before and after momentum integration by parts, as integration by parts can shift the momentum flow around the cylinder by a total derivative. According to the definition of the cylinder cut, one has to perform the cutting (1) and the summation (5) prior to any integration by parts at the level of the Feynman diagram. 3 To specify the duality with the Wilson lines object, it is convenient to further strip off the momentum and supersymmetry δ-functions and a Parke-Taylor-like factor, (6), which defines M n,m (l). The conjectured duality then reads

M n,m (l) = ∫ d^8 θ W n,m (l, θ) ,    (7)
where W n,m (l, θ) and the eight Grassmann variables θ A α are defined below. By plugging (7) into (2) one can go back to the full amplitude, for which the duality takes the form (8). We will now define the cylinder Wilson lines correlator W n,m (l, θ) entering the duality (7). The Wilson lines consist of two infinite sets of null edges, (. . . , k 1 , k 2 , . . . , k n , k 1 , . . . ) and (. . . , k n+1 , k n+2 , . . . , k n+m , k n+1 , . . . ). Cusps between k i mod n and k i+1 mod n are denoted by x i and cusps between k n+(j mod m) and k n+(j+1 mod m) by ẋ j , so that x i − x i−1 = k i and ẋ j − ẋ j−1 = k n+j . (9) A convenient notation for cusps shifted by an integer number of blocks, such as ẋ j∓m , is introduced in (10). In (8) the momentum flow around the double trace cylinder l is integrated over. This momentum is dual to the separation between the two lines, and is only defined modulo q, as in (3). It is convenient to represent these periodic coordinates using dual momentum twistors. Under the periodicity constraint, the first two components of the twistors Z are chosen to be periodic, λ i+n = λ i and λ̇ j+m = λ̇ j ; that is, the helicity weight of Z i (Ż j ) is equated with that of its periodic image Z i+n (Ż j+m ). Note that the geometry of the Wilson lines is invariant under a shift of l by q. Hence, it makes perfect sense to relate it with A(l), as defined by the sum in (5).
We denote the supersymmetric counterparts of the bosonic coordinates of the cusps, x i and ẋ j , by θ i and θ̇ j respectively. These supercoordinates are related to the particles' helicities on each of the infinite Wilson lines in the standard way, θ i − θ i−1 = λ i η i and θ̇ j − θ̇ j−1 = λ̇ j η̇ j . It follows from supersymmetry that the total R-charge entering one side of the cylinder is equal to the one exiting from the other; we denote this supercharge by Q. A notation similar to (10) will be used for the supercoordinates. Similarly to the cut momentum l in (11), the total supersymmetry charge that flows around the cylinder (or, equivalently, the relative separation of the two Wilson lines in superspace) is denoted by θ. Prior to integration, this separation is only defined modulo a shift, (l, θ) ≃ (l + q, θ + Q).
We did not find it useful to strip off the θ integration as we did for the bosonic variable l in (5). Instead, (7) contains the Grassmann integration and there is no need for summing over its Q-shifts. The superperiodicity of the Wilson lines geometry, (10) and (14), can be made manifest using supertwistors. For this periodic external data the δ 8 (Q) for the infinite loop reduces to the one of a single periodic block of the Wilson lines. Similarly, the infinite cover of the Parke-Taylor-like factor in (6) reproduces the Parke-Taylor factor of the corresponding infinite single trace Wilson loop.
The final subtlety is that the periodicity constraint has to be imposed not only on the external data but also at the quantum level -on all the planar diagrams. It is important to note that this constraint is only defined for the leading color diagrams. One particular way the periodically constrained Wilson lines correlator can be defined is to start with N = 4 SYM theory compactified on a circle of radius q/(2π) and consider two closed null polygonal Wilson loops that wrap around the circle with edges (k 1 , . . . , k n ) and (k n+1 , . . . , k n+m ). Then all diagrams that contribute to the expectation value of the correlator between these Wilson loops in the leading order of the 1/N expansion are considered. Every individual propagator in these diagrams is periodic and there is no correlation between the color contractions and a spacetime shift around the circle. Next, these two are correlated by replacing the periodic propagators of the compactified theory with those of the non-compact flat spacetime. After this replacement, the diagrams in position space no longer close. Instead, as one follows the propagators around the cylinder, one finds a mismatch by one period, q. Hence, the new diagrams with non-compact propagators cannot live on the cylinder because they do not respect the cylinder periodicity. Instead, they live on its universal cover. It is important to point out that every propagator in each of these diagrams has infinitely many images but still contributes only once. The result is, by construction, gauge invariant -every gauge transformation g(x) has infinitely many periodic images, g [a] (x) ≡ g(x − a q), under which the images of the propagators transform. Finally, the interaction points are integrated on the full non-compact spacetime. This last step requires regularization of the cusp operators.
An alternative way the periodically constrained Wilson lines correlator can be defined is to start with all the planar Feynman diagrams that contribute to the correlation function between the two infinite Wilson lines in position space, and then consider only the diagrams that, prior to integration, are individually invariant under a simultaneous relabeling of all external cusps by one period, x i → x i+n and ẋ j → ẋ j+m . Any propagator in such a diagram, G(y 1 − y 2 ), has infinitely many images under this relabeling, but should be counted only once. Finally, the interaction points are integrated over. Several examples are given in the next section.
The main differences between the duality (8) and the standard duality between single trace amplitudes and null polygonal Wilson loops are the presence of the integration over θ and the need to impose a periodicity constraint on both the external data and the internal planar diagrams. Because of this periodicity constraint, the Wilson lines correlator will be referred to as a cylinder correlator. The subscript cylinder is added to the definition of the expectation value, (18), to indicate the imposition of the periodicity constraint, where the edge and vertex operators are those defined in [5]. It is convenient to normalize the Wilson lines without a 1/N factor in front of the trace. The periodicity constraint may look unnatural from the point of view of Feynman perturbation theory. However, it has very useful implications. First, it enables the computation of the double trace amplitude at finite coupling using the integrability-based POPE approach [10]. In this framework, it simply becomes the very natural periodicity of a string that has a cylindrical topology, see section 10. Second, it allows one to generalize the notion of the loop integrand to the double trace amplitude and to determine it using recursion relations, see sections 7 and 8.
In order to illustrate how perturbation theory works under this constraint, a toy model will be considered in the next section. We will perform explicit perturbative computations of a double trace amplitude and its dual cylinder Wilson lines correlator in the double scaling limit of the γ-deformed N = 4 SYM theory [17].
A toy model example
Before studying the duality in detail for N = 4 SYM theory, it is helpful to first consider a simple limit of it. In this section, we will consider the duality in the double scaling limit of the γ-deformation of the theory, henceforth referred to as the fishnet model [17]. After taking this limit, of all the N = 4 fields one is left with a pair of complex scalars, φ 1 and φ 2 . Their dynamics are dictated by the fishnet Lagrangian of [17]. In the planar limit, N is taken to infinity while keeping the 't Hooft coupling λ = g 2 N fixed, see [17] for details. 4 Being a deformation of N = 4 SYM theory that preserves the Yangian symmetry in the planar limit, this theory also exhibits dual conformal invariance. Hence, planar scattering amplitudes in this theory are expected to have a dual description in terms of Wilson loop like objects, as will now be demonstrated. 5 The Wilson loop dual of the planar N = 4 amplitude contains insertions that depend on the type of particles that are being scattered. In the double scaling limit of [17] the gauge field decouples and one is left with only the scalar insertions along the polygonal loop. However, the corresponding object will still be referred to as a "Wilson loop". This duality can be illustrated by considering the planar amplitude of eight ordered scalars. There is only one planar diagram that contributes to this process, see figure 3.1; it is given by the integral in (21). This amplitude can be obtained from an N 2 MHV amplitude in N = 4 SYM by taking the γ-deformation and the double scaling limit of [17]. By taking the same limit of the dual polygonal Wilson loop, one ends up with its representation in dual momentum space. It is a null octagon with its cusps at x i=1,...,8 , where x i − x i−1 = k i . The octagon has four scalar insertions, see figure 3.1, where we have chosen to dress each scalar insertion by the two-bracket ⟨a b⟩, with k^αα̇ a = λ^α a λ̃^α̇ a for k 2 a = 0, and by c 2 . This choice is made to match the convention in N = 4 SYM. As in the amplitude case, there is only one planar Feynman diagram that contributes to the expectation value of this configuration. It has one interaction vertex at the point y, which is related to the internal momentum in (21) as y − x 1 = l. This leads to the dual representation of the amplitude, (24).
[Figure 3.1: the leading contribution to the eight-point amplitude.]
The Wilson loop (24) is invariant under dual conformal transformations that act as regular conformal transformations on the dual x space. 6 The next step is to generalize this duality beyond the planar limit by considering 1/N corrections to the scattering amplitudes. These are not the same as non-planar corrections to the closed polygonal Wilson loop duals, such as (22). To demonstrate the duality in the simplest setting, we consider the four-point amplitude of two φ 1 's and two φ † 1 's. This amplitude does not receive any planar contributions. The first non-trivial contribution comes from the double trace color contraction, at order 1/N in the 't Hooft expansion. The corresponding integral can be computed analytically [19].
[Figure 3.2: the two one-loop Feynman diagrams that contribute to the four-point double trace amplitude.]
The partial amplitude A 2,2 receives its first perturbative contribution at one-loop order.
There are two Feynman diagrams that have the topology of a cylinder, see figure 3.2. 7 We try to follow the same steps as in the case of the planar eight-point amplitude and the octagon Wilson loop presented above. The Feynman diagrams are drawn on a cylinder with the φ 1 's on the left boundary and the φ † 1 's on the right one, see figure 3.2. For the special case considered here, where there are only two particles on each side, there is no ordering of the traces. Therefore, the traces do not induce an orientation on the cylinder. Instead, the orientation is determined by the φ 2 charge flow, which is chosen to go counterclockwise, as viewed from the left boundary, see figure 3.2. This allows one to distinguish between the interior and the exterior of the cylinder. The ability to do so is related to the fact that the double scaled theory is not CPT symmetric and its corresponding 't Hooft string is not orientable. 8 The two cusps, x 1 and x 2 , are associated with faces of the diagram and are defined such that x 2 − x 1 = k 2 . As opposed to the planar case, these coordinates satisfy x 1 − x 2 = k 1 − q instead of x 1 − x 2 = k 1 , where q = k 1 + k 2 is the total momentum in the first trace. Since q ≠ 0, the dual Wilson line is not closed. Going around the cylinder takes x i to its image shifted by q, (25), see figure 3.3. The separation between the two infinite lines is only defined modulo a shift by q, l = ẋ 2 − x 2 mod q, as in (11). It is equal to the loop momentum flowing around the cylinder.
The cylindrically cut amplitude A 2,2 (l), defined in (2), is constructed by cutting the cylinder across the blue dashed line in figure 3.2. Despite the fact that the cut on the right looks different from the one on the left, both of them start at x 2 and end at ẋ 2 . The ambiguity in the definition of the cut is eliminated by summing over the shifts of l by q, according to equation (5). This results in the expression (26) for the cylindrically cut double trace amplitude.
[Figure 3.3: the two tree-level contractions of the periodic Wilson lines correlator, with the scalar insertions contracted along the edges. Each propagator between x i and ẋ j has infinitely many identical images, but is counted only once. Apart from these two diagrams, there are two infinite sets of diagrams that are related to these by shifting one of the lines by an integer number of periods.]
The dual cylindrical Wilson lines correlator W 2,2 (l) has two scalar insertions on each of the lines that are repeated periodically, see figure 3.3; it is given by (27). Every propagator in any planar Feynman diagram that contributes to the correlator (27) has infinitely many images. The subscript cylinder of the expectation value indicates that such a propagator is counted only once, with the rest of them being the periodic images of the same propagator.
At tree level this expectation value is given by (28), where the two terms in the sum correspond to the two diagrams in figure 3.3, and the sum accounts for all diagrams that are related to these by a periodic shift of one of the two lines.
One can see that the two expressions agree at tree level, (29), with the first (second) term in the sum in (28) corresponding to the first (second) term in the sum in (26). This matching is specific to the fishnet model. In general, a single Feynman diagram cannot be isolated in a physically meaningful way and, hence, neither can its cylinder cut. The duality (29) between the cylindrically cut double trace amplitude and the cylinder Wilson lines correlator has thus been confirmed at leading order in perturbation theory. It is not hard to show that relation (29) holds to all orders in perturbation theory. At any non-vanishing loop order, there are only two diagrams that contribute to the four-point double trace amplitude. They are obtained from the two diagrams in figure 3.2 by wrapping more φ 2 loops around the cylinder. Each internal φ 2 line that wraps around the cylinder increases the loop order by two. Hence, the next correction appears at three-loop order. This case will now be considered in detail to illustrate how the duality extends to higher loop orders. The corresponding two three-loop diagrams are given in figure 3.4. The cylindrical cut of this amplitude reduces the loop order by one. Specifically, it is a two-loop object that is obtained from the two diagrams in figure 3.4 by cutting them open along the blue dashed lines. Note that the cut now goes through two propagators. Hence, only the sum of the momenta of these two propagators is fixed, while the relative momentum is being integrated over.
On the Wilson lines correlator side of the duality there are two new bulk integration points, y 1 and y 2 , along with their periodic images. They are connected by φ 2 propagators that form an additional line that stretches vertically, parallel to the Wilson lines (around the dual cylinder), see figure 3.5. The integrations over y 1 and y 2 correspond to the loop integrations of the amplitude diagrams in figure 3.4. The resulting integrals precisely match the two-loop correction to the cylindrically cut amplitude. There are only two diagrams at any given non-vanishing loop order. On the amplitude side, more φ 2 loops that wrap around the cylinder are added. On the Wilson lines correlator side, one finds more vertical φ 2 lines parallel to the Wilson lines. These expressions agree at the level of the integrand. The same also turns out to be true for N = 4 SYM theory. This will be discussed in detail in section 7.
[Figure 3.5: the diagrams that contribute to W 2,2 (l) at two-loop order. Blue (red) dashed lines correspond to φ 1 (φ 2 ) propagators. These two diagrams come with an infinite set of diagrams that are related to them by a shift of one of the two Wilson lines by an integer number of periods.]
Before performing a similar perturbative check for N = 4 SYM theory in section 5, we discuss in the next section how T-duality works on the holographic string side when the string has the topology of a cylinder.
T-duality of the cylindrical string amplitude
The duality between planar scattering amplitudes and closed polygonal Wilson loops was first observed at the strong coupling limit of N = 4 SYM [2]. There, it emerges by performing a T-duality on the string worldsheet in AdS spacetime. T-duality in general is a change of variables that relates strings propagating in different, T-dual, backgrounds. In the context of gluon scattering amplitudes the AdS background is non-compact and, therefore, this duality is restricted to the planar limit, in which the string has the topology of a disk. The planar duality maps the AdS background back to itself and the amplitude with disk topology to the Wilson loop. Here, we will generalize this correspondence to double trace amplitudes, for which the holographic dual string has the topology of a cylinder. Similar considerations were used for studying form factors at strong coupling in [21]. Our discussion will not be restricted to the strong coupling limit (where the string description becomes classical). However, only a simplified bosonic version of the worldsheet theory will be considered.
[Figure 4.1: the worldsheet has cylindrical topology and ends on an IR D3 brane close to the Poincaré horizon. It has n ordered gluon vertex operators on one boundary, (31), and m on the other, (32). The cut γ is a curve on the worldsheet that starts at σ 1 on the left boundary, crosses the cylinder and ends at σ̇ 1 on the right boundary.]
Gluon scattering amplitudes are holographically dual to an open string path integral in AdS 5 × S 5 [2]. The open string ends on an IR D3 brane close to the Poincaré horizon of AdS 5 . While for the planar amplitude the string has the topology of a disk, for the double trace amplitude it has the topology of a cylinder. T-duality in a non-compact target space, however, is known to break down beyond the leading disk topology order. This puts the relationship between double trace scattering amplitudes and Wilson loops in question. Indeed, it will soon become evident that double trace scattering amplitudes and Wilson loops in N = 4 SYM theory are not dual to each other. Instead, as in the fishnet model discussed in the previous section, double trace amplitudes can be computed from the correlation function of two Wilson lines in N = 4 SYM theory only once a new periodicity constraint is imposed. In this section this constraint will be derived at the level of the worldsheet path integral for the bosonic string. We leave the generalization of this cylindrical duality to an exact fermionic T-duality for future work [3]. In the rest of the paper we will explain how the constraint can be imposed at the full quantum level of the gauge theory and how it leads to an exact duality between double trace amplitudes and Wilson loops.
We will be working in conformal gauge and parameterizing the Euclidean cylinder by a periodic coordinate σ ∈ [0, 2π] and a coordinate along the cylinder τ ∈ [0, L], where iL/(2π) is the modular parameter of the cylinder, see figure 4.1. The cylinder has two boundaries. A set of n vertex operators is inserted on the boundary at τ = 0, corresponding to a set of ordered gluon asymptotic states, (31). Similarly, on the other boundary, at τ = L, a set of m ordered gluon vertex operators is inserted, (32), where the dot in σ̇ j indicates that it is located on the second boundary of the cylinder. The full double trace amplitude is given by a sum over the two relative orderings of the gluon vertex operators on the two boundaries of the cylinder. Only the ordering for which σ i < σ i+1 and σ̇ j < σ̇ j+1 on these two boundaries will be considered, see figure 4.1. The other ordering is related to this one by a relabelling of the external gluons.
Contrary to the planar case, the total momentum going through each boundary is non-zero,

q = k 1 + k 2 + . . . + k n = −(k n+1 + . . . + k n+m ) ≠ 0 .    (33)

The Euclidean worldsheet action, (34), is the standard bosonic string action in AdS, where z is the radial AdS direction in Poincaré coordinates.
Using the same manipulations as in [22], one can rewrite the contribution to the action of all the vertex operators on one boundary as a boundary term, (35), where c is an arbitrary constant four-vector and σ n+1 = σ 1 . The only difference from the single boundary case is the new term q · x(0, σ 1 ). A similar contribution comes from the other boundary at τ = L, given by −q · x(L, σ̇ 1 ). One can rewrite the sum of these two new contributions as an integral of a total derivative in τ , (36), where σ = γ(τ ) is an arbitrary curve on the cylinder that stretches between the boundaries and obeys γ(0) = σ 1 , γ(L) = σ̇ 1 . The action is invariant under global translations of x, as is evident from the fact that x now enters it only with derivatives. In order to construct the T-dual action we follow Buscher [23] and gauge this translation symmetry. This is done by introducing a worldsheet gauge field A α that transforms under a local translation x → x + ε as A α → A α − ∂ α ε. The extended action, (37), contains two Lagrange multipliers. Here, y(τ, σ) is a vector field Lagrange multiplier that sets the field strength of A α to zero, F = 0. The vector l is another Lagrange multiplier, which ensures that the holonomy of A α around the cylinder vanishes. Letting the curve γ wrap around the cylinder one more time has the effect of shifting l by q. Since F = 0, this holonomy is independent of τ . Together, these two constraints ensure that A α is a flat connection on the cylinder and that the action in (37) is independent of the curve γ(τ ).
Since the connection is flat and x is periodic, the gauge x = 0 can be chosen. In this gauge, the action becomes (38). Integrating over y and l sets A α = −∂ α x̃ , where x̃ is pure gauge. The action then reduces to the original one from equation (34) with x replaced by x̃. Hence, the two actions, S 1 in (34) and S in (38), are equivalent.
In order to construct the T-dual action, A α is integrated out in S. This is done by first integrating the term y · F by parts, moving the derivatives from A α to y. Then the holonomy term ∮ dσ A σ (τ, σ) is evaluated at the τ = L boundary. 9 After an overall rescaling, the action takes the form (39). Note that the field y can be discontinuous on a line along the cylinder without causing the action to diverge. As a result, a new boundary term arises from the integration by parts in σ. In (39) this new boundary term was placed along the curve γ(τ ) that was introduced in (36).
Integrating out A α in the bulk of the cylinder leads to the string action in the T-dual AdS background, ds 2 = (dz̃ 2 + dy 2 )/z̃ 2 , with z̃ = 1/z. The integration of A τ along the curve σ = γ(τ ) gives the periodicity constraint (40) for the T-dual y field. It implies that going around the cylinder changes the value of y by q. Hence, the image of the cylinder in the T-dual AdS is not a cylinder, but its universal cover. In particular, the quantum fluctuations of the string at (z, y) have an image at (z, y + q) and are not independent. Finally, integrating out the boundary values of A σ gives the Dirichlet boundary conditions (41) for the T-dual y coordinate. These conditions imply that the T-dual string stretches between two periodic null polygonal Wilson lines. They are constructed from the ordered momenta {k 1 , k 2 , . . . , k n } and {k n+1 , . . . , k n+m }, respectively. The period of each of these lines is q. The vector c corresponds to a simultaneous translation of the two lines and can be set to zero. The vector l is the separation between the two lines and is being integrated over, see figure 4.2. This integration projects the total momentum flow between the two lines to zero. Under T-duality this momentum is mapped to the winding of the string state on the cylinder, and this projection is the expected T-dual manifestation of the fact that the string state on the amplitude side has zero winding.
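Schematically, and with conventions chosen here purely for illustration (the role of the constant c, the labelling of the insertion points σ i on the τ = 0 boundary, and the 2π period of σ are assumptions of this sketch rather than the precise content of (40) and (41)), the two constraints just described take the form

y(τ, σ + 2π) = y(τ, σ) + q ,
y(0, σ) = x i + c for σ i < σ < σ i+1 , with x i − x i−1 = k i ,

and similarly on the τ = L boundary, where the piecewise constant boundary values trace the second null polygon, displaced from the first by the separation l.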
The periodic string path integral obtained above is equivalent to that of a string in a spacetime with the q direction compactified on a circle of radius q/(2π). Before T-duality the string has momentum number one and winding number zero around the circle. After T-duality, the string ends on two closed null polygons that wrap around the circle with edges (k 1 , . . . , k n ) and (k n+1 , . . . , k n+m ). The string has winding number fixed to one and momentum number zero around the circle. The latter projection comes about due to the integration over the component of l in the direction of q.
[Figure 4.2: the T-dual of a string with cylindrical topology that is holographically dual to a double trace amplitude. It ends at the boundary of AdS 5 along two periodic lines composed of the null external gluon momenta. The worldsheet is subject to a quantum periodicity constraint that restricts it to be periodic in the bulk as well. The period is equal to the total momentum flow through the cylinder, q in (33). The separation between the two lines, l, is T-dual to the momentum flow around the cylinder and is being integrated over.]
There is another interesting way of thinking about the integration over l. The vectors q and l span a two-dimensional space. An orthogonal basis for this space is {q, l ⊥ }, where l ⊥ = l − q (l · q/q 2 ). The quantity i|l ⊥ |/|q| can be thought of as the spacetime modular parameter of the cylinder. At the semiclassical level the Virasoro constraint relates it to the worldsheet modular parameter iL/(2π). Hence, the integration over l can be converted into an integration over the worldsheet modular parameter. 10 At strong coupling the periodic string path integral is dominated by its minimal surface area saddle point. Because the boundary conditions are periodic, so is the minimal surface. Hence, the periodicity constraint (40) is automatically satisfied. This minimal surface area can be calculated using a simple generalization of the techniques of [9,21,24,25] and leads to a periodic Y-system. It will be reported on in [26].
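As a trivial numerical illustration of this orthogonal decomposition (the vectors below are arbitrary, and a Euclidean inner product is used in place of the Minkowski one, purely for simplicity):

import numpy as np

q = np.array([1.0, 0.2, 0.0, 0.0])    # total momentum through the cylinder (illustrative)
l = np.array([0.4, -0.3, 0.5, 0.1])   # separation between the two lines (illustrative)

l_perp = l - q * (np.dot(l, q) / np.dot(q, q))
print(np.dot(l_perp, q))                           # ~0: l_perp is orthogonal to q
print(np.linalg.norm(l_perp) / np.linalg.norm(q))  # |l_perp|/|q|, entering the spacetime modular parameter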
One-loop duality test
In this section, the duality will be tested explicitly for the four-point MHV amplitude at leading order in perturbation theory. All tree-level amplitudes are single trace, so the double trace amplitude receives its first perturbative contribution at one-loop order. The cutting procedure strips away the loop integration and thereby reduces the number of loops from one to zero. On the other side of the duality, one has the cylinder expectation value of a Wilson lines correlator with eight η insertions. Similarly to the N 2 MHV Wilson loop, this object starts at tree level. Specifically, the duality tested in this section is equation (42), in which the tree-level cylindrically cut amplitude A tree 2,2 (l) is given by (43). The left hand side of the duality equation (42) is evaluated in section 5.1 and the right hand side in section 5.2.
Cylindrically cut four point double trace amplitude at Born level
The four-point double trace amplitude was studied at one-loop order in [16,27]. It can be expressed as a sum of three massless scalar box integrals, (44), where A tree is the Parke-Taylor tree-level partial amplitude. In (44) the inner and the outer faces of the box represent the two traces. The box integrals only depend on the distribution of the external momenta on the cusps of the box. Next, this expression is rewritten in terms of the cylindrically cut amplitude. For the special case considered here there are only two particles in each trace and, therefore, no relative ordering and no orientation on the cylinder. As opposed to the fishnet model, N = 4 SYM is CPT invariant and its 't Hooft string is orientable. Hence, when taking the cylindrical cut, one has to average over the two orientations. Consequently, each of the three box diagrams in (44) has two different cylindrical cuts. Appendix A expands on this point. The cylindrically cut amplitude A tree 2,2 (l) is only defined up to a shift of l by q, (5). One representative of this class of amplitudes is given in (48), where the blue dashed line represents the cut (1). When drawn on the cylinder, the six cut boxes in equation (48) can be written out explicitly. The cylindrically cut double trace amplitude M MHV,tree 2,2 (l) is obtained by summing (48) over all integer shifts, l → l + a q, and stripping away the factor in (43). These infinite sums can be evaluated analytically in Mathematica.
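Such image sums can indeed be done in closed form. As a self-contained illustration in Python (using mpmath, and a single propagator-like summand as a stand-in for the actual cut boxes of (48)), the classic identity behind sums of this type can be checked numerically:

import mpmath as mp

x = mp.mpf("0.37")   # stand-in for a dimensionless ratio built from l and q
lhs = mp.nsum(lambda a: 1 / (x + a)**2, [-mp.inf, mp.inf])   # sum over all integer shifts
rhs = mp.pi**2 / mp.sin(mp.pi * x)**2                        # its known closed form
print(lhs, rhs)   # the two agree up to numerical error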
It is important to point out that the reduction procedure to scalar box diagrams that was used in [27] to derive the representation of the one-loop amplitude (44) does not involve integration by parts. Therefore, this procedure commutes with the cylindrical cutting prescription. Beyond one loop, however, this is no longer the case and one has to first take the cylinder cut of the Feynman diagram and only then apply reduction procedures to scalar integrals. 11
Wilson lines correlator at tree level
Next, the result obtained above will be reproduced on the Wilson lines correlator side of the duality. First, the supercomponents of the Wilson lines will be discussed. These components determine the vertex and edge insertions along the periodic lines (18). This part of the calculation is not limited to perturbation theory and therefore is valid to all orders.
We consider the four-point MHV amplitude (43). All the supercomponents of MHV amplitudes are related by supersymmetry and are accounted for by δ 8 (Q) in (43). Therefore, all the η̄'s inside W 2,2 can be set to zero. This implies that all the cusps of each line share the same supercoordinate. The superseparation of the two lines, θ = θ̇ 2 − θ 2 , still has to be integrated over, (52). Due to the dual supersymmetry, the Wilson lines correlator is invariant under a simultaneous shift of the θ i 's and the θ̇ j 's. This symmetry is trivialized by the map between the amplitude and the Wilson lines variables. It can be used to set either the θ i 's or the θ̇ j 's to zero. For example, for θ̇ j = 0 one has θ i = −θ, and one finds a simple expression for the η's of the left line, η i = ⟨i θ⟩. For this choice the θ integration (52) takes the form (53), where W 2,2 (η 1 , η 2 ; η 3 , η 4 | l) is a function of the η variables and W 2,2 (θ 1 , θ 2 ; θ 3 , θ 4 | l) is a function of the θ's. The integration over θ can now be converted into an integration over η 1 and η 2 . For a generic kinematical configuration, λ 1 and λ 2 are independent and hence can be used as a basis for integrating over θ A=1,2,3,4 . In the special case considered above (θ 1 = θ 2 = −θ, θ̇ = 0) this change of variables leads to the parametrization (56). Alternatively, one could set θ i = 0 and decompose θ̇ j = θ in the basis {λ 3 , λ 4 }. Another possible choice is a hybrid parametrization for which θ̇ A=1,2 = 0 and θ A=3,4 = 0. This choice will turn out to be very convenient later on. It leads to the parametrization (58) of the Wilson loop. For this parametrization the factors outside of the integral in (58) cancel the Parke-Taylor-like factor in (43), and the duality (42) takes the form (59). So far, the discussion has been valid at any loop order. We now focus on the calculation of the cylinder Wilson lines correlator at leading order in perturbation theory. Each choice of parametrization requires an independent calculation, and all of them give the same result in the end. However, a convenient choice of the Wilson lines parametrization can simplify the calculation dramatically. For example, the choice (56) includes the contribution that is shown in figure 5.1.a. For this contribution, two intertwined integrations of a gauge field along the k 4 edge make it difficult to isolate the contribution of a single block. On the other hand, for the hybrid parametrization (59) all insertions are scalars and the blocks decouple. The calculation in this case is easier to perform because the result of each individual scalar contraction can be extracted from known single trace amplitudes.
In the hybrid configuration (59) one finds two η's on each edge. The left line has only R-charge components η A i with A = 1 and A = 2, while the right line has only A = 3 and A = 4 components. As follows from the detailed analysis of the vertex and edge operators in [5], only scalar edge and vertex insertions contribute to this configuration. The two scalars on each side appear in three types of configurations that are shown in figure 5.2. We will refer to them as single cusp, double cusp and edge configurations, respectively. To calculate the cylinder Wilson line correlator the scalars on the left line have to be contracted with the scalars on the right line in all possible planar periodic ways.
Every such contraction is repeated with all possible integer period shifts of one line with respect to the other. For example, figures 5.1.b and 5.1.c show two different edge-to-edge contractions that are related to each other by a shift of the right line by a single period. This property allows us to define a building block B({Z i }, {Ż j }), which includes a single representative of every type of contraction. This object is not uniquely defined: any representative from the infinite family of contractions that are related by a shift l → l + a q can be used. In terms of B, the cylinder Wilson line correlator is given by the sum over period shifts, (60). The building block B is, in a sense, the dual space analog of the cylindrically cut amplitude A in (5). However, as opposed to the fishnet theory discussed in section 3, where the two sums were matched on a diagram-by-diagram basis, here this seems to be just an analogy. We could not find a choice of representative B and a cylindrically cut amplitude A such that the two are equal prior to the summations in (60) and (5). The advantage of working with the hybrid configuration is that any specific contraction of the scalars between the two lines, such as the edge one in figure 5.1.b, factors into a product of two planar single trace contractions. Therefore, the periodicity of the Wilson lines configuration becomes irrelevant, as it is reduced to a product of expectation values of closed polygons. This allows the amplitude to be expressed through known tree-level single trace amplitudes. Take, for example, any edge-to-edge contraction, represented by the blue dashed lines in figures 5.1.b and 5.1.c. It is equal to the product of two independent scalar propagators ending on two separate edges. Consider one such scalar contraction, say between the i'th edge of the left line, whose twistors are labelled as {. . . , Z i−1 , Z i , Z i+1 , . . . }, and the j'th edge of the right line, whose twistors are labelled as {. . . , Ż j−1 , Ż j , Ż j+1 , . . . }. This single edge-to-edge contraction can be evaluated as the (i, i, j, j) component of the hexagon NMHV amplitude with a suitable choice of six twistors. The other two types of single scalar contractions are cusp-to-cusp and cusp-to-edge. These two can likewise be extracted from NMHV hexagons. All the contributions to B can be arranged into nine groups corresponding to all different pairings of the single cusp, double cusp and edge insertions between the lines. Any contraction involving the single and double cusp configurations, shown in the left three diagrams in figure 5.2, comes with an extra factor of −1, which appears from rearranging the Grassmann η parameters on the edges. Additionally, the single cusp configuration also generates a factor of 2, due to the fact that there are two different distributions of η parameters that contribute to it.
The first class of contractions is a single cusp contracted with a single cusp. One choice of a representative for this class is the contraction C(i, j) given in (63). 12
Finally, a representative for the contractions of edge insertions on both sides is constructed in the same way. Combining all the pieces together, one finds the building block B, (69). The final step is to plug B from (69) into the sum in (60). We did not perform this sum analytically. Instead, we truncated it after a certain number of blocks and evaluated the result numerically for a generic, randomly generated geometry. Summing over 10,000 blocks takes a few minutes on a standard laptop and gives a match with the cylindrically cut amplitude (42) up to the 18th digit (that is, an error of 10^−16 %). It would be interesting to prove this match analytically.
BCFW recursion relation at Born level
To establish the duality between the cylindrically cut one-loop double trace amplitude and the cylinder Wilson lines correlator at Born level, we use the BCFW on-shell recursion relation. Since the BCFW recursion relation was not established before for the cylindrically cut amplitude, it will have to be studied on both sides of the duality. The aim is to prove that M tree n,m (l) = W tree n,m (l) by showing that both sides satisfy the same recursion relation with the same initial conditions. We begin by studying the poles in l of both sides. These poles can be thought of as a specific type of Born-level unitarity cut. First, we consider the pole at l 2 = 0 of M tree 2,2 (l) and W tree 2,2 (l) and then generalize.
The l 2 = 0 pole
At l 2 = 0 the cylindrically cut one-loop double trace amplitude has a multi-particle factorization pole. Kinematically, it corresponds to a single propagator carrying momentum l around the cylinder going on-shell. In this limit (48) factorizes, with the conventions η̄ −l = −η̄ l and |−l⟩ = |l⟩. Note that l drops out of the δ 8 (Q) on the r.h.s. Hence, the residue can also be written in terms of M, the amplitude in the η̄ variables. One can now switch to the η variables. Since only η̄ l = −η̄ −l are non-zero in M NMHV , it is possible to set η l = η −l = η 3 = η 4 = 0. For this choice one obtains the residue in terms of M, the amplitude in the η variables.
Similarly to the cylindrically cut amplitude, the tree-level Wilson lines correlator W 2,2 in (56) has a pole at l 2 = 0. At this pole the cusps at x 2 and at ẋ 2 become null separated. The factorization of the Wilson lines correlator happens inside one building block, and its dynamics are therefore the same as in the single trace Wilson loop case that has been worked out in detail in [5]. In this kinematical limit the correlator factorizes, where W 6 is the single trace hexagon super Wilson loop, W 2,2 has been defined in (53), and the arguments are super momentum twistors (17). We conclude that both M tree 2,2 (l) and W tree 2,2 (l) have a pole at l 2 = 0 with the same residue, (82). The same manipulations directly apply to any kinematical point where a cusp on the left line becomes null separated from a cusp on the right line. These are located at (l + a q) 2 = 0, (l + k 2 + a q) 2 = 0, (l − k 4 + a q) 2 = 0 and (l + k 2 − k 4 + a q) 2 = 0. In order to generalize (82) to these poles, one simply has to redefine l accordingly. It is also straightforward to make this generalization for any number of particles in the traces. For example, for n particles in the first trace, instead of {θ 1 , θ 2 } in (75) and {η 1 , (⟨2 l⟩/⟨1 l⟩) η 1 } in (81), one has {θ 1 , . . . , θ n } and {η 1 , (⟨2 l⟩/⟨1 l⟩) η 1 , . . . , (⟨n l⟩/⟨1 l⟩) η 1 }, respectively.
BCFW recursion relation at Born level
We choose the BCFW deformation (83), where n is the number of particles in the first trace and the superscript represents the block shift as in (10). Clearly, the momentum flow through the cylinder, q, is independent of this deformation and, therefore, is not a function of z. The momentum l, as defined in (11), is also independent of z. Of all the Wilson lines' cusps, only x 1 → x̂ 1 (z) is a function of z. As a result, distances such as l − k 1 = ẋ m − x 1 do depend on z.
Any representative of the unsummed cylindrically cut one-loop amplitude A(l) is a manifestly rational function. Shifting l by q shifts all the poles of A(l). Hence, the summation over windings in (5) produces a meromorphic function of z with an infinite set of well separated poles. A dispersion integral can be written for such a function, with the contour encircling the pole at z = 0 but no other pole. Similarly, the tree-level Wilson lines correlator can be written as an infinite sum of rational functions, with l shifted by an integer multiple of q. Such a shifted block decomposition of W 2,2 has been worked out explicitly in section 5. It is a meromorphic function of z with an infinite set of well separated poles and, hence, satisfies the analogous dispersion relation (85). This integral can be evaluated by summing over the residues at the poles of W tree n,m (l; z); a toy illustration of this residue evaluation is sketched at the end of this section. There are two types of poles. First, there are the factorization poles for which the cusp x̂ 1 on the left Wilson line becomes null separated from a cusp on the right Wilson line, see figure 6.1.a. These poles are located at (86). The match of the l 2 = 0 pole (82), shown in section 6.1, generalizes to all the poles in (86). On the amplitude side of the duality, these correspond to the kinematical points where l + k 2 + a q or l + k 2 − k 4 + a q goes on-shell. The second type are the poles that correspond to two cusps on the same Wilson line becoming null separated, see figures 6.1.b and 6.1.c. These poles can be divided into three groups as follows.
• Two cusps on the same Wilson line that are more than one block away from each other become null separated, that is, (x̂ 1 − x 1+r ) 2 → 0 with |r| > n and r not a multiple of n. Such factorization poles are absent on the amplitude side. On the Wilson lines side they are inconsistent with planarity and the periodicity constraint. That is, the color contractions of such poles cross one of their periodic images, making their contribution non-planar.
• The sum of all momenta in one trace goes on shell, that is, q 2 → 0. Since q = x̂ 1 − x̂ 1+n is independent of z, this pole does not contribute to the integral (85). For the four-point amplitude one may naively expect the pole at z = ∞ to be of this type. However, since q is independent of z, such a pole at infinity cannot be present. We have confirmed this by an explicit calculation, see appendix B. Note that individual building blocks of the Wilson lines correlator with n = m = 2 do have poles at z = ∞, but these cancel out after summation over all possible shifts of l by q.
• Finally, there are the factorization poles for which two cusps that fit inside a single building block become null separated, see figures 6.1.b and 6.1.c. Such factorizations are localized inside each block separately and, therefore, are insensitive to the periodic arrangement. Hence, the factorization on these poles works in the same way as in the single trace case and the matching between the poles is automatic.
Summing up these two types of residues results in a recursive representation, (88), of the cylindrically cut amplitude W n,m (1, . . . , n; n + 1, . . . , n + m; l) in terms of lower-point objects, such as W n−1,m (2, 3, 4, . . . , n; n + 1, . . . , n + m; l), where W is the super Wilson loop from [5] or its dual single trace amplitude. The first line on the right hand side of (88) corresponds to the factorization pictured in figure 6.1.a, the second and third lines to figure 6.1.c and the last line to figure 6.1.b. We checked this relation explicitly in Mathematica for the case of the four-point correlation function W 2,2 . The l 2 = 0 pole of W 2,2 has been specifically examined in section 6.1. The same recursion relation holds for M, with the single trace amplitude M n instead of W n (the two are, of course, equal). One can use it to reduce the number of particles (edges) in each of the traces (Wilson lines) down to n = m = 2, for which the duality has been checked explicitly. This process cannot be continued to the point where there is only one particle (edge) in one of the two traces (lines) because q is independent of the BCFW deformation (83). 13

13 One may imagine a more complex deformation for which q is a function of z and use it to go all the way to the point where there is only one particle (edge) in one of the two traces (lines). On the amplitude side such an amplitude is proportional to the trace of a single SU(N) generator and, therefore, vanishes (one can only exchange an on-shell color singlet that is dual to a closed string state). On the Wilson lines side one would find an infinite null line. We expect that any propagator ending on it will result in an integral over a total derivative. The boundaries of such integrals are infinitely far away and therefore the integral vanishes.
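The evaluation of dispersion integrals like (85) by summing residues rests on an elementary fact about rational functions that vanish at infinity. The following toy Python check (the pole positions and residues below are arbitrary illustrative numbers, unrelated to the actual correlator) makes that step concrete:

# For f(z) = sum_k c_k/(z - p_k), which vanishes as z -> infinity, the residues of
# f(z)/z sum to zero, so the value at the origin is recovered from the other poles:
# f(0) = - sum_k Res[f(z)/z, z = p_k] = - sum_k c_k / p_k.
poles = [1.5, -2.0, 0.7 + 1.0j, 0.7 - 1.0j]
residues = [0.3, -1.1, 0.5 + 0.2j, 0.5 - 0.2j]

def f(z):
    return sum(c / (z - p) for c, p in zip(residues, poles))

lhs = f(0.0)
rhs = -sum(c / p for c, p in zip(residues, poles))
print(lhs, rhs)   # the two agree, mirroring how the z = 0 residue is reconstructed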
The cylinder loop integrand
The loop integrand has been defined for planar amplitudes [8] and their dual Wilson loops [5]. It is a rational function which upon integration and regularization gives the loop amplitude. It is important to point out that it is fully determined by its poles and asymptotic behaviour, which allows one to find a recursion relation that enables a systematic construction of the integrand at any loop order [5,8].
We introduce a new class of objects that, we claim, are the natural generalization of the planar integrands, henceforth referred to as cylinder integrands. Upon integration and regularization, the cylinder integrands give the cylindrically cut double trace amplitude M n,m (l) and its dual cylinder Wilson lines correlator W n,m (l). As opposed to the planar case, a cylinder integrand is not a rational function. Instead, it is an infinite sum of rational functions that arises from the universal cover of the cylinder. Similarly to the planar loop integrand, it is fully determined by its poles and asymptotic behaviour. In the next section both cylinder integrands, the amplitude one and the Wilson lines one, will be shown to satisfy the same recursion relation and hence to be equal.
The existence of a loop integrand is made possible by the fact that both the cylindrically cut double trace amplitude M n,m (l) and the cylinder Wilson lines correlator W n,m (l) are effectively planar objects. Therefore, there is a meaningful way of identifying integrand loop momenta of different Feynman diagrams. In fact, the cylindrical loop integrands have already made an appearance in the fishnet model, see (28). The same construction that was applied to a single Feynman diagram in the fishnet model generalizes to the corresponding sum of diagrams in N = 4 SYM theory.
The cylinder integrand of the Wilson lines correlator
The cylindrical Wilson lines integrand can be constructed by following the procedure in [5] in a straightforward way. The Wilson lines correlator is computed using chiral Lagrangian insertions with the periodicity constraint imposed on them. Every chiral Lagrangian insertion L on-shell (y) comes with infinitely many images shifted by an integer multiple of q, which are identified with it, L on-shell (y + a q) = L on-shell (y).
These images come about as a diagram in the theory compactified on a circle is lifted to a periodic flat space diagram, as described in section 2. At L-loop order one finds L such Lagrangian insertions together with their periodic images. Similarly to l in equation (4), the dual coordinates are only defined modulo a shift, y i ≃ y i + q. Finally, these Lagrangian insertions are contracted at tree level, keeping only planar diagrams that respect the quantum periodicity constraint and taking the contribution of one period. The cylinder integrand is then defined as the sum of all such periodic diagrams for a fixed set of points, {y 1 , . . . , y L }. Shifting any of these points by one period results in the same cylinder integrand. In other words, the set {y 1 , y 2 , . . . , y L } and the set {y 1 + q, y 2 , . . . , y L } correspond to the same cylinder integrand.
Each specific periodic contraction, such as the one in figure 7.1, contains a finite number of propagators and is, therefore, a rational function. The cylinder integrand, on the other hand, is not a rational function. It has poles corresponding to Lagrangian insertion points becoming null separated from cusps of the periodic Wilson lines, 1/(y − x i − a q) 2 . Since a periodic null polygonal Wilson line has infinitely many cusps that are separated by the periodic shift and each insertion point has infinitely many images, the integrand has infinitely many poles at 1/(y − x i ) 2 , 1/(y − x i − q) 2 , ... . These poles are related by a constant periodic shift and are therefore well separated.
We denote the cylinder integrand of the cylinder Wilson lines correlator at L-loop order by W n,m;L (k 1 , . . . , k n ; k n+1 , . . . , k n+m ; l; {y i } L ) .
When discussing the loop integrand and, in particular, its dual conformal transformations, it is useful to change variables from the chiral Lagrangian insertion points {y i } to their corresponding lines in twistor space, (A i B i ). It is important to point out that when translating from d 4 y i to d 4 Z A d 4 Z B /vol[GL(2)] one has to absorb the Jacobian ⟨A B⟩ 4 = ⟨λ A λ B ⟩ 4 into the definition of the integrand, see [5,8] for details. The periodic shift by q acts in twistor space as an SL(4) matrix P(q). The corresponding cylinder integrand is denoted by W n,m;L (1, . . . , n; n + 1, . . . , n + m; l; {(A i B i )} L ). Here, 1, . . . , n and n + 1, . . . , n + m represent the twistors Z 1 , . . . , Z n and Ż 1 , . . . , Ż m , respectively. Under the periodic shift, they transform with the matrix P(q). Shifting all the Z i 's or all the Ż j 's by one period results in the same integrand. Let us consider, for example, the cylindrically cut two-loop amplitude in the fishnet model (30). The corresponding integrand, W fishnet 2,2;2 (k 1 , k 2 ; k 3 , k 4 ; l; {y 1 , y 2 }), is given by the sum (100), see figure 3.5, where the factor of 1/2! comes from the symmetrization in y 1 and y 2 . Here, the sum over a and b accounts for all shifts of y 1 and y 2 by an integer multiple of q. The sum over c corresponds to all shifts of l by an integer multiple of q. The cylindrically cut two-loop amplitude is then obtained by integrating this integrand over y 1 and y 2 . Note that even though the cylinder integrand (100) is not a rational function, it is determined by summing over images of a rational function, which can be chosen to be any term in the sum (100).
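As a purely structural illustration of the nested sums in (100) (the rational function below is a toy stand-in, not the actual fishnet integrand, and the kinematics are arbitrary), the following Python sketch builds a toy cylinder integrand by summing over integer shifts of y 1 , y 2 and l, and checks numerically that shifting y 1 by one period gives back the same value:

import numpy as np
from itertools import product

q  = np.array([1.0, 0.0, 0.0, 0.0])    # shift period (illustrative)
x1 = np.array([0.1, 0.3, -0.2, 0.5])   # illustrative "cusp" positions
x2 = np.array([-0.4, 0.2, 0.6, -0.1])

def term(y1, y2, l):
    # toy rational stand-in for a single periodic contraction
    d2 = lambda v: np.dot(v, v) + 1.0   # regulated Euclidean "propagator" denominator
    return 1.0 / (d2(y1 - x1) * d2(y1 - y2) * d2(y2 - x2) * d2(y1 - l) * d2(y2 - l))

def cylinder_integrand(y1, y2, l, cutoff=20):
    total = 0.0
    for a, b, c in product(range(-cutoff, cutoff + 1), repeat=3):
        total += term(y1 + a * q, y2 + b * q, l + c * q)
    return total

y1 = np.array([0.2, -0.1, 0.4, 0.0])
y2 = np.array([-0.3, 0.5, 0.1, 0.2])
l  = np.array([0.7, 0.2, -0.5, 0.3])
print(cylinder_integrand(y1, y2, l))
print(cylinder_integrand(y1 + q, y2, l))   # agrees up to the truncation error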
In order to construct the recursion relation in section 8, a new object has to be introduced. It is constructed by summing over all periodic shifts of the Lagrangian insertion points of the L-loop single trace integrand W n , as in (102), or, in terms of twistors, (103).
The cylinder integrand of the cylindrically cut double trace amplitude
Let us consider the cylinder cut of a leading color Feynman diagram that contributes to the double trace amplitude, as defined in (1) and (2). If there are more than two particles in each trace, there are two relative orderings of the traces. The cylinder integrand is defined independently for each of these by following the same procedure as in the planar case. After a change of variables, the cylindrically cut diagrams are written in terms of the dual coordinates {x i } n i=1 instead of the external momenta, see (9), (11). Similarly, a set of loop integration points, {y i } L i=1 , is introduced. The dual y i variables correspond to the faces of the cut leading color diagrams. They are defined such that the propagator between the faces y j and y k has momentum P j k = y j − y k . Similarly, a propagator on the boundary of the leading color diagram carries momentum P boundary j k = x j − y k . After going around the cylinder and returning to the same face, the corresponding dual y-coordinate is shifted by q. Hence, the dual coordinates live on the universal cover of the cylinder, see figure 7.2. For any diagram, the set of points {y i } L i=1 depends on the choice of the cut. To obtain a cylinder integrand that is cut independent and hence physical, the integrand must be symmetrized in {y i } L i=1 and summed over all possible shifts of each of the points {y i } L i=1 and {ẋ j } m j=1 by an integer multiple of q. The summation over integer shifts of the integration points can be viewed as performing part of the loop integration: shifting a point y i → y i + q amounts to shifting the corresponding loop momentum by q. The resulting amplitude cylinder integrand is equal to (103) because the single trace amplitude and Wilson loop integrands are identical.
BCFW recursion relation at loop level
It has been shown in [5] that the Wilson loop integrand satisfies the same recursion relation as the loop integrand of the planar amplitude [8] and that the two are equal. To promote this into a proof of the duality between planar amplitudes and Wilson loops, one has to integrate the integrand, construct a regularization independent ratio and match the two. Here, we will generalize the integrand construction of [5] to the double trace duality. The arguments of [5] apply almost unchanged to the cylinder integrands. Namely, the cylinder integrands of the cylindrically cut amplitude and of the cylinder Wilson lines correlator satisfy the same recursion relations, which determine them uniquely. At tree level, M n,m (l) and W n,m (l) satisfy the same BCFW recursion relation, as was found in section 6. We now generalize this recursion relation to loop level, starting with the periodic Wilson lines picture.
The Wilson lines recursion relation
Borrowing the arguments of [5] is straightforward. As explained in section 7.1, the chiral Lagrangian insertions are used to construct the Wilson lines cylinder integrand. It is an infinite sum of rational functions of the insertion points and external data with well separated poles. Therefore, the BCFW deformation prescription used at tree level is still applicable,
\[
  W_{n,m;L}(l) \;=\; \oint \frac{dz}{2\pi i\, z}\; W_{n,m;L}(l; z) ,
\]
where the BCFW deformed integrand W_{n,m;L}(l; z) is evaluated on the deformed external supertwistors (83).
The integral is evaluated by summing the residues of all the poles of W n,m;L (l; z). The two types of contributions discussed in section 6.2 remain unscathed. They produce either single trace integrands or products of lower-point double trace integrands and single trace ones, see figure 6.1. At loop level the Lagrangian insertions have to be distributed between the two integrands in each product in all possible ways. Every single trace integrand with Lagrangian insertions has to be summed over all its periodic images. Hence, these single trace loop integrand factors are all of the form (103).
The new feature of the recursion relation at loop level is the possibility of the deformed cusp x̂_1 becoming null separated from a Lagrangian insertion point y_i, and the same holds for all the periodic images of this cusp. These contributions are referred to as single-cut terms and coincide with the forward limit of lower-loop higher-point cylinder integrands. All the images of this forward limit are well separated in dual coordinate space, see figure 8.1.^14 Hence, the analysis of [5] is still applicable with no modifications. It leads to the recursion relation (108), whose right hand side is built from lower-point cylinder integrands evaluated on shifted sets of twistors, schematically of the form ( . . . , 2, . . . , i; n+1, . . . , n+m; l; {AB}_R) and ( . . . , n; n+1, . . . , n+m; l; . . . ), where the shifted momentum supertwistors are given by (89). By repeatedly using the recursion relation (108), the double trace integrand can be reduced to a linear combination of products of single trace integrands and W_{2,2}. In a sense, the perturbative double trace amplitude can be constructed by gluing together dual conformal invariant single trace objects. It turns out that a similar structure is also present at finite coupling: there, one can compute the cylindrically cut double trace amplitude using the dual conformal invariant single trace pentagon transitions, see the discussion section and [26].

^14 Here we take q to be a generic non-null momentum. Hence, the forward limit cannot align with q.
As opposed to the planar integrand recursion relation, the right hand side of (108) contains an infinite sum. The terms in this sum are all related by the periodicity constraint and, therefore, they can be generated by a single term in the sum. In other words, the cylinder integrand is generated by summing over all the periodic images of a single rational function, see (102) for example. However, there is no unique way of choosing that rational function, for the same reason that a cylindrically cut Feynman diagram is not well defined prior to summation over images, see (5). The tree-level cylinder integrands of the cylindrically cut amplitude and of the Wilson lines correlator have been shown to be equal in section 6. Hence, we conclude that the loop level cylinder integrands of the cylindrically cut double trace amplitude and the cylinder Wilson lines correlator are the same.
The role of broken dual superconformal symmetry
The planar S-matrix of N = 4 SYM and its dual description in terms of polygonal Wilson loops possess a large number of symmetries. Some of these are anomalous, but the anomaly is well understood and under control [7,30]. These symmetries are so powerful that they fix the result uniquely for any value of the cusp anomalous dimension [7]. It is therefore useful to understand which of these symmetries are preserved when considering the double trace amplitude. Let us consider, for example, the dual conformal generators. Their action on the single trace amplitude is sensitive to the ordering of the external particles in the trace (it is a level-one generator of the Yangian algebra). Hence, the generalization of this type of symmetry to the double trace amplitude is quite interesting. Specifically, in this section we focus on extending the dual superconformal symmetry. Other symmetries of the planar S-matrix can be similarly extended to symmetries of the cylinder Wilson lines correlator and will not be discussed here.
The dual conformal generators act locally in dual coordinate space. That is, they are represented by a sum over generators that act on a single vertex of the polygon or a single dual momentum supertwistor. Therefore, the map between the cylindrically cut double trace amplitude and the Wilson lines correlator gives a generalization of dual conformal transformations: two periodic Wilson lines can be viewed as if they were a single infinite Wilson loop acted on with the standard dual conformal generators. The single trace Wilson loop is, of course, invariant under such transformations (up to the well understood dual conformal anomaly localized at the cusps). One has to keep in mind, however, that in the double trace case the Wilson lines correlator is subject to the quantum periodicity constraint P. This constraint is not invariant under dual conformal transformations. Instead, under a dual conformal transformation K it transforms into a new "twisted" periodicity constraint,
\[
  P \;\to\; \widetilde P = K \cdot P \cdot K^{-1} . \qquad (111)
\]
As a result, dual conformal transformations map one periodic Wilson lines correlator to a new Wilson lines correlator that is subject to the twisted periodicity constraint P̃. We will argue that this transformation is a symmetry of the cylinder Wilson lines correlators. The original cylinder Wilson lines correlator and the twisted one correspond to the same double trace amplitude. In other words, this symmetry associates different periodic Wilson lines correlators with the same double trace amplitude and, in this sense, can be thought of as a sort of gauge symmetry of the amplitude rather than a global symmetry. Similarly to the planar case, we expect this set of symmetries to uniquely determine the cylinder correlator for any value of the cusp anomalous dimension.
Wilson lines correlators with twisted periodicity
The definition of the cylinder Wilson lines correlator (15) contains an integration over θ, the superseparation between the lines (15). Integrating over it extracts a θ component of the Wilson lines correlator (18). As in the case of the single trace Wilson loop, the dual conformal invariant objects are the η components of the Wilson lines correlator. The η and the θ components are related by a simple Jacobian, see section 5.2.^15 For example, changing variables from the eight θ's to η_{A i} and η_{A k} results in the expression (112) for the θ integration in (7), where W_{n,m} is a function of the η's and the integration over η_i and η_k amounts to extracting a specific component of it, see section 5.2. It is a function of the supertwistors {Z_i}_{i=1}^{n}, {Ż_j}_{j=1}^{m} and the superperiodicity constraint, the 6 × 6 supermatrix whose upper 4 × 4 block is P and whose remaining entries involve Q, the total supercharge going through the cylinder (13); a specific realization is given in (114). Similarly to the cylinder Wilson lines correlator in (113), the corresponding integrand will also be denoted by W_{n,m}, with added dependence on the loop integration points (115). It is related to the loop integrand discussed in section 7 by the same relation as in (112). The cylinder Wilson lines correlator and its integrand can now be generalized to "twisted" ones. Under a dual superconformal transformation K the twistors and the periodicity constraint P transform to their twisted counterparts (116). The twisted cylinder Wilson lines correlator and its integrand are defined as their untwisted versions, with the periodicity constraint P replaced by P̃. This implies that the periodic images of any external twistor and any twistor that parametrizes a chiral Lagrangian insertion point are given by Z^{[a]} = P̃^a · Z.
We conjecture that the cylinder integrand (115) is invariant under the transformations (116); this is the content of (117). For example, the right hand side of (102) only depends on four-brackets and is, therefore, manifestly invariant under the dual conformal transformations (116). It coincides with the two-loop cylinder integrand W^fishnet_{2,2;2}(Z_1, Z_2; Ż_3, Ż_4; {A,B}_2; P) in the fishnet model. In general, for (117) to hold true, all factors of the form ⟨λ_A λ_B⟩ must cancel. These spinor helicity two-brackets can be written as ⟨λ_A λ_B⟩ = ⟨Z_A Z_B I_∞⟩, where (I_∞)_{KL} is the so-called infinity twistor, a block diagonal matrix whose only non-zero diagonal block is ε_{ab}, see for example [31]. It is important to point out that the left and right hand sides of (117) are to be computed with the same infinity twistor.^16 A violation of dual conformal symmetry can be characterised by an explicit dependence on the infinity twistor.
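For concreteness, here is the bracket relation just quoted, written out in one common convention (a sketch on our part; the placement of the ε block inside I_∞ depends on conventions, see [31]):

% Two-bracket as a four-bracket with the infinity twistor; only the epsilon block is non-zero.
\[
  \langle \lambda_A \lambda_B \rangle \;=\; \langle Z_A\, Z_B\, I_\infty \rangle ,
  \qquad
  (I_\infty)_{KL} \;=\;
  \begin{pmatrix}
    0 & 0 \\
    0 & \epsilon_{ab}
  \end{pmatrix} .
\]
% Explicit dependence of an expression on the infinity twistor signals a breaking of dual conformal symmetry.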
Any specific realization of the matrix P, such as the one in (114), can be thought of as fixing a gauge. Different gauges are related by dual superconformal transformations (116). The non-trivial statement in equation (117) is that it relates the integrands of two different cylinder Wilson lines correlators that differ both in their geometries and in the periodicity constraint imposed on them. The similarity to gauge theories can be further illustrated by the following argument. Given a specific form of P, such as the one in (114), one can promote any Lorentz invariant function into a function that is invariant under the dual conformal transformation (116). A similar situation arises for any gauge symmetry: any observable in a gauge-fixed form of the theory can be promoted to a gauge invariant observable in the unfixed theory. For example, the ratio of two-brackets ⟨i i+1⟩² / ⟨k k+1⟩² depends on the infinity twistor and is therefore not dual conformal invariant. However, provided that the gauge has already been fixed by choosing P to be the one in (114), this ratio can be promoted to a dual conformal invariant ratio. Similarly to the cylinder integrand, we conjecture that the cylinder Wilson lines correlator W_{n,m} in (113) is invariant under dual conformal transformations, up to the dual conformal anomaly [30]. Considering an infinitesimal dual conformal transformation results in the anomaly equation (119), where W^finite_{n,m} is the regulator-independent part of the Wilson lines correlator W_{n,m} and the subscript α labels the supercomponents. This equation implies that the dual conformal invariance is only broken by local anomalies at the cusps. These anomalies are insensitive to the periodicity constraint, which only affects well-separated points. Equation (119) can be thought of as a gauge anomaly. It is therefore useful to consider anomaly free, and hence dual conformal invariant, ratios instead of W_{n,m}, see [9,26]. Such ratios only depend on 3M − 7 independent conformal cross ratios, where M is the total number of particles.
To summarize, we conjecture that the cylindrically cut amplitude is dual to a family of cylinder Wilson lines correlators with the generalized periodicity constraint (111). Different Wilson lines correlators with different periodicity constraints that are related by a dual conformal transformation are all different representations of the same cylindrically cut double trace amplitude. In this sense, dual conformal symmetry is gauged.
Discussion
In this paper we have extended the duality between planar scattering amplitudes in N = 4 SYM theory and polygonal Wilson loops to the first 1/N correction to the amplitude (8). This correction corresponds to the leading color double trace contribution to the amplitude. On the other side of the duality one finds the correlation function of two periodic null polygonal Wilson lines subject to a quantum periodicity constraint. Two ideas were necessary to establish this new duality. The first was the recognition of the momentum flow around the cylinder l as a well defined physical quantity that arises in the 't Hooft limit, see (1). The second was cutting open the cylinder by considering its universal cover for any given value of l. In particular, the second step allowed us to map the single trace planar duality into a new double trace one. Under this duality, the cylinder momentum l is mapped to the separation between the two periodic Wilson lines. Some of the applications and extensions of this duality will now be discussed.
One application is the extension of the loop integrand to the double trace amplitude discussed in section 7. The cylinder loop integrand is given by a sum over all periodic images of a rational function. Similarly to the planar loop integrand, the cylinder integrand satisfies a recursion relation. This relation, along with the planar loop integrand, uniquely determines the cylinder integrand. As opposed to the planar loop integrand, the cylinder loop integrand is not dual conformal invariant. However, the dual conformal invariance of the planar loop integrand has implications for the cylinder integrand. As explained in section 9, these implications can be thought of as gauging of the dual superconformal symmetry in the non-planar case. Similarly to the planar case, the cylinder Wilson lines correlator satisfies a dual conformal anomaly equation (119).

Figure 10.1: The cylinder Wilson lines correlator can be computed at finite coupling using the pentagon operator product expansion. In this approach the correlator is decomposed into a sequence of GKP flux-tubes that correspond to the null squares in the figure. The quantum periodicity constraint is imposed by identifying the flux-tube state Ψ_1 in two channels that are related by the periodicity. This identification corresponds to cutting and gluing the holographic dual string in AdS_5 × S^5 spacetime.
Another application of the double trace duality is the extension of the pentagon OPE finite coupling approach to the double trace amplitude [9,10,12]. This extension, which was the original motivation for this project, will be reported on in a future publication [26] and briefly summarized here. The most unusual feature of the calculation of the Wilson lines correlator in perturbation theory is the fact that the periodicity constraint has to be imposed not only on the geometry of the Wilson lines, but also at the quantum level on each Feynman diagram, see section 2. In the POPE approach imposing the same periodicity constraint becomes a natural and simple process. The POPE approach requires one to sum over all possible flux-tube excitations, which can be interpreted as inserting a complete basis of states of the planar flux-tube. The correlation function between two null polygonal Wilson lines can be decomposed into a sequence of flux-tubes, see figure 10.1. In this case imposing the quantum periodicity constraint amounts to identifying the flux-tube state in a given channel with its periodic image, resulting in a periodic sequence of OPE channels. This is in contrast to the POPE of a single trace null polygonal Wilson loop, for which the sequence of OPE channels starts with the vacuum at the bottom of the polygon and ends with the vacuum at the top.
Let us consider, for example, the case of the four point double trace amplitude, n = m = 2. The periodic Wilson lines correlator is decomposed into four POPE channels in a way that is consistent with the periodicity constraint, see figure 2.2.^17 The POPE decomposition is then wrapped on a cylinder by identifying the flux-tube state in the next, fifth, channel with the one in the first channel, |Ψ_5⟩ = |Ψ_1⟩ in figure 10.1. Each POPE channel comes with three independent conformal cross ratios, {τ_i, σ_i, φ_i}_{i=1}^{4}, that are repeated periodically [10]. However, due to the periodicity constraint, only five of these are independent.^18

Figure 10.2: An extension of the double trace duality to a duality between the 1/N² correction to the single trace partial amplitude (on the left) and a periodic correlation function of a Wilson loop-like object (on the right). In this case there are two cuts γ_1 and γ_2 with momenta l_1 and l_2, which correspond to the two cycles of the torus. They are non-self-intersecting cuts of the Feynman diagrams that have intersection number one with each other. The generalization of the cylinder cut (5) now takes the form A(l_1, l_2) = Σ_{g ∈ SL(2,Z)} A_{γ_1 γ_2}(g · (l_1, l_2)^T). On the Wilson loop side one finds a lattice of closed polygonal Wilson loops, which is the universal cover of the torus and has two periods, l_1 and l_2.
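As a small consistency check (our own observation, simply combining two statements already made in the text), this counting agrees with the general formula for anomaly-free ratios quoted in section 9:

% n = m = 2, so the total number of particles is M = n + m = 4.
% The four POPE channels carry 4 x 3 = 12 parameters {tau_i, sigma_i, phi_i};
% periodicity leaves five of them independent, matching
\[
  3M - 7 \;=\; 3\cdot 4 - 7 \;=\; 5 .
\]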
In addition to these applications, this work can be further extended in several ways, two of which are discussed below. First, while only the first 1/N correction to the planar amplitude in the 't Hooft large N expansion was considered in this paper, the same idea can be applied at any order in 1/N. Consider, for example, the next correction at order 1/N², where there are two types of contributions: the triple trace pants amplitude and the single trace torus amplitude. The duality for the triple trace is similar to the double trace one, so we focus on the single trace torus contribution. It is given by the sum of all large N 't Hooft Feynman diagrams with torus topology. Hence, it requires two cuts with momenta l^μ_1 and l^μ_2 that correspond to two independent cycles of the torus, see figure 10.2. Similarly to the double trace case, l_1 is only defined modulo a shift by l_2 and l_2 is only defined modulo a shift by l_1. In other words, the vectors (l_1, l_2) parametrize a torus in the dual coordinate space. They span a plane that can be parametrized by a complex coordinate. In this two dimensional coordinate system, the modular parameter of the T-dual torus is
\[
  \tau = l_2 / l_1 . \qquad (120)
\]
Performing a T-duality transformation gives a two-dimensional lattice of identical closed polygonal Wilson loops, see figure 10.2. There are two quantum periodicity constraints that correspond to the modular transformations of the T-dual torus. The spacetime integration over l_1 and l_2 can be rearranged into an integration over the modular parameter of the T-dual torus τ, an overall rescaling of the torus, and the spacetime rotations of the torus in four dimensions. In general, one may think of the cut amplitude as the integrand of the string loop expansion in 1/N. It would be interesting to find out if it satisfies any recursion relations.
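Schematically, the two torus relations used above read (collecting, in one place, the expressions given in the text and in the caption of figure 10.2):

% Modular parameter of the T-dual torus and the SL(2,Z)-summed torus cut.
\[
  \tau \;=\; \frac{l_2}{l_1} , \qquad
  A(l_1, l_2) \;=\; \sum_{g \,\in\, SL(2,\mathbb{Z})} A_{\gamma_1 \gamma_2}\!\big(\, g \cdot (l_1, l_2)^{T} \big) .
\]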
The ideas discussed in this paper can also be extended to the computation of form factors in the planar limit. This will be reported on in detail in a future publication [32] and is briefly outlined below. Similarly to double trace amplitudes, planar form factors live in momentum space and are evaluated by summing over diagrams with cylindrical topology. Therefore, the same ideas discussed in this paper can be applied to their computation. For example, we claim the form factor of the Lagrangian to be T-dual to the expectation value of a single periodic null polygonal Wilson line subject to a quantum periodicity constraint.^19 This duality would allow us to compute form factors at finite coupling using an extension of the integrability based POPE approach [32].
A Special self-symmetry for n = m = 2

The four-point double trace amplitude contains a special self-symmetry, resulting in a subtlety that warrants clarification. For any n and m the cylinder can be inverted, turning it inside-out. This has the effect of reversing the ordering of the particles in the two traces and the sign of l. Therefore, only the relative ordering of the particles in the trace is physical. On the Wilson lines side of the duality, reversing the orderings of the edges on the two periodic lines and the sign of l maps the original configuration to a different one that is related to it by the CPT symmetry of the theory.^20 For the special case of n = m = 2 there is no distinct ordering of either of the two traces. As a result, the flipping of the two traces' orderings and the CPT symmetry of the Wilson lines correlator become self-symmetries. This may lead to confusion regarding the definition of the cutting procedure that will now be clarified.
Consider a leading color Feynman diagram in the case of n = m = 2. The cut γ is defined such that it starts between particles 2 and 1 on one trace and ends between particles 4 and 3 on the other, where the orderings · · · → 1 → 2 → 1 → · · · and · · · → 3 → 4 → 3 → · · · are correlated through the cylinder. Alternatively, one may consider a cut γ′ that starts between particles 1 and 2 on one trace and ends between particles 3 and 4 on the other. By turning the cylinder inside-out, the cut γ at a given value of l is mapped to the cut γ′ at −l. Therefore, they are equivalent. The left Wilson line has a periodic sequence of cusps that includes x_1 and x_2. The cut γ corresponds to the configuration with l = ẋ_2 − x_2, while the cut γ′ at −l corresponds to the configuration with −l = ẋ_1 − x_1.
Despite the fact that these are two different geometrical configurations, they are related by CPT and therefore lead to the same result.
"Physics"
] |
Search for $C=+$ charmonium and XYZ states in $e^+e^-\to \gamma+ H$ at BESIII
Within the framework of nonrelativistic quantum chromodynamics, we study the production of $C=+$ charmonium states $H$ in $e^+e^-\to \gamma~+~H$ at BESIII with $H=\eta_c(nS)$ (n=1, 2, 3, and 4), $\chi_{cJ}(nP)$ (n=1, 2, and 3), and $^1D_2(nD)$ (n=1 and 2). The radiative and relativistic corrections are calculated to next-to-leading order for $S$ and $P$ wave states. We then argue that the search for $C=+$ $XYZ$ states such as $X(3872)$, $X(3940)$, $X(4160)$, and $X(4350)$ in $e^+e^-\to \gamma~+~H$ at BESIII may help clarify the nature of these states. BESIII can search for $XYZ$ states through the two-body process $e^+e^-\to \gamma H$, where $H$ decays to $J/\psi \pi^+\pi^-$, $J/\psi \phi$, or $D \bar D$. This result may be useful in identifying the nature of $C=+$ $XYZ$ states. For completeness, the production of $C=+$ charmonium in $e^+e^-\to \gamma +~H$ at B factories is also discussed.
We calculate the production of C = + charmonium in e+e− annihilation at BESIII to test the nature of C = + XYZ states. Our paper is organized as follows. The calculation framework is given in Sec. 2. The numerical results for the cross-sections of C = + charmonium are discussed in Sec. 3. A discussion of X(3872) and other C = + XYZ states is given in Sec. 4. The summary is given in Sec. 5.
The framework of the calculation
In the NRQCD factorization framework, we can express the amplitude A(e−(k_1) e+(k_2) → H_{cc̄}(^{2S+1}L_J)(2p_1) + γ) in the rest frame of H as in Eq. (2.1) [28,30,31], where ⟨3j; 3̄k | 1⟩ = δ_{jk}/√N_c and ⟨s_1; s_2 | S S_z⟩ are the color and spin Clebsch–Gordan coefficients for cc̄ pairs projecting out the appropriate bound states, and A(q) is the quark level scattering amplitude. In the rest frame of H, q = (0, q) and p_1 = (√(m_c² + q²), 0, 0, 0). Φ_{H}^{cc̄}(q) is the cc̄ component wave function of hadron H in momentum space. For v² = q²/m_c² ≪ 1 [50], we can expand Eq. (2.1) in v². Here A(q) = A(e−(k_1) e+(k_2) → c_{s_1,j}(p_1 + q) + c̄_{s_2,k}(p_1 − q) + γ(k)). We use the Fourier transform between momentum space and position space as in [50,94]. Here Z_{H}^{cc̄} is the probability of the cc̄ component in hadron H, R_{cc̄}(0) is the radial Schrödinger wave function at the origin, and R'_{cc̄}(0) is the derivative of the radial Schrödinger wave function at the origin; these can also be written as long-distance matrix elements (LDMEs), as discussed in Ref. [94]. We calculated the relativistic corrections for the S wave and P wave states and obtain two LDMEs for η_c, four LDMEs for χ_{cJ}, and one LDME for the ^1D_2 states. To simplify the discussion of the numerical results, we assume relations among these LDMEs such that there is only one LDME each for the S wave, P wave, and D wave states, respectively. More details can be found in Ref. [94]. The relativistic correction K factor contains, in particular, the phase-space correction −r v²/(1 − r), where r = 4m_c²/s. If we take r → 0, the K_{v²} factor is consistent with the K factor at large p_T in Ref. [94].
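To make the last statement concrete, here is a minimal sketch (our own rendering; only the phase-space piece quoted above is displayed, not the full K factor of Ref. [94]):

% r = 4 m_c^2 / s. The text quotes -r v^2 / (1 - r) as the relativistic correction of the phase space.
\[
  r = \frac{4 m_c^2}{s} , \qquad
  K_{v^2} \;\supset\; 1 \;-\; \frac{r\, v^2}{1 - r} \;+\; \dots
\]
% As r -> 0 (i.e. s >> 4 m_c^2) this piece reduces to 1, consistent with the large-p_T
% limit of Ref. [94] mentioned above.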
We can obtain a similar amplitude for the DD component in the molecule model. We can estimate the off-resonance amplitude of e + e − → H + γ from the DD component. The parton-level amplitudes may be compared with the hadron-level amplitudes: with the S wave l = 0 and P wave l = 1 for the binding energies of cc and DD are several hundreds of MeV and several MeV, respectively. If Z H cc ∼ Z H DD , we can consider the cc contributions only. In the numerical calculation, we consider the charm quark mass as half of the hadron mass consistent with the physics phase space. With a large charm quark mass, the wave functions at the origin are identified as the Cornell potential result in Ref. [96]. The sellected parameters are as follows: The wave functions at origin for higher states are estimated as In the numerical result, "σ LO " is the LO cross-section, "σ v 2 " is the cross-section including the LO and the relativistic correction, "σ αs " is the cross-section including the LO and the radiative correction, and "σ αs,v 2 " is the cross-section including the LO, the relativistic correction, and the radiative correction. In addition, "LO" is the LO cross-section, "RC" is the relativistic correction, "QCD" is the radiative correction, and "Total" is the cross-section including the LO, the relativistic correction, and the radiative correction.
For the LO, the cross-section is O(α 0 s v 0 ). As α s = 0.23 ± 0.03 and v 2 = 0.23 ± 0.03 are reasonable estimates, we can estimate that the uncertainty of the numerical result from α s and v 2 is < 10%.
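One back-of-the-envelope way to see a bound of this size (this is our own reading, not a derivation given in the paper, and it assumes the corrections are an O(20–30%) part of the result):

% The quoted +/-0.03 spread is a ~13% variation of each expansion parameter,
% and it only acts on the correction terms, which are of relative size alpha_s ~ v^2 ~ 0.23.
\[
  \frac{\Delta\alpha_s}{\alpha_s} = \frac{0.03}{0.23} \approx 13\% , \qquad
  \delta\sigma \;\sim\; 0.13 \times 0.23\,\sigma_{\rm LO} \;\approx\; 3\%\ \text{of } \sigma_{\rm LO} ,
\]
% and similarly for v^2, so the combined uncertainty stays below the quoted 10%.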
Pure C = + charmonium states
We estimate the cross-sections for pure C = + charmonium states H in e+e− → γ + H at BESIII with H = η_c(nS) (n = 1, 2, 3, and 4), χ_{cJ}(nP) (n = 1, 2, and 3), and ^1D_2(nD) (n = 1 and 2). The masses of the lower states can be found in Ref. [24], and the masses of the higher states are taken from Ref. [17]. The cross-section of e+e− → η_c + γ as a function of √s is shown in Fig. 1. The cross-sections of e+e− → η_{c2}(1D, 2D) + γ as a function of √s are shown in Fig. 2. The numerical results for nS with n = 1, 2, 3, 4 and nD with n = 1, 2 are listed in Table 2. We find that the radiative corrections are negative and the relativistic corrections are large for η_c(nS). The LO cross-sections for η_{c2}(1D, 2D) are very small at BESIII; hence, the higher order corrections are ignored for these states.

Table 2. The cross-sections of e+e− → H + γ for the η_c(nS) (n = 1, 2, 3, 4) and η_{c2}(nD) (n = 1, 2) charmonium states, in fb. The labels LO, RC, QCD and Total are defined near the end of Section 2. The masses of η_c(3S), η_c(4S), η_{c2}(1D), and η_{c2}(2D) are taken from Ref. [17]; the other masses can be found in Ref. [24].
The cross-sections of e+e− → χ_{cJ} + γ as a function of √s are shown in Fig. 3 and Fig. 4, and listed in Table 3, Table 4, and Table 5 for J = 0, 1, 2, respectively. We find that the QCD corrections are large but negative and the relativistic corrections are large and positive. Hence, many P wave states can be searched for at BESIII. NRQCD requires that the energy of the photon in the center-of-mass frame of e+e− be larger than Λ_QCD ∼ 300 MeV ∼ m_c v². Although this is a QED process, the prediction is not reliable, and is only a reference value, if this requirement is not satisfied. If we replace the photon with a gluon, the soft photon contributions correspond to the long-distance color octet contributions [31,50].

Figure 3. The cross-sections of e+e− → χ_{c0} + γ as a function of √s in fb. The cross-sections σ_LO, σ_{v²}, σ_{αs}, and σ_{αs,v²} are defined near the end of Section 2.

Table 3. The cross-sections of e+e− → χ_{c0}(nP) + γ with n = 1, 2, 3 in fb. The labels LO, RC, QCD and Total are defined near the end of Section 2. The χ_{c0}(2P) is considered as X(3915) (X(3945)/Y(3940)) [1,33]. The mass of χ_{c0}(3P) is taken from Ref. [17]; the other masses can be found in Ref. [24].

Figure 4. The cross-sections of e+e− → χ_{c1} + γ as a function of √s in fb. The cross-sections σ_LO, σ_{v²}, σ_{αs}, and σ_{αs,v²} are defined near the end of Section 2.
To clarify the nature of X(3872), we also give the numerical calculation of e+e− → γ + X(3872), with X(3872) → J/ψ π+π−. The cross-sections as a function of √s are shown in Fig. 6. Many 1−− states with M_H < 5 GeV are also observed in this energy range. We can predict the cross-sections from the continuum contributions at this point, and the result is listed in Table 6; the 1−− resonance contributions are ignored here. We emphasize that if we select √s = 4.009 GeV, the energy of the photon is E_γ = 134 MeV, smaller than Λ_QCD ∼ m_c v² ∼ 300 MeV. Hence, NRQCD cannot accurately predict the cross-sections with such a soft photon at √s = 4.009 GeV [50]. If √s = 4.160 GeV, the energy of the photon is E_γ = 270 MeV; although this is a QED process, the prediction is not reliable and is only a reference value [31]. We find that the NRQCD prediction of the continuum contributions can be compared with the BESIII data for the cross-sections of e+e− → γ X(3872) [46,47] in Eq. (1.1). When only the continuum production is considered, the resonance contributions can be estimated by taking into account only one resonance and ignoring the continuum and the other resonances.

Figure 6. The cross-sections of e+e− → χ_{c2} + γ as a function of √s in fb. The cross-sections σ_LO, σ_{v²}, σ_{αs}, and σ_{αs,v²} are defined near the end of Section 2. The uncertainty band of σ_{αs,v²} is from the uncertainty of k = 0.018 ± 0.04.
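The explicit estimate is not reproduced here; a standard narrow-resonance form that such an estimate typically takes (an assumption on our part, not the paper's own equation) is:

% Peak cross-section of a single narrow 1^{--} resonance "Res" decaying to gamma + X,
% ignoring interference with the continuum and with other resonances:
\[
  \sigma\big(e^+e^- \to \mathrm{Res} \to \gamma X\big)\Big|_{\sqrt{s}\,=\,M_{\rm Res}}
  \;\simeq\; \frac{12\pi}{M_{\rm Res}^2}\,
  \mathcal{B}(\mathrm{Res}\to e^+e^-)\,\mathcal{B}(\mathrm{Res}\to \gamma X) .
\]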
If we ignore the interference between one resonance and the continuum and other resonances, the photon energy dependence of Γ[Res → γX], and the DD̄ contributions to the decay Res → γX, we can estimate the resonance contributions. With X(3872) considered as a 2P state, the largest decay widths Γ[Res → γX] are for ψ(4040) and ψ(4160), which are considered as mixtures of ψ(3S) and ψ(2D) [97,98]. The Γ[Res → γX] for other states will be less than 1 keV [98].

X(3940) and X(4160)

In Ref. [43], η_c and χ_{c0} are observed recoiling against a J/ψ, but χ_{c1} and χ_{c2} are missed [43]. The theoretical predictions are consistent with the experimental data [61,69,99,100]. So there should be a large η_c(nS) component in X(3940) and a large χ_{c0}(nP) component in X(4160), respectively. The masses of η_c(3S) and χ_{c0}(3P) are predicted to be 3994 MeV and 4130 MeV, respectively [17]. Comparing with Table 2 and Table 3, we find that the cross-section for η_c(3S) is small, and even negative, at √s < 5 GeV, while that for χ_{c0}(3P) is large. The cross-sections as a function of √s are shown in Fig. 7. Here Z^X_{cc̄} ≤ 1 is the probability of the η_c(3S) or χ_{c0}(3P) component in X(3940) or X(4160), respectively. The BESIII collaboration can search for X(3940) and X(4160) in the process e+e− → γ + X(→ DD̄). The result may be useful in identifying the nature of X(3940) and X(4160).
X(4350)
X(4350) was found in γγ → H → φJ/ψ at B factories [45], and its J^PC is 0++ or 2++. So there should be a large χ_{c0}(nP) or χ_{c2}(nP) component in X(4350). In Ref. [17], the mass of χ_{c2}(3P) is 4208 MeV. Ignoring the finer details of the mass, we consider X(4350) as a χ_{c0}(nP) or χ_{c2}(nP) state and estimate the wave function at the origin accordingly. The cross-sections of e+e− → X(4350) + γ as a function of √s are shown in Fig. 8. Here Z^X_{cc̄} is the probability of the χ_{c0}(nP) or χ_{c2}(nP) component in X(4350). The cross-section for χ_{c2}(nP) is larger than that for χ_{c0}(nP) by a factor of about 6. The result may be useful in identifying the nature of X(4350).
Summary and discussion
While BESIII and Belle have collected a large amount of data, some of these final states may be searched for by experimentalists. We can estimate the possible event numbers at BESIII and Belle from the predicted cross-sections.

Figure 8. The cross-sections of e+e− → X(4350) + γ as a function of √s in fb. The cross-sections σ_LO, σ_{v²}, σ_{αs}, and σ_{αs,v²} are defined near the end of Section 2. Z^X_{cc̄} is the probability of the χ_{c0}(nP) or χ_{c2}(nP) component in X(4350).
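The explicit event-number formula for the estimate mentioned above is not reproduced in the text; the standard estimate one would use (an assumed form on our part, with L_int the integrated luminosity and Br the relevant branching fraction) is:

% Expected number of signal events for a given radiative production channel.
\[
  N_{\rm events} \;\simeq\; \sigma\!\left(e^+e^- \to \gamma + H\right)\times \mathcal{L}_{\rm int}
                  \times \mathrm{Br}\!\left(H \to \text{observed final state}\right) .
\]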
"Physics"
] |
The spatial phenotype of genotypically distinct meningiomas demonstrate potential implications of the embryology of the meninges
Meningiomas are the most common primary brain tumor and their incidence and prevalence is increasing. This review summarizes current evidence regarding the embryogenesis of the human meninges in the context of meningioma pathogenesis and anatomical distribution. Though not mutually exclusive, chromosomal instability and pathogenic variants affecting the long arm of chromosome 22 (22q) result in meningiomas in neural-crest cell-derived meninges, while variants affecting Hedgehog signaling, PI3K signaling, TRAF7, KLF4, and POLR2A result in meningiomas in the mesodermal-derived meninges of the midline and paramedian anterior, central, and ventral posterior skull base. Current evidence regarding the common pathways for genetic pathogenesis and the anatomical distribution of meningiomas is presented alongside existing understanding of the embryological origins for the meninges prior to proposing next steps for this work.
Introduction
Meningiomas are the most common primary brain tumor, representing 37% of all intra-cranial tumors, with an annual incidence of 4.5 per 100,000 people and a lifetime risk of around 1 in 280, and their incidence and prevalence are increasing [1][2][3]. Over a 14-year period (1999–2013), diagnoses and surgical resections of meningiomas increased by 52% and 58%, respectively [4]. Skull base meningiomas represent up to half of all meningiomas requiring surgery [5]. Due to their proximity to cranial nerves, brainstem, upper cervical spinal cord, and critical cerebral vasculature, they are challenging to resect completely (Fig. 1); consequently, recurrence rates can be as high as 29% [6,7]. Around a third of recurrences are of a higher tumor grade (World Health Organization (WHO) grade II and III) [8]. Patients with atypical (WHO grade II) and malignant (WHO grade III) meningioma suffer from high morbidity and mortality, with reported 10-year survival of 63% and 15% respectively, in spite of a relatively young mean age at diagnosis [1,4]. Aside from radiotherapy, which has a limited evidence base, there are scarce alternative therapies with proven efficacy [9].
The cell of origin of a meningioma is frequently reported to be the arachnoid cap cell, primarily due to cytological similarity [2]. It is, however, more probable that meningiomas develop from both dural border cells and arachnoid barrier cells, based on the shared expression of prostaglandin D2 synthase (PGDS) in these cellular layers and in meningiomas [10,11]. This may also explain the broad spectrum of histologically distinct variants in the classification of meningiomas, which remain classified solely according to histological appearances (Table 1) [2]. Robust epidemiological data on the incidence of meningioma by histological subtype have not been reported. Based on genomic data from a recently published cohort, meningothelial (41%) and transitional (17%) subtypes represent the most common variants [12]. A study of registry data reported that 80.6% of meningiomas were WHO grade I, 17.6% WHO grade II, and 1.7% WHO grade III [1].
Cranial meningiomas most commonly develop in the convexity and parasagittal regions and in the skull base in relation to the sphenoid (Table 2) [13]. Registry data from the USA reported that 79.8% of meningiomas were located in the cranial meninges compared to 4.2% located in spinal meninges (the remainder were unknown) [1]. In an alternative study of 25,694 surgically treated meningiomas in England, 92.3% were located in the cranial meninges compared to 7.7% located in the spinal meninges [4].

Fig. 1 The spatial phenotype of genotypically distinct meningioma and embryology of the meninges. A Anatomical depiction of meninges with brain and spinal cord removed displaying skull base, sagittal, and convexity regions including tentorium cerebelli on the right side. B Distribution of meningioma by pathogenic variant gene pathway. C Meningeal embryonic development by the tissue of origin.
This review presents existing evidence of the relationship between the histological subtype, WHO grade, and genomic alterations of meningioma and the location of their development. The embryology of the meninges is presented, summarizing the hypothesis that the cephalic mesoderm contributes to the meninges of the midline and paramedian ventral posterior and central skull base. A synthesis is summarized including evidence of meningioma pathogenesis through interruption of genes in key developmental pathways. Implications for the utility of therapies used in other tumor types, development of in vitro and in vivo modeling of meningioma genetic pathogenesis, and patient selection for trials, are discussed.
Meningioma and its location-genomics and histology
With the advent of next-generation sequencing, significant advances have been made in the last decade in identifying pathogenic variants in meningioma tumorigenesis. These have been categorized into major gene pathways which demonstrate striking mutual exclusivity across multiple studies [3,19]. These are summarized in Table 4.

Table footnotes (reproduced from Youngblood et al. [12] and Magill et al. [5]): a Criteria: brain invasion, mitotic count 4–19/10 HPF (high-power field), 3 of the following: spontaneous necrosis, loss of whorling or fascicular architecture, prominent nucleoli, high cellularity, and high nuclear to cytoplasmic ratio. Criteria: overtly malignant cytology, 20 or more mitoses/10 HPF. Abbreviations: MU, mutation unknown; HH, Hedgehog pathway genes.
22q deletion (NF2, SMARCB1)
Pathogenic variants in NF2 are associated with somatic loss of the second chromosome 22 allele [27,29,30], and are strongly, though not exclusively, associated with fibrous, psammomatous, transitional, atypical, and anaplastic meningioma [27,31–33]. NF2-mutated meningiomas constitute the majority of meningiomas located in the falx cerebri, tentorium cerebelli, and cerebral and cerebellar convexities [12,18,32]. In large-scale genomic studies of meningioma, higher grade (WHO grade II and III) meningiomas were in some studies exclusively related to pathogenic variants in NF2, associated with mutations in the TERT promoter, and deletion of 1p and CDKN2A [30,34]. SWI/SNF related, matrix associated, actin dependent regulator of chromatin SMARCB1, adjacent to NF2 on chromosome 22q, has been identified to contribute to meningioma tumorigenesis, with somatic missense mutations identified in exon 9 [24] and a germline variant in exon 2 [35]. In patients with NF2, those with truncating NF2 mutations towards the 5′ end of the gene had a higher prevalence and lifetime risk of meningioma [15]. A four-hit mechanism has been proposed, resulting in tumor suppressor gene inactivation and the development of familial multiple meningiomas [25]. It is of interest that few variants associated with SMARCB1-related schwannomatosis have been associated with meningioma risk, and overall the chance of developing meningioma in SMARCB1-related schwannomatosis without these specific missense variants is low [23,36].
Hedgehog signaling pathway (SMO, SUFU, PRKA-R1A)

SMO, a G-protein coupled receptor and key transmembrane protein member of the Hedgehog signaling pathway, was identified in several studies of non-NF2 meningiomas [18,27]. SUFU (suppressor of fused homolog) protein acts downstream of SMO and loss of SUFU function has been implicated in familial multiple meningioma [28]. Pathogenic variants in PRKA-R1A have additionally been identified in a small proportion of meningiomas [19]. PRKA-R1A is a critical component of type I protein kinase A (PKA), and pathogenic variants result in increased PKA activity and subsequently increased SMO cell surface accumulation, thus contributing to Hedgehog signaling [19,38]. Meningiomas harboring pathogenic variants in the Hedgehog signaling pathway are more likely to develop as a WHO grade I meningothelial subtype in the midline anterior fossa floor of the skull base [12,18,27]. Across multiple studies it has additionally been demonstrated that meningiomas with pathogenic variants in the Hedgehog signaling pathway are not associated with genomic instability [18,30,34].
Other pathogenic variants (KLF4, TRAF7, SMARCE1, BAP1)

KLF4 is a transcription factor known to induce pluripotency in adult fibroblast cultures [39]. The role of KLF4 is context-specific, with evidence of its function both as an oncogene and tumor suppressor in cancer [40]. A highly recurrent p.Lys409Gln mutation was identified in the first of three zinc fingers pivotal for DNA binding [41]. Meningiomas with pathogenic variants involving KLF4 were more commonly identified in the skull base away from the midline [12]. Secretory meningiomas have been defined based on combined pathogenic variants of KLF4 and TRAF7, mutually exclusive of the PI3K pathway or NF2 [41].
Hallmarks of secretory meningioma, hyaline periodic acid-Schiff-positive globules and peritumoral edema, are suspected mechanistically to be associated with KLF4 signaling as a result of regulation of cytokeratins 4 and 19 and activation of the bradykinin B2 receptor [41]. TRAF7 has been reported as the most common pathogenic variant identified in non-NF2 meningioma [18,19,22]. Pathogenic variants in TRAF7 have been reported to occur in combination with AKT1, PIK3CA, PIK3R1, and KLF4 [12,18,19,41]. Multiple somatic mutations were identified in an intronic hot spot of TRAF7 related to the first WD40 domain, which plays an important regulatory role in the NF-κB pathway [18,42]. Meningiomas with TRAF7 pathogenic variants alone or in combination commonly develop in the skull base, with isolated TRAF7-mutated meningioma associated with a microcystic histological subtype [12].
Recurrent pathogenic variants in polymerase (RNA) II (DNA directed) polypeptide A (POLR2A) are characterized by mutations localized to the dock domain involved in formation of the pre-initiation complex [19]. Pathogenic variants in POLR2A are most commonly mutually exclusive of other drivers, genomically stable, and associated with benign meningiomas in the midline skull base, in particular the region of the tuberculum sellae [19].
BAP1 is involved in the response to DNA damage as a tumor suppressor gene functioning as a ubiquitin carboxy-terminal hydrolase [21]. Germline mutations in BAP1 result in a cancer syndrome involving the development of BAP1-mutated melanocytic skin tumors and a high incidence of mesothelioma [43]. All tumors share a common histological rhabdoid morphology. Of the six tumors with BAP1 mutations and BAP1 loss on immunohistochemistry, four were located in the convexity regions and two in the skull base [21]. There is currently insufficient evidence to demonstrate a spatial phenotype of these tumors.
Pathogenic variants in SWI/SNF related, matrix associated, actin dependent regulator of chromatin SMARCE1 have been specifically associated with heritable clear cell meningiomas [26,44,45]. Initially suspected to present exclusively as multiple spinal meningiomas [26], cases of cranial meningiomas with pathogenic variants in SMARCE1 have subsequently been identified [44]. Clear cells are characterized by vacuolated cytoplasm and bland nuclei in a whorled, syncytial architecture, a likely consequence of SMARCE1 protein loss [46]. While the histology is diagnostic of the WHO grade II clear cell subtype, the few cases reported have included meningiomas of the spine, convexity, and skull base regions without a propensity to a specific location [45]. A simplified summary of the above pathogenic variant categories and their relationship with histological subtype and the location of meningioma tumorigenesis is shown in Fig. 1B.
Meningioma and its location-embryological origin of the meninges
The most comprehensive early study of the development of the meninges was undertaken by O'Rahilly and Muller in 1986 [47]. This study of cranial meninges involved the serial sectioning of 61 human embryos. At Carnegie stage (hereafter stage) 11 (24 postovulatory days), the pia mater is first identified at the caudal medulla while elsewhere a thick mesenchyme surrounds the developing brain. This thick mesenchyme is derived from a combination of neural crest cell mesoectoderm and neurilemmal cells, the prechordal plate, the unsegmented paraxial mesoderm, and the segmented paraxial (somitic) mesoderm. By stage 15 (33 postovulatory days) this mesenchyme surrounds most of the brain and is called the primary meninx. Subsequently, the primary meninx differentiates into the pachymeninges (later dura mater) and leptomeninges (later arachnoid and pia mater) [47,48]. Similar work was undertaken by Sensenig in characterizing the embryological origin of the spinal meninges, where paraxial somitic mesodermal and neural crest cells were concluded to contribute to the dura and arachnoid mater (mesodermal) and pia mater (neural crest), respectively [49].
The development of quail-chick chimeras resulted in the ability to track the migration of neural crest cells, demonstrating the contribution of the neural crest to the meninges of the forebrain while the meninges of the brainstem derive from cephalic mesoderm [50,51]. HNK1 expression was used to further demonstrate a contribution of the neural crest to the spinal meninges, in contrast with earlier studies demonstrating an exclusively mesodermal contribution [52,53].
The use of permanent molecular markers for neural crest cells and developmental stage-specific conditional knockout mice has resulted in significant progress in characterizing the embryonic origin of the cranial bones and meninges [10,[54][55][56][57]. X-gal staining and DiI labeling have been used in transgenic mice with in vivo permanent labeling of neural crest and mesoderm (Wnt1-Cre/R26R and Mesp1-Cre/R26R strains, respectively) [54,56]. A further PGDS transgenic Cre strain was developed based on PGDS representing a specific marker of arachnoidal cells. The PGDS-positive meningeal cell was identified as a common precursor to both the dural border cells and arachnoid border cells [11,58]. Collectively, these models have demonstrated that the meninges at the skull base derive from mesoderm, while the meninges covering the cerebral and cerebellar hemispheres derive from the neural crest [10,54,55,59].
Most recently, single-cell transcriptomic analyses of meningeal fibroblasts in the forebrain have identified fibroblast populations that are transcriptionally distinct between brain regions, particularly in the pia mater [60]. The authors state that anterior meninges arise from the neural crest, whereas posterior meninges originate from the mesoderm, and conclude that due to this mixed contribution there is regionalization of gene expression. Of particular relevance is the M3 subcluster, an arachnoid cell cluster: in situ validation in the embryonic day 14.5 (E14.5) mouse embryo showed patchy expression of Ptgds, the gene encoding PGDS, in the dorsal telencephalon, contrasting with high expression throughout the skull base surrounding the midbrain and hindbrain regions [60].
Notable similarities have been identified between the development of the meninges and the skull bones. Animals with mutations in Foxc1, an identified gene crucial in meningeal development, develop significant meningeal and calvarial defects [61,62]. Furthermore, intramembranous ossification of mesodermal bone requires interaction with neural crest-derived meninges [56,63]. In the transgenic mouse models, the frontal, ethmoid, presphenoid, squamous temporal, and interparietal bones were identified as neural crest derived. Conversely, the parietal, non-squamous temporal, and basioccipital bones derive from mesoderm [55,56,59]. The middle of the basisphenoid, corresponding to the sella turcica in the adult skull base, marks the demarcation between bone derived from neural crest and mesoderm, with the notable exception of the post-optic root of the presphenoid bone, which is derived from mesoderm [55,64].
The identified junction at which cranial bones derived from mesoderm meet those derived from neural crest is species-specific, and it is probable that this is also observed in the meninges [65]. Overall, the development of the adult human meninges is likely a complex interplay between neural crest and mesodermally derived cells resulting in differentiation into cytologically similar meninges in the adult (Fig. 1).
Meningioma genetic pathogenesis and embryological development-a synthesis
The above evidence demonstrates a clear, reproducible correlation between the location of a meningioma and the types of underlying pathogenic variants identified as driving tumorigenesis. There is evidence that the spatial contribution of the mesoderm and neural crest to the meninges correlates with the locations of commonly identified pathogenic variants in meningioma. In molecular profiling of 86 sequenced, spatially distinct meningiomas, expression of neural crest genes have been implicated in meningeal tumorigenesis [66], suggesting that meningioma tumorigenesis capitalizes on gene regulatory networks with subsequent misactivation of a developmental cell population [67,68]. However, there still remains a great deal that is unknown; in the largest study of 1970 meningiomas with targeted and/or whole exome sequencing, 667 (26.1%) did not have an identified mutation [12]. While mechanistic explanations have now been provided for the development of multiple histological subtypes, the underlying genomic characteristics of microcystic grade I and chordoid grade II meningioma also remain unknown [12,69]. This section reviews the underlying mechanisms of tumorigenesis in the meninges and its relationship to developmental pathways.
The neural crest, NF2-Hippo, and the SWI/SNF complex

The development of neural-crest-derived tissues is dependent on a process balancing proliferation, migration, and pluripotency that shares many characteristics with tumorigenesis [70]. The development of NF2 knockout mice resulted in greater understanding of the role of the gene during development. NF2-null mice die during embryonic development due to a failure to initiate gastrulation [71], while heterozygous models resulted in widespread tumor development [72], and conditional NF2 gene inactivation in leptomeningeal cells resulted in the development of meningiomas [73]. The use of a β-gal reporter under the control of an NF2 promoter in transgenic mice identified intense β-gal staining in the forebrain and telencephalon, extending caudally in the layer covered by pia mater, consistent with meningeal layers derived from the neural crest and the most common locations for the development of NF2-mutated meningioma [74].
Merlin is known to have multiple functions, but with respect to meningioma pathogenesis it is notable for its role as a tumor suppressor regulating proliferation and apoptosis through Hippo signaling [75], and in cellular motility, spreading, and attachment through mediation of the actin cytoskeleton [76]. Yap, a component of the Hippo pathway inhibited upstream by Merlin, has been implicated in neural crest cell fate and migration [77,78]. SMARCB1 loss in the early neural crest results in the development of human rhabdoid tumors, while induced loss at a later stage results in schwannomatosis [79]. Overall, the SWI/SNF complex is strongly linked to mammalian differentiation and is a critical regulator of pluripotency in embryonic stem cells, with SMARCB1 essential for neural induction but nonessential for mesodermal differentiation [67,80].
The relationship between Hedgehog signaling and meningioma pathogenesis is perhaps the most convincing. In a zebrafish model, Hedgehog signaling is required for cranial morphogenesis and chondrogenesis in the midline of the zebrafish skull [81]. Dysregulation results in craniofacial defects including holoprosencephaly and hypotelorism [82]. SMO-mutated meningiomas occur predominantly in the midline anterior skull base. Conditional activation of SMO in developing mouse embryos resulted in the development of meningothelial meningiomas in the ventral skull base, with similar location and histological appearances to SMO-mutated meningioma [83].
The role of the cranial neural crest and mesoderm in craniofacial development is not mutually exclusive, with interdependence identified in the patterning of facial tissues and chondrogenesis [56,63,84]. Manipulation of migratory and proliferative behaviors reveals crucial interactions between the two cell populations in normal embryogenesis.
Although a simplification, on review of the embryological origin of the meninges, there is reason to hypothesize that the relative contribution of cell types to different layers and regions of the meninges may contribute to an explanation for these spatial phenotypes.
Future directions
Existing in vitro models use highly malignant immortalized meningioma cells that do not represent the diverse genomic characteristics of meningiomas, particularly of the skull base [33]. Challenges exist due to the senescence of in vitro cell lines of benign meningiomas [2]. The different developmental progenitors of meninges of the skull base should be considered in any future meningioma models including pathogenic variants commonly found in this region. Pathogenic variants associated with specific histological subtypes will likely be incorporated into future classification guidance, and it is recommended that location is included as part of this. Where whole genome sequencing is not possible for every patient, targeted sequencing of pathogenic variants corresponding to the location of the resected meningioma will be crucial to facilitate participation in trials with targeted therapy and for prognostic information for patients. The development of a molecularly driven trial of patients given targeted therapy based on their AKT1, SMO, and NF2 pathogenic variant status is an ideal example of the future for clinical trials in patients with meningioma (NCT02523014) [85].
There are still conflicting accounts regarding the cell of origin of the meningioma, with arachnoid cap cells, arachnoid barrier cells, and dural border cells as candidates. There has been limited consideration of tumor heterogeneity, although limited unpublished evidence suggests it could be substantial [66]. The tumor microenvironment should be examined in the context of extended understanding of the heterogeneity of meningiomas, and of the gene regulatory networks underlying meningeal development, given its correlation with the genomic signatures of meningiomas. DNA methylation profiling has been used successfully to predict risk of recurrence and prognosis across multiple studies, and demonstrates the significance of epigenetic modifications in tumor pathogenesis, particularly given the relative lack of chromosomal instability in meningiomas of the skull base [30,86]. Capitalizing on spatial and temporal transcriptomics will facilitate greater understanding, in both animal models and tumor samples, of the remaining candidate pathogenic variants responsible for meningioma pathogenesis, and will identify developmental pathways that could be modulated, resulting in more effective targeted therapies for these tumors.
With the advent of immunotherapy, understanding of the tumor microenvironment has become pivotal in identifying potential immune-mediated mechanisms for treatment. The tumor microenvironment in meningioma is comparatively understudied, with no single-cell transcriptomic immune cell profiling published to date. T cell repertoire characterization of 28 meningiomas of all grades identified populations of CD4+ and CD8+ T cells, regulatory T cells, and T cells expressing PD-1 (programmed cell death protein 1) indicative of exhaustion [87]. A study of bulk transcriptomic data from 107 meningiomas identified immune processes to be the sole biological mechanism correlated with anatomical location after correcting for the WHO grade of the tumor [88]. Whereas oncolytic gamma-delta T cells dominate skull base meningiomas, mast cells and neutrophils were more prominent in convexity meningiomas [88]. Conversely, a study of tumor-associated macrophage infiltration in meningioma found no significant differences in macrophage number or in the ratio of M1 to M2 phenotype between skull base and convexity meningioma samples [89].
Given the increasing importance of location in the understanding of tumor biology and the immune microenvironment, a biologically and clinically meaningful and accurate classification is essential. Despite the numerous surgical classifications of meningioma subtypes [90][91][92][93], and classifications in studies incorporating genomic characteristics [12], there is currently no international consensus on the reporting of meningioma location, in particular on the classification of meningiomas of the skull base. Reaching a consensus will facilitate cross-study comparisons and drive standardization in the investigation and reporting of meningioma pathogenesis.
Conclusions
In summary, there is emerging evidence of a correlation between location, phenotype, and genotype in meningioma, and this correlation has its basis in the embryology of the meninges. A combination of temporal and spatial epigenetic and genetic analyses is required to better characterize the developing meninges, the arachnoid from which meningiomas are thought to derive, and meningiomas themselves, in order to advance our understanding of these tumors for further biomarker and therapy discovery and implementation.
Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Bicategories in Univalent Foundations
We develop bicategory theory in univalent foundations. Guided by the notion of univalence for (1-)categories studied by Ahrens, Kapulkin, and Shulman, we define and study univalent bicategories. To construct examples of univalent bicategories in a modular fashion, we develop displayed bicategories, an analog of displayed 1-categories introduced by Ahrens and Lumsdaine. We demonstrate the applicability of this notion, and prove that several bicategories of interest are univalent. Among these are the bicategory of univalent categories with families and the bicategory of pseudofunctors between univalent bicategories. Furthermore, we show that every bicategory with univalent hom-categories is weakly equivalent to a univalent bicategory. All of our work is formalized in Coq as part of the UniMath library of univalent mathematics.
Introduction
Category theory (by which we mean 1-category theory) is established as a convenient language to structure and discuss mathematical objects and morphisms between them. To axiomatize the fundamental objects of category theory itself, namely categories, functors, and natural transformations, the theory of 1-categories is not enough. Instead, category-like structures allowing for "morphisms between morphisms" were developed to account for the natural transformations. Among those structures are bicategories. Bicategory theory was originally developed by Bénabou [9] in set-theoretic foundations. The goal of our work is to develop bicategory theory in univalent foundations. Specifically, we give a notion of a univalent bicategory and show that some bicategories of interest are univalent, with examples from algebra and type theory. To this end, we generalize (univalent) displayed categories of Ahrens and Lumsdaine [3] to the bicategorical setting, and prove that the total bicategory generated by a displayed bicategory is univalent if the constituent pieces are. In addition, we show how to embed any bicategory with univalent hom-categories into a univalent bicategory via the Yoneda lemma, and we show how to use displayed machinery to construct biequivalences between total bicategories.
In the simplicial set model, univalent categories (just called "categories" in [2]) correspond to truncated complete Segal spaces, which in turn are equivalent to ordinary (set-theoretic) categories. In this respect, univalent categories are "the right" notion of categories in univalent foundations: they correspond exactly to the traditional set-theoretic notion of category. Similarly, the notion of univalent bicategory, studied in this paper, provides the correct notion of bicategory in univalent foundations-see, e.g., [6,Example 9.1]. In this work, we provide results for showing, modularly, that certain bicategories are univalent.
Throughout this article, we work in type theory with function extensionality. We explicitly mention any use of the univalence axiom. We use the notation standardized in [35]; a significantly shorter overview of the setting we work in is given in [2]. As a reference for 1-category theory in univalent foundations, we refer to [2], which follows a path suggested by Hofmann and Streicher [19,Section 5.5].
Motivation: bicategories for type theory

One of the motivations for this work stems from several particular (classes of) bicategories that come up in our work on the semantics of type theories and Higher Inductive Types (HITs).
Firstly, we are interested in the "categories with structure" that have been used in the model theory of type theories. The purpose of the various categorical structures is to model context extension and substitution. Prominent such notions are categories with families (see, e.g., [13,15]), categories with attributes (see, e.g., [31]), and categories with display maps (see, e.g., [34,30]). Each notion of "categorical structure" gives rise to a bicategory whose objects are categories equipped with such a structure. In the present work, we provide machinery that can be used to show, in a modular way, that these bicategories are univalent; we exemplify the machinery with categories with families.
Secondly, Dybjer and Moeneclaey define a notion of signature for 1-HITs and study algebras of those signatures [16]. These algebras are groupoids equipped with extra structure according to the signature. In the present work, we give general methods for constructing bicategories of such algebras and we demonstrate the usage of those methods by constructing the bicategory of monads internal to a given bicategory. We then construct a bicategory of Kleisli triples (an alternative presentation of monads), and show that it is equivalent to the bicategory of monads. We also show that the resulting bicategory of monads internal to the bicategory of univalent categories is biequivalent to the bicategory of Kleisli triples.
Technical contribution: displayed bicategories

In this work, we develop the notion of displayed bicategory in analogy to the 1-categorical notion of displayed category introduced in [3]. Intuitively, a displayed bicategory D over a bicategory B represents data and properties to be added to B to form a new bicategory: D gives rise to the total bicategory ∫D. Its cells are pairs (b, d) where d in D is a "displayed cell" over b in B. Univalence of ∫D can be shown from univalence of B and "displayed univalence" of D. The latter two conditions are easier to show, sometimes significantly easier.
Two features make the displayed point of view particularly useful: firstly, displayed structures can be iterated, making it possible to build bicategories of very complicated objects layerwise. Secondly, displayed "building blocks" can be provided, for which univalence is proved once and for all. These building blocks, e.g., cartesian product, can be used like LEGO ™ pieces to modularly build bicategories of large structures that are automatically accompanied by a proof of univalence.
In Section 2, we define the notion of biequivalence of bicategories, the "correct" notion of sameness for bicategories. We construct a biequivalence between 1-types and univalent groupoids. In Section 3, we present an induction principle for invertible 2-cells in a locally univalent bicategory and an induction principle for adjoint equivalences in a globally univalent bicategory. We put these principles to work in a number of examples. Section 4 is new. In there, we propose a definition of 2-category and of strict bicategory, and we show that these are equivalent. Section 5 is new. In there, we show that any bicategory embeds into a univalent one via the Yoneda embedding. This construction is reminiscent of the Rezk completion for categories. In Section 6, we give the definition of the displayed bicategory of monads internal to a given bicategory and the displayed bicategory of Kleisli triples. The bicategory of monads on a bicategory B is univalent whenever B is univalent, which is proved in Section 9.2. Section 8 is new. In there, we introduce the notion of displayed biequivalence. Using this notion, we show that the biequivalence between 1-types and univalent groupoids extends to a biequivalence between their pointed variants. We also construct a biequivalence between the bicategory of Kleisli triples and the bicategory of monads internal to the bicategory of univalent categories. Section 10 is new. Following a suggestion by an anonymous referee, we generalize the constructions in Sections 9.2 and 9.3 using displayed inserters.
Bicategories and Some Examples
Bicategories were introduced by Bénabou [9], encompassing monoidal categories, 2-categories (in particular, the 2-category of categories), and other examples. He (and later many other authors) defines bicategories in the style of "categories weakly enriched in categories". That is, the hom-objects B 1 (a, b) of a bicategory B are taken to be (1-)categories, and composition is given by a functor B 1 (a, b) × B 1 (b, c) → B 1 (a, c). This presentation of bicategories is concise and convenient for communication between mathematicians.
In this article, we use a different, more unfolded definition of bicategories, which is inspired by Bénabou [9, Section 1.3] and [29, Section 'Details']. On the one hand, it is more verbose than the definition via weak enrichment. On the other hand, it is better suited for our purposes; in particular, it is suitable for defining displayed bicategories, cf. Section 6.

Definition 2.1. A prebicategory B consists of
1. a type B_0 of objects (0-cells);
2. for all objects a, b : B_0, a type B_1(a, b) of 1-cells;
3. for all 1-cells f, g : B_1(a, b), a type B_2(f, g) of 2-cells;
4. an identity 1-cell id_1(a) : B_1(a, a) for every object a;
5. a composition f · g : B_1(a, c) for all 1-cells f : B_1(a, b) and g : B_1(b, c);
6. an identity 2-cell id_2(f) : B_2(f, f) for every 1-cell f;
7. a vertical composition θ • γ : B_2(f, h) for all 2-cells θ : B_2(f, g) and γ : B_2(g, h);
8. a left whiskering f θ : B_2(f · g, f · h) for all 1-cells f : B_1(a, b) and g, h : B_1(b, c) and 2-cells θ : B_2(g, h);
9. a right whiskering θ h : B_2(f · h, g · h) for all 1-cells f, g : B_1(a, b) and h : B_1(b, c) and 2-cells θ : B_2(f, g);
10. a left unitor λ(f) : B_2(id_1(a) · f, f) and its inverse λ(f)^{-1} : B_2(f, id_1(a) · f);
11. a right unitor ρ(f) : B_2(f · id_1(b), f) and its inverse ρ(f)^{-1} : B_2(f, f · id_1(b));
12. an associator α(f, g, h) : B_2(f · (g · h), (f · g) · h) and its inverse, for all 1-cells f : B_1(a, b), g : B_1(b, c), and h : B_1(c, d);
13. such that, for all suitable objects, 1-cells, and 2-cells, the expected laws hold: the unit and associativity laws for vertical composition, the compatibility of whiskering with identities and compositions, the naturality of the unitors and associators, the fact that the stated inverses are indeed inverses, and the triangle and pentagon coherence equations.

A bicategory is a prebicategory whose types of 2-cells B_2(f, g) are sets for all a, b : B_0 and f, g : B_1(a, b).
We write a → b for B 1 (a, b) and f ⇒ g for B 2 (f, g). Mitchell Riley formalized a definition of bicategories as "categories weakly enriched in categories" in UniMath, based on work by Peter LeFanu Lumsdaine. We do not reproduce this definition here; it is available as prebicategory. That definition is equivalent to our definition, in the following sense:
Proposition 2.2 (weq_bicat_prebicategory). The type of bicategories defined in Definition 2.1 is equivalent to the type of bicategories in terms of weak enrichment.
For this result, one needs to show that each B 1 (a, b) forms a category whose morphisms are 2-cells. Let us introduce this formally.
Definition 2.3 (hom). Let B be a bicategory and a, b : B_0 objects of B. Then we define the hom-category B_1(a, b) to be the category whose objects are 1-cells f : a → b and whose morphisms from f to g are 2-cells α : f ⇒ g of B. The identity morphisms are identity 2-cells and the composition is vertical composition of 2-cells.
Recall that our goal is to study univalence of bicategories, which is a property that relates equivalence and equality. For this reason, we study the two analogs of the 1-categorical notion of isomorphism. The corresponding notion for 2-cells is that of invertible 2-cells.
Definition 2.4. A 2-cell θ : f ⇒ g is invertible if there is a 2-cell γ : g ⇒ f such that θ • γ = id_2(f) and γ • θ = id_2(g). An invertible 2-cell consists of a 2-cell and a proof that it is invertible, and inv2cell(f, g) is the type of invertible 2-cells from f to g.
Since 2-cells form a set and inverses are unique, being an invertible 2-cell is a proposition. In addition, id 2 (f ) is invertible, and we write id 2 (f ) : inv2cell(f, f ) for this invertible 2-cell.
The bicategorical analog of isomorphisms for 1-cells is the notion of adjoint equivalence.
Definition 2.5 (adjoint_equivalence). An adjoint equivalence structure on a 1-cell f : a → b consists of a 1-cell g : b → a and invertible 2-cells η : id_1(a) ⇒ f · g and ε : g · f ⇒ id_1(b) such that the two triangle identities hold; a rendering of these identities is displayed below. An adjoint equivalence consists of a 1-cell f together with an adjoint equivalence structure on f. The type AdjEquiv(a, b) consists of all adjoint equivalences from a to b.
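For concreteness, the two commuting diagrams can be rendered as the following equations; the whiskering symbols ◁ and ▷ and the explicit use of unitors and the associator are notational choices made here, not necessarily those of the formalization (amsmath/amssymb syntax):

```latex
% Triangle identities for an adjoint equivalence (f, g, \eta, \varepsilon);
% 2-cells are composed vertically with \bullet, read left to right.
\[
  \lambda(f)^{-1} \bullet (\eta \rhd f) \bullet \alpha(f, g, f)^{-1}
    \bullet (f \lhd \varepsilon) \bullet \rho(f) \;=\; \mathrm{id}_2(f),
\]
\[
  \rho(g)^{-1} \bullet (g \lhd \eta) \bullet \alpha(g, f, g)
    \bullet (\varepsilon \rhd g) \bullet \lambda(g) \;=\; \mathrm{id}_2(g).
\]
```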
We call η and ε the unit and counit of the adjoint equivalence, and we call g the right adjoint. The prime example of an adjoint equivalence is the identity 1-cell id 1 (a) and we denote it by id 1 (a) : AdjEquiv(a, a). Sometimes, we write a b for AdjEquiv(a, b).
Before we start our study of univalence, we present some examples of bicategories and preliminary notions from bicategory theory.

Example 2.6 (fundamental_bigroupoid). Let X be a 2-type. Then we define the fundamental bigroupoid π(X) to be the bicategory whose 0-cells are inhabitants of X, 1-cells from x to y are paths x = y, and 2-cells from p to q are higher-order paths p = q. The operations, such as composition and whiskering, are defined using path induction. Every 1-cell is an adjoint equivalence and every 2-cell is invertible.

Example 2.7 (one_types). Let U be a universe. The objects of the bicategory 1-Type_U are 1-truncated types of the universe U, the 1-cells are functions between the underlying types, and the 2-cells are homotopies between functions. The 1-cells id_1(X) and f · g are defined as the identity and composition of functions, respectively. The 2-cell id_2(f) is refl, and the 2-cell p • q is the concatenation of paths. The unitors and associators are defined as identity paths. Every 2-cell is invertible, and adjoint equivalences from X to Y are the same as equivalences of types from X to Y.

Example 2.8 (bicat_of_univ_cats). We define the bicategory Cat of univalent categories as the bicategory whose 0-cells are univalent categories, 1-cells are functors, and 2-cells are natural transformations. The identity 1-cells are identity functors, and the composition and whiskering operations are composition of functors and whiskering of functors and transformations, respectively. Invertible 2-cells are natural isomorphisms, and adjoint equivalences are external adjoint equivalences of categories.

Example 2.9 (op1_bicat). Let B be a bicategory. Then we define B^op to be the bicategory whose objects are objects in B, 1-cells from x to y are 1-cells y → x in B, and 2-cells from f to g are 2-cells f ⇒ g in B.
Definition 2.10 (fullsubbicat). Let B be a bicategory and P : B_0 → hProp a predicate on the 0-cells of B. We define the full subbicategory of B with 0-cells satisfying P as the bicategory whose objects are pairs (a, p_a) : ∑_{(x : B_0)} P(x), 1-cells from (a, p_a) to (b, p_b) are 1-cells a → b in B, and 2-cells are as in B. In Example 6.5 we present a construction of this bicategory using displayed bicategories.

Example 2.11 (grpds). We define the bicategory Grpd as the full subbicategory of Cat in which every object is a groupoid.

For 1-categories the "correct" notion of equality is not isomorphism of categories, but equivalence of categories. Similarly, the right notion of equality for bicategories is biequivalence.
To talk about biequivalences we need to introduce pseudofunctors.
Definition 2.12. Let B and C be bicategories. A pseudofunctor F : B → C consists of a map F_0 : B_0 → C_0 on objects; for all a, b : B_0, a map F_1 : B_1(a, b) → C_1(F_0(a), F_0(b)) on 1-cells; for all f, g : B_1(a, b), a map F_2 : B_2(f, g) → C_2(F_1(f), F_1(g)) on 2-cells; an invertible 2-cell F_i(a) : id_1(F_0(a)) ⇒ F_1(id_1(a)) for each a : B_0; and, for each f : B_1(a, b) and g : B_1(b, c), an invertible 2-cell F_c(f, g) : F_1(f) · F_1(g) ⇒ F_1(f · g); and such that a number of coherence diagrams commute (where all free variables should be taken to be universally quantified). We write B → C for the type of pseudofunctors from B to C.
In the remainder of the paper, we sometimes write F(a) instead of F_0(a), and we use the same convention for F_1 and F_2. We call the 2-cells F_i and F_c the identitor and compositor, respectively. From each pseudofunctor F : B → C we can assemble functors B_1(a, b) → C_1(F(a), F(b)) between the hom-categories.
Definition 2.13 (pstrans). Let F, G : B → C be pseudofunctors. A pseudotransformation η from F to G consists of a 1-cell η(a) : F(a) → G(a) for each a : B and, for each f : B_1(a, b), an invertible 2-cell η(f) relating η(a) · G(f) and F(f) · η(b), subject to coherence laws. We write F ⇒ G for the type of pseudotransformations from F to G.

Definition 2.14 (modification). Let B and C be bicategories, F, G : B → C be pseudofunctors, and η, θ : F ⇒ G be pseudotransformations. A modification Γ from η to θ consists of 2-cells Γ(a) : η(a) ⇒ θ(a) for each a : B such that the evident square relating Γ(a), Γ(b), η(f), and θ(f) commutes for any a, b : B and f : B_1(a, b). We write η ⇛ θ for the type of modifications from η to θ.
To illustrate these three definitions, we look at some examples. Example 2.15. Let X and Y be 2-types. (ap_psfunctor) Each function f : X → Y induces a pseudofunctor f : π(X) → π(Y ), which sends objects x : X to f (x), 1-cells p : x = y to ap f p, and 2-cells h : p = q to ap (ap f ) h.
(ap_pstrans) Suppose we have f, g : X → Y and e : ∏_{(x : X)} f(x) = g(x). Then we obtain a pseudotransformation e : f ⇒ g whose component at x is e(x), and whose actions on 1-cells are given by path induction.
Note that we have a bicategory Pseudo(B, C) of pseudofunctors, pseudotransformations, and modifications. We construct this bicategory in Section 9.1 using displayed bicategories, and then we define invertible modifications to be invertible 2-cells in this bicategory. With all this in place, we can define biequivalences. Roughly, a biequivalence between bicategories B and C consists of pseudofunctors L : B → C and R : C → B together with pseudotransformations relating the composites L · R and R · L to the respective identity pseudofunctors, and invertible modifications witnessing that these pseudotransformations are suitably inverse to each other.
Usually, the notion of biequivalence is not sufficient, and instead biadjoint biequivalences are used. The latter notion has an extra requirement, namely that L and R form a pseudoadjunction [22]. Note that this is similar to the situation in types (see, e.g., [35,Section 4]) and categories (see, e.g., [26,Section IV.4]), where one also considers coherent notions of equivalence. However, we restrict our attention to biequivalences, because every biequivalence can be refined to a biadjoint biequivalence [18, Theorem 3.1].
Example 2.18 (biequiv_path_groupoid). We construct a biequivalence between 1-types and univalent groupoids. We only show how the involved pseudofunctors are defined.
(path_groupoid) Define a pseudofunctor PathGrpd : 1-Type → Grpd. It sends a 1-type X to the groupoid PathGrpd(X) whose objects are X and morphisms from x to y are paths x = y.
(objects_of_grpd) Define a pseudofunctor Ob : Grpd → 1-Type. It sends a groupoid G to the 1-type Ob(G) whose inhabitants are objects of G. Note that this is a 1-truncated type, because G is univalent.
Univalent Bicategories
Recall that a (1-)category C (called 'precategory' in [2]) is called univalent if, for every two objects a, b : C 0 , the function idtoiso a,b : (a = b) → Iso(a, b) mapping the constant path to the identity isomorphism is an equivalence. For bicategories, where we have one more layer of structure, univalence can be imposed both locally and globally.
Univalence for bicategories is defined as follows:
1. Let a, b : B_0 and f, g : B_1(a, b) be objects and 1-cells of B; by path induction we define a function idtoiso_{2,1} : f = g → inv2cell(f, g) which sends refl(f) to id_2(f). A bicategory B is locally univalent if, for every two objects a, b : B_0 and two 1-cells f, g : B_1(a, b), the function idtoiso_{2,1} is an equivalence.
2. Let a, b : B_0 be objects of B; using path induction we define idtoiso_{2,0} : a = b → AdjEquiv(a, b) sending refl(a) to id_1(a). A bicategory B is globally univalent if, for every two objects a, b : B_0, the canonical function idtoiso_{2,0} is an equivalence.
3. (is_univalent_2) We say that B is univalent if B is both locally and globally univalent.
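In summary, the two conditions require the following comparison maps, both defined by path induction, to be equivalences of types:

```latex
\[
  \mathsf{idtoiso}_{2,1} \;:\; (f = g) \;\longrightarrow\; \mathsf{inv2cell}(f, g)
  \qquad \text{for all } f, g : B_1(a, b),
\]
\[
  \mathsf{idtoiso}_{2,0} \;:\; (a = b) \;\longrightarrow\; \mathsf{AdjEquiv}(a, b)
  \qquad \text{for all } a, b : B_0 .
\]
```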
Local univalence can be characterized via the hom-categories. More precisely, it is equivalent to all hom-categories being univalent. If B and C are locally univalent and F is a pseudofunctor from B to C, then the identity and compositions are preserved up to a path instead of just an invertible 2-cell. However, this does not mean such pseudofunctors should be considered as strict, because these are not paths between elements of a set.
Univalent bicategories satisfy a variant of the elimination principle of path induction. More precisely, there are two such principles: a local one for invertible 2-cells and a global one for adjoint equivalences. We start with the induction principle associated to invertible 2-cells:
Proposition 3.4 (J_2_1). Let B be a locally univalent bicategory. Given a type family Y over invertible 2-cells together with a function y providing, for every 1-cell f, an element of Y at the identity 2-cell id_2(f), there is a function providing an element of Y at every invertible 2-cell θ : inv2cell(f, g). In particular, in order to prove a predicate over all invertible 2-cells in a given locally univalent bicategory, it suffices to prove it for all identity 2-cells.

Next, we present the induction principle associated to adjoint equivalences.

Proposition 3.5 (J_2_0). Let B be a globally univalent bicategory. Given a type family Y over adjoint equivalences together with a function y providing, for every object a, an element of Y at the identity adjoint equivalence id_1(a), there is a function providing an element of Y at every adjoint equivalence e : AdjEquiv(a, b). In particular, in order to prove a predicate over all adjoint equivalences in a given globally univalent bicategory, it suffices to prove it for all identity 1-cells.

Notice that in both induction principles the computation rules hold only up to propositional equality. Next, we present some usage examples of how to use Propositions 3.4 and 3.5. The constructions described in Example 3.6 and Proposition 3.9 work for arbitrary bicategories, not just globally/locally univalent ones. Nevertheless, these constructions are considerably simpler if the involved bicategories satisfy certain univalence assumptions.

Example 3.7. Let a, b : B, f, g : a → b, and θ : inv2cell(f, g). If f is an adjoint equivalence, then g is an adjoint equivalence as well. While this result generally holds in any bicategory B, it is particularly simple to prove when B is locally univalent. Applying Proposition 3.4, we are left to prove the statement with θ as the identity 2-cell. In that statement, f and g are definitionally equal, and hence the statement is trivially true.
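Spelling out the types involved in Propositions 3.4 and 3.5: the following is a sketch in type-theoretic notation and may differ from the formalized statements J_2_1 and J_2_0 in inessential details such as implicit arguments and universe annotations:

```latex
\[
  Y : \prod_{f, g : B_1(a,b)} \mathsf{inv2cell}(f,g) \to \mathcal{U},
  \qquad
  y : \prod_{f : B_1(a,b)} Y\,f\,f\,(\mathrm{id}_2(f)),
  \qquad
  J_{2,1}(y) : \prod_{f, g : B_1(a,b)}\ \prod_{\theta : \mathsf{inv2cell}(f,g)} Y\,f\,g\,\theta,
\]
\[
  Y : \prod_{a, b : B_0} \mathsf{AdjEquiv}(a,b) \to \mathcal{U},
  \qquad
  y : \prod_{a : B_0} Y\,a\,a\,(\mathrm{id}_1(a)),
  \qquad
  J_{2,0}(y) : \prod_{a, b : B_0}\ \prod_{e : \mathsf{AdjEquiv}(a,b)} Y\,a\,b\,e .
\]
```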
In fact, every pseudofunctor between arbitrary bicategories preserves adjoint equivalences.

Proof. Lengthy but straightforward.
If B is globally univalent and C is locally univalent, the above statement can be proved very easily. Proposition 3.9 (psfunctor_preserves_adjequiv). If B is globally univalent and C is locally univalent, then every pseudofunctor F : B → C preserves adjoint equivalences.
Proof. Applying Proposition 3.5 on f , we are left to prove that F 1 (id 1 (a)) is an adjoint equivalence. Since F is a pseudofunctor, there exists an invertible 2-cell F i (a) : id 1 (F 0 (a)) ⇒ F 1 (id 1 (a)). Therefore, by Example 3.7 and the fact that id 1 (F 0 (a)) is an adjoint equivalence, we conclude that F 1 (id 1 (a)) is an adjoint equivalence as well.
Another consequence is that biequivalences between univalent bicategories give rise to equivalences on the level of objects.

Proposition 3.10 (biequivalence_to_object_equivalence). Given univalent bicategories B and C and a biequivalence F from B to C, we get an equivalence of types between the objects of B and the objects of C.

While right adjoints are only unique up to isomorphism in general, they are unique up to identity if the bicategory is locally univalent:

Proposition 3.11. Let B be a locally univalent bicategory. Then having an adjoint equivalence structure on a 1-cell in B is a proposition.
As a consequence of this proposition, we obtain corresponding uniqueness principles for adjoint equivalences. Proposition 3.11 has another important use: to prove global univalence of a bicategory, we need to show that idtoiso_{2,0} is an equivalence. Often we do that by constructing a function in the other direction and showing these two are inverses. This requires comparing adjoint equivalences, which is done with the help of Proposition 3.11.
Local univalence is also relevant when one discusses bicategorical analogues of limits and colimits. To exemplify this, we look at biinitial objects, and we note that a similar discussion can be given for bifinal objects (bifinal_unique). We start by defining biinitiality structures. Definition 3.13 (is_biinitial). Let B be a bicategory and let a be an object in B. Then a biinitiality structure on a consists of an external adjoint equivalence structure on the canonical functor from B(a, b) to the unit category for each b : B. A biinitial object is an object a : B together with a biinitiality structure on a.
In general, adjoint equivalence structures are not necessarily unique, but they are if the bicategory is locally univalent. As such, having a biinitiality structure is not necessarily a proposition, and instead, it should be viewed as a structure on the objects. If the bicategory is locally univalent, however, then we can use Proposition 3.11 to show that biinitiality structures form a proposition.
Example 3.17. Note that both 1-Type and Cat have a biinitial object. (biinitial_1_types) The empty type is a biinitial object in 1-Type. (biinitial_cats) The empty category is a biinitial object in Cat.

Now let us prove that some examples from Section 2 are univalent.
Example 3.18. The following bicategories are univalent:
1. The bicategory of 1-types of a universe U is locally univalent; this is a consequence of function extensionality. If we assume the univalence axiom for U, then 1-types form a univalent bicategory. To show that, we factor idtoiso_{2,0} through the type of equivalences X ≃ Y. The left function is an equivalence by univalence, and the right function is an equivalence by the characterization of adjoint equivalences in Example 2.7. The fact that this diagram commutes follows from Proposition 3.11.
2. (FullSub.v) If B is univalent and P is a predicate on B, then so is the full subbicategory of B with those objects satisfying P.
3. It is more difficult to prove that the bicategory Cat of univalent categories is univalent, and we only give a brief sketch of this proof. Local univalence follows from the fact that the functor category [C, D] is univalent if D is. For global univalence, we use that the type of identities on categories is equivalent to the type of adjoint equivalences between categories [2, Theorem 6.17]. The proof proceeds by factoring idtoiso_{2,0} as a chain of equivalences starting from C = D. To our knowledge, a proof of global univalence was first computer-formalized by Rafaël Bocquet.
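Concretely, the factorizations used in Items 1 and 3 can be sketched as follows; the middle types are the type of equivalences of types and the type of adjoint equivalences of categories from [2, Theorem 6.17], respectively:

```latex
\[
  (X = Y) \;\xrightarrow{\ \simeq\ }\; (X \simeq Y) \;\xrightarrow{\ \simeq\ }\; \mathsf{AdjEquiv}(X, Y)
  \qquad \text{in } 1\text{-}\mathsf{Type},
\]
\[
  (C = D) \;\xrightarrow{\ \simeq\ }\; (C \simeq D) \;\xrightarrow{\ \simeq\ }\; \mathsf{AdjEquiv}(C, D)
  \qquad \text{in } \mathsf{Cat}.
\]
```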
In the previous examples, we proved univalence directly. However, in many complicated bicategories such proofs are not feasible. An example of such a bicategory is the bicategory Pseudo(B, C) of pseudofunctors from B to C, pseudotransformations, and modifications [25] (for a univalent bicategory C). Even in the 1-categorical case, proving the univalence of the category [C, D] of functors from C to D, and natural transformations between them, is tedious. In Section 7, we develop some machinery to prove the following theorem.

Theorem 3.20. If C is a univalent bicategory, then for any bicategory B the bicategory Pseudo(B, C) is univalent.
Bicategories and 2-Categories
In this section, we propose a definition of 2-category, and compare 2-categories to bicategories. We start by defining strict bicategories. Roughly, a bicategory is 1-strict if its types of 1-cells are sets and the unit and associativity laws for composition of 1-cells hold up to equality, in a way that is compatible with the unitors and associators; we refer to the formalization for the precise definition.
Proposition 4.2 (isaprop_is_strict_bicat). Being a 1-strict bicategory is a proposition.
Now let us look at an example of a 1-strict bicategory.
Example 4.3 (strict_bicat_of_strict_cats). Recall that a category is called strict if its objects form a set. Define Cat_S to be the bicategory whose objects are strict categories, 1-cells are functors, and 2-cells are natural transformations. Then Cat_S is a 1-strict bicategory.
The bicategory Cat of univalent categories is not 1-strict. This is because functors between two categories do not necessarily form a set. One can show that the additional coherence conditions are unique and automatic when the bicategory under consideration is locally strict or locally univalent. Furthermore, when the bicategory is locally univalent, the type of coherent strictness structures on it is contractible (unique_strictness_structure_is_univalent_2_1).
Next we look at 2-categories. These are defined as 1-categories with additional structure and properties.

Definition 4.6 (two_cat). A 2-category C consists of a category C_0; for each x, y : C_0 and f, g : C_1(x, y), a set C_2(f, g) of 2-cells; an identity 2-cell id_2(f) : C_2(f, f); a vertical composition of 2-cells; a left whiskering f θ : C_2(f · g, f · h) for 1-cells f : C_1(a, b) and g, h : C_1(b, c) and 2-cells θ : C_2(g, h); a right whiskering θ h : C_2(f · h, g · h) for 1-cells f, g : C_1(a, b) and h : C_1(b, c) and 2-cells θ : C_2(f, g); such that, for all suitable objects, 1-cells, and 2-cells, the expected unit, associativity, and whiskering laws hold. Here the function idto2cell : f = g → C_2(f, g) is defined by path induction, sending the identity path to the identity 2-cell. The paths idleft(f), idright(g), and assoc(f, g, h) are those given by the categorical axioms for C_0, and the laws relate the whiskerings to the 2-cells obtained from these paths via idto2cell.
We call 0-cells of a 2-category C the objects of C 0 , and 1-cells the morphisms of the category C 0 . In particular, the 1-cells between every two 0-cells of a 2-category always form a set.
Remark 4.7. The last few axioms of a 2-category could, equivalently, be stated using transport along a categorical equality axiom (e.g., along idleft(f )), instead of using idto2cell.
Theorem. The type of 1-strict bicategories is equivalent to that of 2-categories.

Proof. In one direction, suppose C is a 2-category. We associate to C the following bicategory: 1. 0-cells, 1-cells, and 2-cells are those of C; 2. composition and identity of 1-cells and 2-cells are those of C, respectively; 3. whiskering is given by the whiskering of C; 4. left and right unitors, and associators, are 2-cells induced by the corresponding equality axioms via idto2cell. The bicategorical axioms are then easily shown, using compatibility, in a suitable sense, of idto2cell with composition of paths (which corresponds to composition of 2-cells) and functions on paths (which corresponds to whiskering). The resulting bicategory is 1-strict.
In the other direction, suppose B is a 1-strict bicategory. We associate the following 2-category to B: 1. 0-cells, 1-cells, and 2-cells are those of B, respectively; 2. composition, identities, and whiskering are given by the corresponding operations of B; 3. the equality axioms for composition of 1-cells are proved using the strictness properties of B; 4. the remaining axioms are proved using suitable compatibility results about idto2cell.
The two functions are easily shown to be inverse to each other, thus forming an equivalence of types.
The Yoneda Embedding
In this section, we show that any locally univalent bicategory naturally embeds into a univalent one, via the Yoneda embedding. This construction is similar to the Rezk completion for categories [2,Theorem 8.5] and it makes use of the Yoneda lemma. We start by discussing representable pseudofunctors, pseudotransformations, and modifications. These are used to define the desired embedding.
(representable) Given an object a : B, we define the representable pseudofunctor Rep_0(a) from B^op (see Example 2.9) to Cat. It sends an object b to the category B_1(b, a); its actions on 1-cells and 2-cells are given by composition and whiskering in B. Assembling the representables yields the Yoneda embedding, a pseudofunctor y : B → Pseudo(B^op, Cat).

The bicategorical Yoneda lemma provides, for every pseudofunctor P : B^op → Cat and every object a : B, an adjoint equivalence between the hom-category from Rep_0(a) to P and the category P(a). This equivalence is given by functors F and G in the two directions together with, among other data, (yoneda_counit) a natural isomorphism from G · F to the identity. We only discuss the data of the involved functors. The functor F sends pseudotransformations τ to τ(a)(id_1(a)) and modifications m to m(a)(id_1(a)). In the other direction, G sends an object z : P(a) to the pseudotransformation whose component at b : B^op sends f : b → a to P(f)(z).

Now let us use the bicategorical Yoneda lemma to construct for each locally univalent bicategory a weakly equivalent univalent bicategory. We follow the construction of the Rezk completion by Ahrens, Kapulkin, and Shulman [2], and take the image of the Yoneda embedding to be the univalent completion.
First, we define weak equivalences of bicategories.
Definition 5.5. Let B and C be bicategories and let F : B → C be a pseudofunctor. We say (local_equivalence) F is a local equivalence if for each x, y : B the functor from B 1 (x, y) to C 1 (F (x), F (y)) induced by F is an adjoint equivalence.
(essentially_surjective) F is essentially surjective if for each y : C there merely exists an x : B and an adjoint equivalence from F (x) to y.
(weak_equivalence) F is a weak equivalence if F is both a local equivalence and essentially surjective.
The notion of weak equivalence has already been studied in classical mathematics where, using the axiom of choice, it was shown to be equivalent to the usual notion of equivalence [23,25]. However, these notions are generally not equivalent in a constructive setting, but we conjecture that they are for univalent bicategories.
Furthermore, the notion of weak equivalence can be weakened by requiring that the pseudofunctor only induces a weak equivalence of categories on the hom-categories. Such a weaker notion would be useful if one desires to find a univalent completion of arbitrary bicategories instead of just locally univalent ones. To do so, we anticipate a two-step process: first, a local completion, which embeds bicategories in locally univalent ones, followed by, second, the construction described in this section. More concretely, for any bicategory B we expect to be able to construct a chain of pseudofunctors embedding B first into a locally univalent bicategory and then into a univalent one.

Weak equivalences between univalent categories are actually equivalences [2, Lemma 6.8]. We conjecture that the same is possible for bicategories.
Conjecture 5.7. Every weak equivalence between univalent bicategories is a biequivalence.
From the Yoneda lemma we know that y is a local equivalence:
Corollary 5.8 (yoneda_mor_is_equivalence). The pseudofunctor y is a local equivalence.
However, y is not essentially surjective: the bicategory Pseudo(B^op, Cat) contains non-representable presheaves. To make y essentially surjective we restrict the bicategory of presheaves to the full image of the Yoneda embedding, i.e., to the full subbicategory RC(B) of Pseudo(B^op, Cat) on those pseudofunctors that are merely equivalent to a representable one. Since the codomain of y is univalent by Theorem 3.20, the image is univalent as well by Proposition 5.10. Note that the corestriction gives rise to a pseudofunctor y : B → RC(B). It is essentially surjective by construction. Furthermore, y is a local equivalence by Corollary 5.8, and local equivalences are preserved by corestriction. Hence, y is indeed a weak equivalence.
Note that Construction 5.13 raises universe levels: the bicategory RC(B) lives in a higher universe than B itself, for the same reasons as in the 1-categorical case; see [2,Remark 8.6].
Displayed Bicategories
Now let us study how to construct more complicated univalent bicategories. To that end, we introduce displayed bicategories, the bicategorical analog of the notion of displayed category developed in [3]. A displayed (1-)category D over a given (base) category C consists of a family of objects over objects in C and a family of morphisms over morphisms in C together with suitable displayed operations of composition and identity. A total category is then constructed, whose objects and morphisms are pairs of objects and morphisms from C and D, respectively. Properties of the total category, in particular univalence, can be shown from analogous, but simpler, conditions on C and D.
A prototypical example is the following displayed category over C :≡ Set: an object over a set X is a group structure on X, and a morphism over a function f : X → X′ from a group structure G (on X) to a group structure G′ (on X′) is a proof of the fact that f is compatible with G and G′. The total category is the category of groups, and its univalence follows from univalence of Set and a univalence property of the displayed data.
Just like in 1-category theory, many examples of bicategories are obtained by endowing previously considered bicategories with additional structure. An example is the bicategory of pointed 1-types in U. The objects in this bicategory are pairs of a 1-type A and an inhabitant a : A. The morphisms are pairs of a morphism f of 1-types and a path witnessing that f preserves the selected points. Similarly, the 2-cells are pairs of a homotopy p and a proof that this p commutes with the point preservation proofs. Thus, this bicategory is obtained from 1-Type U by endowing the cells on each level with additional structure.
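For instance, the displayed data over 1-Type underlying pointed 1-types can be summarized as follows; this is a sketch using notation introduced in Definition 6.1 below, and the orientation of the path in the 2-cell condition is a choice made here:

```latex
% Displayed objects, 1-cells, and 2-cells of "points" over 1-Type:
\[
  D_A := A \qquad \text{(a displayed object over a 1-type $A$ is a point $a : A$)},
\]
\[
  a \to_{f} b \;:=\; \big(f(a) = b\big)
  \qquad \text{for } f : A \to B,\ a : A,\ b : B,
\]
\[
  \bar f \Rightarrow_{p} \bar g \;:=\; \big(\bar f = p(a) \bullet \bar g\big)
  \qquad \text{for a homotopy } p : f \sim g \text{ and } \bar f : f(a) = b,\ \bar g : g(a) = b .
\]
```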
Of course, the structure should be added in such a way that we are guaranteed to obtain a bicategory at the end. Now let us give the formal definition of displayed bicategories.

Definition 6.1. Let B be a bicategory. A displayed bicategory D over B consists of, for each object a : B, a type D_a of displayed objects over a; for each 1-cell f : a → b and displayed objects ā : D_a and b̄ : D_b, a type of displayed 1-cells from ā to b̄ over f; for each 2-cell θ : f ⇒ g and displayed 1-cells f̄ over f and ḡ over g, a type of displayed 2-cells from f̄ to ḡ over θ; together with displayed versions of the identities, compositions, whiskerings, unitors, and associators of Definition 2.1. Note that we use the same notation for the displayed and the non-displayed operations. These operations are subject to laws, which are derived systematically from the non-displayed version. Just as for displayed 1-categories, the laws of displayed bicategories are heterogeneous, because they are transported along the analogous law in the base bicategory. For instance, the displayed left-unitary law for identity reads as id_2(f̄) • θ̄ =_e θ̄, where e is the corresponding identity of Item 13 in Definition 2.1.
The purpose of displayed bicategories is to give rise to a total bicategory together with a projection pseudofunctor. They are defined as follows.

Definition 6.2. The total bicategory ∫D of a displayed bicategory D over B has as objects pairs (a, ā) of an object a : B and a displayed object ā : D_a, as 1-cells from (a, ā) to (b, b̄) pairs (f, f̄) of a 1-cell f : a → b and a displayed 1-cell f̄ over f, and as 2-cells pairs (θ, θ̄) of a 2-cell θ and a displayed 2-cell θ̄ over θ. We also have a projection pseudofunctor π_D : ∫D → B.
Example 6.3. As mentioned before, the bicategory of pointed 1-types is the total bicategory of a displayed bicategory over 1-Type: the displayed objects over a 1-type A are points a : A, the displayed 1-cells over f : A → B from a to b are paths f(a) = b, and the displayed 2-cells over a homotopy p are the evident compatibility paths.

Example 6.4. Similarly, for pointed groupoids we define a displayed bicategory over Grpd. For a groupoid G, the objects over G are objects of G. For a functor F : G_1 → G_2 between groupoids G_1 and G_2, the displayed 1-cells over F from x to y are isomorphisms F(x) ≅ y. Given two functors F_1, F_2 : G_1 → G_2, a natural transformation n : F_1 ⇒ F_2, two points x : G_1 and y : G_2, and isomorphisms q_1 : F_1(x) ≅ y and q_2 : F_2(x) ≅ y, the displayed 2-cells over n are proofs that n(x) composed with q_2 equals q_1. The bicategory of pointed groupoids is the total bicategory of this displayed bicategory.

Now let us discuss two more examples of bicategories obtained from displayed bicategories: firstly, monads internal to an arbitrary bicategory and, secondly, Kleisli triples. In Construction 8.14, we construct a biequivalence between the bicategory of Kleisli triples and the bicategory of monads internal to Cat.
Definition 6.7 (monad). Let B be a bicategory. Then we define a displayed bicategory M(B) over B such that
The displayed objects over a : B are monad structures on a. A monad structure on a consists of a 1-cell m_a : a → a and 2-cells η_a : id_1(a) ⇒ m_a and µ_a : m_a · m_a ⇒ m_a satisfying the usual unit and associativity laws (spelled out below). The displayed 1-cells over f : a → b from (m_a, η_a, µ_a) to (m_b, η_b, µ_b) consist of invertible 2-cells n_f : m_a · f ⇒ f · m_b such that two diagrams commute, expressing compatibility of n_f with the units and with the multiplications. The displayed 2-cells over x : f ⇒ g from n_f to n_g are proofs that the evident diagram relating n_f, n_g, and x commutes. The total bicategory of M(B) is the bicategory of monads internal to B.
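For reference, the unit and associativity laws for a monad structure (m_a, η_a, µ_a) can be written as the following equations; the whiskering notation ◁ and ▷ and the explicit unitors and associator are choices made here for readability:

```latex
% Unit laws and associativity for a monad (m_a, \eta_a, \mu_a) on a 0-cell a.
\[
  \lambda(m_a)^{-1} \bullet (\eta_a \rhd m_a) \bullet \mu_a \;=\; \mathrm{id}_2(m_a),
  \qquad
  \rho(m_a)^{-1} \bullet (m_a \lhd \eta_a) \bullet \mu_a \;=\; \mathrm{id}_2(m_a),
\]
\[
  (m_a \lhd \mu_a) \bullet \mu_a
  \;=\;
  \alpha(m_a, m_a, m_a) \bullet (\mu_a \rhd m_a) \bullet \mu_a .
\]
```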
Next, we define a bicategory of Kleisli triples (also known as extension systems [28]) as the total bicategory of a displayed bicategory over Cat (Definition 6.8); a Kleisli triple on a category consists of an assignment on objects together with unit and extension operations, and we refer to the formalization for the precise definition.

We now turn to univalence of displayed bicategories. To state it, we need displayed analogues of invertible 2-cells and adjoint equivalences. A displayed invertible 2-cell over an invertible 2-cell θ is a displayed 2-cell θ̄ over θ together with a displayed 2-cell over θ^{-1} that is inverse to θ̄ in the displayed sense. A displayed adjoint equivalence structure on a displayed 1-cell f̄ over an adjoint equivalence f consists of a displayed 1-cell ḡ over the right adjoint g together with displayed invertible 2-cells over the unit and the counit. In addition, two laws reminiscent of those in Definition 2.5 need to be satisfied. A displayed adjoint equivalence over the adjoint equivalence f is a pair of a displayed 1-cell f̄ over f together with a displayed adjoint equivalence structure on f̄. The type of displayed adjoint equivalences from ā to b̄ over f is denoted by ā ≃_f b̄.
The displayed 1-cell id 1 (ā) is a displayed adjoint equivalence over id 1 (a).
Using these definitions, we define univalence of displayed bicategories similarly to univalence for ordinary bicategories. Again we separate it into a local and a global condition.

Definition. Let D be a displayed bicategory over B.
1. Let a, b : B and ā : D_a, b̄ : D_b. Let f, g : a → b, let p : f = g, and let f̄ and ḡ be displayed morphisms over f and g, respectively. Then we define a function disp_idtoiso_{2,1} from paths from f̄ to ḡ over p to displayed invertible 2-cells from f̄ to ḡ over idtoiso_{2,1}(p), sending refl to the identity displayed isomorphism. We say that D is locally univalent if the function disp_idtoiso_{2,1} is an equivalence for each p, f̄, and ḡ.
2. Let a, b : B, let p : a = b, and let ā : D_a and b̄ : D_b be displayed objects. Then we define a function disp_idtoiso_{2,0} from paths from ā to b̄ over p to displayed adjoint equivalences from ā to b̄ over idtoiso_{2,0}(p), sending refl to the identity displayed adjoint equivalence. We say that D is globally univalent if the function disp_idtoiso_{2,0} is an equivalence for each p, ā, and b̄.
3. (disp_univalent_2) We call D univalent if it is both locally and globally univalent.
The following result states that univalence of the total bicategory can be proved from univalence of the base and of the displayed part. This is the bicategorical version of the analogous result for 1-categories shown in [3, Theorem 7.4], which in turn generalizes the Structure Identity Principle [35, Theorem 9.8.2].

Theorem. Let D be a univalent displayed bicategory over a univalent bicategory B. Then the total bicategory ∫D is univalent.

Proof. The main idea behind the proof is to characterize invertible 2-cells in the total bicategory as pairs of an invertible 2-cell p in the base bicategory and a displayed invertible 2-cell over p. Concretely, for the local univalence of ∫D, we factor idtoiso_{2,1} as a composite of three equivalences w_1, w_2, and w_3. The function w_1 is just a characterization of paths in a sigma type. The function w_2 turns equalities into (displayed) invertible 2-cells, and it is an equivalence by local univalence of B and displayed local univalence of D. Finally, the function w_3 characterizes invertible 2-cells in the total bicategory.
The proof is similar in the case of global univalence. The most important step is the characterization of adjoint equivalences in the total bicategory. To check displayed univalence, it suffices to prove the condition in the case where p is reflexivity. This step, done by path induction, simplifies some proofs of displayed univalence.
Proposition 7.5 (fiberwise_local_univalent_is_univalent_2_1). Given a displayed bicategory D over B, D is univalent if suitable fiberwise versions of the comparison functions, taken over reflexivity paths in the base, are equivalences.

For the sigma construction, we give two conditions for the univalence of the total bicategory. If we have univalent displayed bicategories D_1 and D_2 over B and ∫D_1, respectively, then we can either show the univalence of ∫(Σ_{D_1} D_2) directly or we can show the displayed univalence of Σ_{D_1} D_2. Note that the second property could be necessary as an intermediate step for proving the univalence of a more complicated bicategory. For the proof of displayed univalence of Σ_{D_1} D_2, we need two assumptions on both displayed bicategories.

Proposition 7.9.
1. Under suitable assumptions on D_1 and D_2, the total bicategory of Σ_{D_1} D_2 is univalent (sigma_disp_univalent_2).
2. If D_1 and D_2 are locally propositional and groupoidal, then Σ_{D_1} D_2 is displayed univalent (sigma_disp_univalent_2_with_props).

We are not sure whether Item 2 of Proposition 7.9 is as strong as it can be; it might be possible to weaken the assumptions of D_1 and D_2 being locally propositional and groupoidal. However, this would make the proof significantly more complicated. In our examples these assumptions are satisfied, and thus the statement of Proposition 7.9, Item 2 is sufficient for our purposes.
Lastly, we give a condition for when a locally chaotic displayed bicategory is univalent: suppose that, for each a : B, the type D_a is a set, and that, for any ā : D_a, b̄ : D_b, and f : a → b, the type ā →_f b̄ of displayed 1-cells is a proposition. Then D is univalent if we have a function in the opposite direction of disp_idtoiso_{2,0}.
Displayed Constructions
The idea of building bicategories by layering displayed bicategories does not only allow for modular proofs of univalence, but also for the modular construction of maps between bicategories, e.g., pseudofunctors and biequivalences. In this section, we introduce the notions of displayed pseudofunctor and displayed biequivalence, and use them to build biequivalences. The first example we look at extends the biequivalence between 1-types and univalent groupoids in Example 2.18 to their pointed variants (Example 6.3 and Example 6.4).
Problem 8.1. To construct a biequivalence between pointed 1-types and pointed groupoids.
To construct the desired biequivalence, we first define displayed biequivalences over a given biequivalence in the base and we show that they give rise to a total biequivalence on the total bicategories. Since biequivalences are defined using pseudofunctors, pseudotransformations, and invertible modifications, we first need to define displayed analogues of these.

Definition 8.2. Let F : B → C be a pseudofunctor and let D_1 and D_2 be displayed bicategories over B and C, respectively. A displayed pseudofunctor F̄ from D_1 to D_2 over F consists of maps sending displayed objects, 1-cells, and 2-cells of D_1 to displayed objects, 1-cells, and 2-cells of D_2 lying over their images under F, together with a displayed identitor over F_i and a displayed compositor F̄_c(f̄, ḡ) : F̄_1(f̄) · F̄_1(ḡ) ⇒ F̄_1(f̄ · ḡ) over F_c(f, g). In addition, several laws similar to those in Definition 2.12 need to hold. They are just dependent variants of them and they hold over the corresponding non-dependent law. Since the required laws are obtained in the same way as in Definition 6.1, we do not show them here and instead refer the interested reader to the formalization. We denote the type of displayed pseudofunctors from D_1 to D_2 over F by D_1 →_F D_2.
Definition 8.3 (disp_pstrans). Suppose that we have bicategories B and C, pseudofunctors F, G : B → C, and a pseudotransformation η : F ⇒ G. Suppose furthermore that we have displayed bicategories D_1 and D_2 over B and C, respectively, and displayed pseudofunctors F̄ and Ḡ from D_1 to D_2 over F and G, respectively. Then a displayed pseudotransformation η̄ over η from F̄ to Ḡ is given by a displayed 1-cell η̄(x̄) : F̄(x̄) → Ḡ(x̄) over η(x) for all x : B and x̄ : D_1(x), and, for all 1-cells f : x → y, displayed objects x̄ : D_1(x) and ȳ : D_1(y), and displayed 1-cells f̄ over f, a displayed invertible 2-cell over η(f) relating η̄(x̄) · Ḡ(f̄) and F̄(f̄) · η̄(ȳ). Again laws similar to those in Definition 2.13 need to hold and again they are derived similarly to those in Definition 6.1. We denote the type of displayed pseudotransformations from F̄ to Ḡ over η by F̄ ⇒_η Ḡ.

Definition 8.4 (disp_modification). Suppose moreover that we have pseudotransformations η, θ : F ⇒ G, displayed pseudotransformations η̄ and θ̄ over them, and a modification m from η to θ. A displayed modification m̄ over m from η̄ to θ̄ consists of displayed 2-cells m̄(x̄) : η̄(x̄) ⇒ θ̄(x̄) over m(x) for each x : B and x̄ : D_1(x). In addition, the dependent version of the law in Definition 2.14 needs to hold. We denote the type of displayed modifications from η̄ to θ̄ over m by η̄ ⇛_m θ̄.
In order to formulate displayed biequivalence, we need an invertible version of Definition 8.4.
Definition 8.5 (disp_invmodification). A displayed invertible modification over an invertible modification m : η ⇛ θ is a displayed modification m̄ : η̄ ⇛_m θ̄ such that the displayed 2-cell m̄(x̄) is a displayed invertible 2-cell over m(x) for each x : B and x̄ : D_1(x).
Each of the discussed notions also has a total version. These are constructed similarly to how the total bicategory is constructed in Definition 6.2. To define displayed biequivalences, we need composition and identity of displayed pseudofunctors and pseudotransformations:

Definition 8.8. Suppose that B_1, B_2, and B_3 are bicategories and that D_1, D_2, and D_3 are displayed bicategories over B_1, B_2, and B_3, respectively. In addition, let F : B_1 → B_2 and G : B_2 → B_3 be pseudofunctors and suppose we have displayed pseudofunctors F̄ from D_1 to D_2 and Ḡ from D_2 to D_3 over F and G, respectively. Then we obtain a composite displayed pseudofunctor from D_1 to D_3 over the composite of F and G, and, analogously, an identity displayed pseudofunctor over the identity pseudofunctor; similar compositions and identities exist for displayed pseudotransformations.

A displayed biequivalence over a biequivalence in the base (Definition 8.9) then consists of displayed pseudofunctors in both directions over the given ones, displayed pseudotransformations over the unit and counit pseudotransformations, and displayed invertible modifications over the corresponding invertible modifications. Note that the total variant of each example in Definition 8.9 is its non-displayed analogue. Displayed biequivalences give rise to total biequivalences between their associated total bicategories (Problem 8.10). Note that to construct a displayed biequivalence, one must show several laws and construct multiple displayed invertible 2-cells. If the involved displayed bicategories are locally groupoidal (Definition 7.7) and locally propositional (Definition 7.8), then constructing a displayed biequivalence is simpler. This is because all the necessary laws follow immediately from local propositionality and all the involved displayed 2-cells are invertible. With all this in place, we finally show how to construct the desired biequivalence in Problem 8.1 with displayed machinery.

Construction 8.12 (for Problem 8.1; disp_biequiv_data_unit_counit_path_pgroupoid). By Problem 8.10 it suffices to construct a displayed biequivalence. We only show how to construct the required displayed pseudofunctor from points on 1-types to points on groupoids.
Given a 1-type X and a point x : X, we need to give an object of PathGrpd(X), for which we take x.
If we have 1-types X and Y with points x : X and y : Y, and a function f : X → Y with a path p_f : f(x) = y, then we need to construct an isomorphism between f(x) and y in PathGrpd(Y). It is given by p_f. Suppose we have 1-types X and Y with points x : X and y : Y. Furthermore, suppose we have a homotopy s : f ∼ g between functions f, g : X → Y, paths p_f : f(x) = y and p_g : g(x) = y, and a path h : p_f = s(x) • p_g. Then the required displayed 2-cell is the inverse of h. The compositor and the identitor are both the reflexivity path.
As a final example, we construct a biequivalence between the bicategory of monads internal to Cat and the bicategory of Kleisli triples.
Problem 8.13. To construct a biequivalence between monads and Kleisli triples.
Construction 8.14 (for Problem 8.13; Monad_biequiv_Ktriple). Note that the bicategory of monads and Kleisli triples are defined as the total bicategories of Definition 6.7 and Definition 6.8, respectively. Hence, by Problem 8.10, it is sufficient to construct a displayed biequivalence between the respective displayed bicategories. For the details on this construction, we refer the reader to the formalization.
Univalence of Complicated Bicategories

In this section, we demonstrate the power of displayed bicategories on a number of complicated examples. We show the univalence of the bicategory of pseudofunctors between univalent bicategories and of univalent categories with families. In addition, we give two constructions to define univalent bicategories of algebras.
Pseudofunctors
As promised, we use displayed bicategories to prove Theorem 3.20. For the remainder, fix bicategories B and C such that C is univalent. Recall that a pseudofunctor consists of an action on 0-cells, 1-cells, 2-cells, a family of 2-cells witnessing the preservation of composition and identity 1-cells, such that a number of laws are satisfied.
To construct the bicategory Pseudo(B, C) of pseudofunctors, we start with a base bicategory whose objects are functions from B 0 to C 0 . Then we add structure to the base bicategory in several layers. Each layer is given as a displayed bicategory over the total bicategory of the preceding one. The first layer consists of actions of the pseudofunctors on 1-cells. On its total bicategory, we define three displayed bicategories: one for the compositor, one for the identitor, and one for the action on 2-cells. We take the total bicategory of the product of these three displayed bicategories. Finally, we take the full subbicategory of that total bicategory on those objects that satisfy the axioms of a pseudofunctor. To show its univalence, we show the base and each layer are univalent. Now let us look at the formal definitions.
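Schematically, and writing ∫_X D here for the total bicategory of a displayed bicategory D over X (a notational shorthand used only in this sketch), the construction can be summarized as follows, with the names Base, Map1, Map2D, MapId, MapC, and RawPseudo as introduced below:

```latex
\[
  \mathsf{RawPseudo}(B, C)
  \;=\;
  {\textstyle\int}_{\,\mathsf{Map1}(B, C)}
  \Big( \mathsf{Map2D}(B, C) \times \mathsf{MapId}(B, C) \times \mathsf{MapC}(B, C) \Big),
\]
\[
  \mathsf{Pseudo}(B, C)
  \;=\;
  \text{the full subbicategory of } \mathsf{RawPseudo}(B, C)
  \text{ on the objects satisfying the pseudofunctor laws},
\]
where $\mathsf{Map1}(B, C)$ is itself the total bicategory of a displayed layer over $\mathsf{Base}(B, C)$.
```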
Definition 9.3 (identitor_disp_cat). We define a displayed bicategory MapId(B, C) over Map1(B, C) as follows: the displayed objects over (F_0, F_1) are identitors, i.e., invertible 2-cells F_i(a) : id_1(F_0(a)) ⇒ F_1(id_1(a)) for each a : B_0; the displayed morphisms over (η_0, η_1) from F_i to G_i are equalities expressing that (η_0, η_1) is compatible with the identitors F_i and G_i.
Definition 9.4 (compositor_disp_cat). We define a displayed bicategory MapC(B, C) over Map1(B, C) as follows: the displayed objects over (F_0, F_1) are compositors, i.e., invertible 2-cells F_c(f, g) : F_1(f) · F_1(g) ⇒ F_1(f · g); the displayed morphisms over (η_0, η_1) from F_c to G_c consist of equalities expressing compatibility with the compositors, for all X, Y, Z : B_0, f : B_1(X, Y), and g : B_1(Y, Z).
Definition 9.5 (map2cells_disp_cat). We define a displayed bicategory Map2D(B, C) over Map1(B, C) as follows: the displayed objects over (F_0, F_1) are actions on 2-cells, i.e., maps F_2 : B_2(f, g) → C_2(F_1(f), F_1(g)) for all f, g : B_1(a, b); the displayed morphisms over (η_0, η_1) from F_2 to G_2 consist of equalities expressing that η_1 is compatible with F_2 and G_2.

We denote the total bicategory of the product of Map2D(B, C), MapId(B, C), and MapC(B, C) by RawPseudo(B, C). Note that its objects are of the form ((F_0, F_1), (F_2, F_i, F_c)), its 1-cells are pseudotransformations, and its 2-cells are modifications. However, its objects are not yet pseudofunctors, because those also need to satisfy the laws in Definition 2.12.

Definition 9.6 (psfunctor_bicat). We define the bicategory Pseudo(B, C) as the full subbicategory of RawPseudo(B, C) whose objects satisfy the pseudofunctor laws, among them F_2(id_2(f)) = id_2(F_1(f)) and F_2(θ • γ) = F_2(θ) • F_2(γ), together with the remaining coherence laws of Definition 2.12. Note that the objects, 1-cells, and 2-cells of the resulting bicategory correspond to pseudofunctors (Definition 2.12), pseudotransformations (Definition 2.13), and modifications (Definition 2.14), respectively. Each displayed layer in this construction is univalent. In addition, if C is univalent, then so is Base(B, C). All in all, the results of this subsection can be summarized as follows.
Definition 9.7. Given bicategories B and C, we define a bicategory Pseudo(B, C) whose objects are pseudofunctors, 1-cells are pseudotransformations, and 2-cells are modifications.
Theorem 9.8. If C is univalent, then so is Pseudo(B, C).
Algebraic Examples
Next, we show how to use displayed bicategories to construct univalent bicategories of algebras for some signature. We consider signatures that specify operations, equations, and coherencies on those equations. More specifically, a signature consists of a pseudofunctor F (specifying the operations), a finite set of pairs of pseudotransformations l i and r i (specifying the equations), and a proposition P (specifying the coherencies) which can refer to F and the l i and r i . An algebra on such a signature consists of an object X, a 1-cell h : F (X) → X, 2-cells l i (X) ⇒ r i (X), such that the predicate P is satisfied by all this data.
To define the bicategory of algebras on a signature, we define three displayed bicategories which add the operations, equations, and coherencies. Since the equations can make use of the operations and the coherencies can refer to the equations, the displayed bicategories must be layered suitably. More specifically, starting with a bicategory B and a pseudofunctor F : B → B, we first define a displayed bicategory whose displayed objects are algebras on F . On top of its total bicategory, we give a displayed bicategory which adds 2-cells (modeling equations) to the structure. This gives rise to another total bicategory. Finally, we consider the full subbicategory of the latter total bicategory consisting of all objects satisfying the desired coherencies. The objects of the resulting total bicategory are models for the signature we started with.
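Schematically, again writing ∫_X D for the total bicategory of D over X, and with Alg_D and Add2Cell as defined below, the bicategory of algebras for a signature (F, (l_i, r_i)_i, P) is assembled in layers roughly as follows (the intermediate names A_1 and A_2 are introduced only for this sketch):

```latex
\[
  A_1 \;=\; {\textstyle\int}_{B}\, \mathsf{Alg}_D(F),
  \qquad
  A_2 \;=\; {\textstyle\int}_{A_1} \Big( \prod_{i} \mathsf{Add2Cell}\big(\mathsf{Alg}_D(F),\, l_i,\, r_i\big) \Big),
\]
\[
  \text{bicategory of algebras for the signature}
  \;=\;
  \text{the full subbicategory of } A_2 \text{ on the objects satisfying } P .
\]
```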
To illustrate our approach, we show how to define the bicategory of monads internal to a bicategory, as discussed in Definition 6.7. A monad internal to a bicategory B consists of, among others, a 0-cell X : B and 1-cell X → X as an "operation". Such structure is encapsulated by algebras for a pseudofunctor and pseudomorphisms between those algebras. Definition 9.9 (disp_alg_bicat). Let B be a bicategory and let F : B → B be a pseudofunctor. We define a displayed bicategory Alg D (F ).
The objects over a : B are 1-cells F_0(a) → a. The 1-cells over f : a → b from h_a : F_0(a) → a to h_b : F_0(b) → b are invertible 2-cells filling the square formed by F_1(f), h_a, h_b, and f. Given f, g : B_1(a, b), algebras h_a : F_0(a) → a and h_b : F_0(b) → b, and h_f and h_g over f and g respectively, a 2-cell over θ : f ⇒ g is a proof that the corresponding square of 2-cells commutes. We write Alg(F) for the total bicategory of Alg_D(F).

Returning to the example of monads, define M_1 to be Alg(id(B)). Objects of M_1 consist of an X : B_0 and a 1-cell X → X. To refine this further, we need to add 2-cells corresponding to the unit and the multiplication. We do this by defining two displayed bicategories over M_1, following the general construction described below.
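Concretely, one possible orientation of the displayed cells of Alg_D(F) is the following; the direction of the 2-cell h_f is a choice made for this sketch, and the formalization fixes a particular one:

```latex
% A displayed 1-cell over f : a -> b, and the condition defining a displayed 2-cell over theta.
\[
  h_f \;:\; F_1(f) \cdot h_b \;\Rightarrow\; h_a \cdot f
  \qquad \text{(invertible)},
\]
\[
  \text{a displayed 2-cell over } \theta : f \Rightarrow g \text{ is a proof that }\quad
  (F_2(\theta) \rhd h_b) \bullet h_g \;=\; h_f \bullet (h_a \lhd \theta).
\]
```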
In general, the construction for building algebras with 2-cells (which model "equations") looks as follows. Suppose that we have a displayed bicategory D over some B. Our goal is to define a displayed bicategory over D where the displayed 0-cells are certain 2-cells in B. The endpoints for these 2-cells are choices of 1-cells that are natural in objects, thus they are given by pseudotransformations l, r. The source of the endpoints is π D · S for some S : B → B, and the target is π D · id(B) where π D is the projection from D to B. The source pseudofunctor S : B → B determines the shape of the free variables that occur in the endpoints. Note that the target of the endpoint is π D · id(B), instead of π D , which is symmetric to the source π D · S. This allows us to construct such transformations by composing them.
Thus, pseudotransformations l, r : π_D · S → π_D · id(B) give 1-cells l(a, h_a), r(a, h_a) : B_1(S(a), a) for each (a, h_a) : ∫D. By allowing l and r to depend not only on the 0-cell a : B, but also on the displayed cell h_a : D(a), the endpoints can refer to the operations that were added as part of algebras in Definition 9.9. Formally, the construction that adds 2-cells from l(a) to r(a) is defined as the following displayed bicategory.

Definition 9.12 (add_cell_disp_cat). Suppose that D is a displayed bicategory over B. Let S : B → B be a pseudofunctor and let l, r : π_D · S → π_D · id(B) be pseudotransformations. We define a displayed bicategory Add2Cell(D, l, r) over ∫D as a locally chaotic displayed bicategory (cf. Item 4 in Definition 6.6).
The objects over (a, h a ) are 2-cells l(a, h a ) ⇒ r(a, h a ).
Returning to the example of monads, let us use Definition 9.12 to add the unit and the multiplication 2-cells to the structure of M_1. We can add the unit and the multiplication separately, as two displayed bicategories. For the unit, we pick the source pseudofunctor S(a) = a and the endpoints are defined as l(a, f : a → a) = id_1(a) and r(a, f : a → a) = f. For the multiplication, we use the same source pseudofunctor and the same right endpoint, but we pick the left endpoint to be l(a, f : a → a) = f · f.
Let M_2 be the product of these two displayed bicategories, displayed over M_1. We use the sigma construction (cf. Item 2 in Definition 6.6) to obtain from it a displayed bicategory over B. Its total bicategory is almost the bicategory of monads internal to B. To finalize the construction, we need to require the structures in M_2 to satisfy the monadic laws: for each object (f, η, µ) in M_2 the diagrams from Definition 6.7 need to commute. We construct the final bicategory M(B) (as in Definition 6.7) as the full subbicategory of M_2 with respect to these laws. Again, to guarantee that M(B) is displayed over B, we use the sigma construction. From Proposition 7.9, Theorems 9.10 and 9.13, and Example 7.6 we conclude:

Theorem 9.14 (bigmonad_is_univalent_2). If B is univalent, then so is M(B).
Categories with Families
Finally, we discuss the last example: the bicategory of (univalent) categories with families (CwFs) [15]. We follow the formulation by Fiore (described as "dependent context structures" in [17]) and Awodey [8,Section 1], which is already formalized in UniMath [4]: a CwF consists of a category C, two presheaves Ty and Tm on C, a morphism p : Tm → Ty, and a representation structure for p.
However, rather than defining CwFs in one step, we use a stratified construction yielding the sought bicategory as the total bicategory of iterated displayed layers. The base bicategory is Cat (cf. Example 2.8). The second layer of data consists of two presheaves, each described by the displayed bicategory PSh_D of presheaves over Cat (Definition 9.15). Denote by CwF_1 the total bicategory of the product of PSh_D with itself. An object in CwF_1 consists of a category C and two presheaves Ty, Tm : C^op → Set.
The next piece of data in a CwF is a natural transformation from Tm to Ty. Definition 9.16 (morphisms_of_presheaves_display). We define a displayed bicategory dCwF_2 on CwF_1 as the locally chaotic displayed bicategory (Item 4 in Definition 6.6) such that the objects over (C, (Ty, Tm)) are natural transformations from Tm to Ty. Suppose we have two objects (C, (Ty, Tm)) and (C′, (Ty′, Tm′)), two natural transformations p : Tm ⇒ Ty and p′ : Tm′ ⇒ Ty′, and suppose we have a 1-cell f from (C, (Ty, Tm)) to (C′, (Ty′, Tm′)). Note that f consists of a functor F : C → C′ and two natural transformations β_Ty : Ty ⇒ F^op • Ty′ and β_Tm : Tm ⇒ F^op • Tm′. Then a 1-cell over f is an equality of the two composites Tm ⇒ F^op • Ty′, namely p followed by β_Ty on the one hand, and β_Tm followed by the whiskering of p′ with F^op on the other. With dCwF_2 and the sigma construction from Item 2 in Definition 6.6, we get a displayed bicategory over Cat and we denote its total bicategory by CwF_2. As the last piece of data, we add the representation structure for the morphism p of presheaves.
Definition 9.17 (cwf_representation). Given a category C together with presheaves Ty, Tm : C^op → Set and a natural transformation p : Tm ⇒ Ty, we say that isCwF(C, Ty, Tm, p) holds if for each Γ : C and A : Ty(Γ), we have a representation of the fiber of p over A.
A detailed definition can be found in [4, Definition 3.1]. Since C is univalent, the type isCwF(C, Ty, Tm, p) is a proposition, and thus we define CwF as a full subbicategory of CwF_2.
Displayed (2-)Inserters
In this section, we study two general constructions; both the constructions and their names were suggested by an anonymous referee. We already saw instances of them, namely in Sections 9.2 and 9.3. The first one, called the displayed inserter, constructs a displayed bicategory whose total bicategory represents the inserter of two pseudofunctors. A similar construction, namely inserters of 1-cells in bicategories, has already been studied in the literature. Lambek defined subequalizers of functors [24], and these are inserters in the bicategory of categories. These inserters are also known as dialgebras, and they have been used to study the semantics of inductive-inductive types [7]. Power and Robinson defined PIE-limits (products, inserters, equifiers) in 2-categories and showed that they can be used to construct a general class of limits [32]. In addition, it has been shown that bicategories of algebras are closed under inserters [10,36]. Note that the terminology "displayed inserter" has also been used for the inserter of displayed functors [11], which is different from the construction we consider here.
The remaining properties are readily shown; we refer to the formalization for details.
Example 10.2. Definitions 9.9 and 9.15 are, almost, instances of Definition 10.1. Specifically, Definition 9.9 is obtained as the displayed inserter with F the identity pseudofunctor and with G the pseudofunctor F of Definition 9.9. However, this does not yet give the correct displayed 1-cells; we furthermore need to take the full sub-bicategory of invertible displayed 1-cells (cf. disp_sub1cell_bicat). Definition 9.15 is obtained by taking F to be the identity on Cat and G to be the pseudofunctor that is constantly Set^op.
Note that this is slightly different from Definition 9.15, corresponding to the two ways of representing a contravariant functor H : A → B in terms of a covariant one: as a functor H : A^op → B or as a functor H : A → B^op. While Definition 9.15 uses the former, this is not possible here: the domain and codomain of the inserter are specified by pseudofunctors, but the function (−)^op : Cat_0 → Cat_0 on categories does not extend to a pseudofunctor Cat → Cat that could take the place of the pseudofunctor F above. Instead, here we have to represent contravariant functors by taking the opposite of the target category, and thus consider the constant pseudofunctor returning the category Set^op. Next we look at displayed 2-inserters. These are quite similar to displayed inserters, but with one main difference: instead of 1-cells, 2-cells are added to the structure. More precisely, given two pseudotransformations α and β, the displayed 2-inserter gives a displayed bicategory of maps from α(x) to β(x) for every x.
Example 10.5. The displayed bicategory of Definition 9.12 is immediately a displayed 2-inserter.
The displayed bicategory of Definition 9.16 can be obtained as the following displayed 2-inserter: consider the pseudofunctors F, G : CwF_1 → Cat given by F(C, Ty, Tm) :≡ C and G(_) :≡ Set^op. As pseudotransformations, we take the projections α :≡ Tm and β :≡ Ty, respectively. As in Example 10.2, we have to put the oppositization into the target pseudofunctor G, that is, take presheaves on C to be functors C → Set^op instead of C^op → Set.
Conclusions and Open Questions
In the present work, we studied univalent bicategories. Showing that a bicategory is univalent can be challenging; to simplify this task, we introduced displayed bicategories, which provide a way to modularly reason about involved bicategorical constructions. We then demonstrated the usefulness of displayed bicategories by using them to show that certain complicated bicategories are univalent. The same approach is useful for many other basic notions and constructions such as pseudofunctors, pseudotransformations, modifications, and biequivalences: the displayed machinery allows one to stratify their presentation and thus eases reasoning about such objects. Veltri and Van der Weide [36] used the techniques described in the present paper to construct univalent bicategories of algebras for a class of signatures. In addition, they defined displayed biadjunctions, and those were used to construct biadjunctions between bicategories of algebras.
For the practical mechanization of mathematics in a computer proof assistant, two issues may arise when building an elaborate bicategory as the total bicategory of iterated displayed bicategories. Firstly, the structures may not be parenthesized as desired. This problem can be avoided, or at least alleviated, through a suitable use of the sigma construction of displayed bicategories (Item 2 in Definition 6.6). Secondly, "meaningless" terms of unit type may occur in the cells of this bicategory. We are not aware of a way of avoiding these occurrences while still using displayed bicategories. However, both issues can be addressed through the definition of a suitable "interface" to the structures, in the form of "builder" and projection functions, which build, or project a component out of, an instance of the structure. The interface hides the implementation details of the structure, and thus provides a welcome separation of concerns between mathematical and foundational aspects.
We have only started, in the present work, the development of bicategory theory in univalent foundations and its formalization. There are some important questions that we have left open, such as proving the universal property of the Rezk completion. Furthermore, the precise relationship to the bicategories studied in [6, Example 9.1] should be established; those bicategories are defined, in particular, using relations instead of functions. It seems reasonable to hope for our univalent bicategories to coincide (in the sense of an equivalence of types) with the univalent bicategories of [6, Example 9.1]; a construction of such an equivalence is outside the scope of this work. We also anticipate that the displayed machinery can be usefully employed for extending the comparison of different categorical structures for type theories started by Ahrens, Lumsdaine, and Voevodsky [4] to the bicategorical setting. | 15,279.6 | 2019-03-04T00:00:00.000 | [
"Mathematics"
] |
Preparation and biological properties of a novel composite scaffold of nano-hydroxyapatite/chitosan/carboxymethyl cellulose for bone tissue engineering
In this study, we report the physico-chemical and biological properties of a novel biodegradable composite scaffold made of nano-hydroxyapatite and the naturally derived polymers chitosan and carboxymethyl cellulose, namely n-HA/CS/CMC, which was prepared by a freeze-drying method. The physico-chemical properties of the n-HA/CS/CMC scaffold were tested by infrared absorption spectroscopy (IR), transmission electron microscopy (TEM), scanning electron microscopy (SEM), a universal material testing machine, and a phosphate buffer solution (PBS) soaking experiment. In addition, the biological properties were evaluated by MG63 cell and mesenchymal stem cell (MSC) culture experiments in vitro and a short-period implantation study in vivo. The results show that the composite scaffold is mainly formed through ionic cross-linking of the two polyions CS and CMC, and that n-HA is incorporated into the CS-CMC polyelectrolyte matrix without agglomeration, which endows the scaffold with good physico-chemical properties such as a highly interconnected porous structure, high compressive strength, good structural stability, and degradability. More importantly, the results of cell attachment and proliferation on the scaffold indicate that the scaffold is non-toxic and has good cell biocompatibility, and the results of the in vivo implantation experiment further confirm that the scaffold has good tissue biocompatibility. All the above results suggest that the novel degradable n-HA/CS/CMC composite scaffold has great potential to be used as a bone tissue engineering material.
Background
Nowadays, bone tissue engineering, which uses a porous scaffold material to induce the formation of bone from the surrounding tissue or to act as a template for cell growth during bone tissue regeneration, has some distinct advantages over autografting and allografting [1], and it is a rapidly growing alternative approach to healing damaged bone tissue. However, in the realm of bone tissue engineering, it remains a great challenge to develop a desirable porous scaffold material for successful bone regeneration. Obviously, an ideal scaffold for bone tissue engineering should have a highly interconnected porous structure, good mechanical properties, and biocompatibility [2,3]. In addition, to better meet the requirements for bone regeneration, biomimetic matrices are usually adopted, which may provide a suitable microenvironment to promote osteoblast proliferation and osteogenesis [4]. As we know, the extracellular matrices (ECMs) of hard tissue are composed of organic and inorganic phases, the inorganic phase consisting primarily of nano-hydroxyapatite (n-HA) crystals, and the organic phase consisting mainly of type I collagen and a small amount of ground substance including glycosaminoglycans (GAGs), proteoglycans, and glycoproteins [5]. Accordingly, n-HA/polymer composite scaffolds designed along bionic principles have been reported in succession, such as n-HA/collagen [6], n-HA/gelatin [7], n-HA/polyamide [8,9], n-HA/poly(L-lactic acid) [10], and n-HA/poly(lactide-co-glycolide) [11]. Among the candidate polymers, natural biodegradable polymers are particularly promising, because they avoid a second surgical operation after new bone tissue has regenerated, and they possess better biocompatibility and lower cost than synthetic polymers. Chitosan (CS), a deacetylated derivative of chitin, is a biodegradable and biocompatible cationic natural polymer, and CS-based materials can accelerate bone formation because of their structural similarity to GAGs [12][13][14][15]. Consequently, n-HA/CS composite scaffolds have been widely studied for bone tissue engineering [16][17][18][19][20]. However, the interaction between the n-HA and CS phases is poor, so that n-HA/CS composite scaffolds have poor physico-chemical properties. Fortunately, carboxymethyl cellulose (CMC) is a natural biodegradable and biocompatible anionic polymer obtained from natural cellulose by chemical modification; it is very similar to CS in structure, and thus there is a strong ionic cross-linking action between CMC and CS [21,22]. Based on the above considerations, in this paper CMC was first introduced into the n-HA/CS system, where n-HA was expected to be incorporated into the CS/CMC polyelectrolyte so as to have a strong interaction between n-HA and the polymers [23]; accordingly, a novel n-HA/CS/CMC composite scaffold was fabricated. In addition, to investigate the potential of this novel biodegradable n-HA/CS/CMC composite scaffold as a bone tissue engineering material, its physico-chemical properties, including the porous morphology, compressive strength, structural stability and degradation in vitro, and the microstructure of the composite, were investigated. Its biological properties, such as cell biocompatibility in vitro and tissue biocompatibility in vivo after a short-period implantation, were also preliminarily studied.
The main purpose of the study is to make full use of the most abundant naturally derived polymers, cellulose and chitin, to develop a novel biodegradable composite scaffold for bone tissue engineering according to the bionic principle.
Preparation of n-HA/CS/CMC composite scaffold
The n-HA/CS/CMC scaffold was fabricated by the following procedure. Firstly, 3 g CS and 3 g CMC powders were mixed evenly and added into 150 ml of n-HA slurry (containing 4 g of n-HA powder) with constant stirring for 2 hours under ambient conditions to obtain a homogeneous mixture. Secondly, 3 ml of glacial acetic acid was added, and stirring was continued until a solidified mixture was obtained. The solidified mixture was then kept in a freezer at -30°C overnight to freeze the solvent. Finally, the sample was lyophilized in a freeze dryer until dry, and a scaffold was obtained. To remove residual acetic acid, the scaffold was immersed in 0.2 mol/L NaOH solution for several hours, then washed in deionized water and dried in a vacuum oven at 60°C.
Fourier transform-infrared (FT-IR) spectroscopic studies
Infrared spectroscopy was used to characterize the intermolecular interactions between the components of the system. The IR spectra of n-HA, CS, CMC, and the n-HA/CS/CMC composite scaffold were recorded with an FT-IR spectrophotometer (Perkin Elmer Co., USA) using a KBr die kit. The spectra were collected over the range of 4000-400 cm⁻¹.
Transmission electron microscope (TEM) observation
The microstructure of the n-HA/CS/CMC composite scaffold was examined with a transmission electron microscope (TEM) (JME-100CX, Seike Instruments) operated at 200 kV. TEM samples were prepared by an ultrasonic dispersion method using deionized water.
Scanning electron microscope (SEM) observation
The surface morphology of the n-HA/CS/CMC composite scaffold was examined with scanning electron microscopy (SEM). The scaffold was gold-coated and observed with SEM (JSM-5900LV, Japan) at an accelerating voltage of 20 kV.
Mechanical property test
The compressive strength of the n-HA/CS/CMC scaffold was tested using a universal material testing instrument (AG-10AT, DaoJin, Japan) following the guideline of ASTM standard D 695-96. Three parallel samples of the scaffold were cut into cylindrical blocks of Φ6 × 12 mm and compressed at a constant crosshead speed of 5 mm min⁻¹ until a 50% reduction in specimen height, and the mean value of the compressive strength is reported.
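As a side note (not part of the authors' protocol), the conversion from the measured peak load to the reported compressive strength is simply load divided by cross-sectional area; the sketch below illustrates this for the Φ6 mm specimens, using a hypothetical peak load.

```python
# Illustrative sketch: converting a peak load measured on a cylindrical specimen
# (diameter 6 mm, as used in this study) into a compressive strength in MPa.
# The peak load value below is hypothetical, chosen only to reproduce ~3.5 MPa.
import math

def compressive_strength_mpa(peak_load_n: float, diameter_mm: float = 6.0) -> float:
    """Compressive strength = peak load / cross-sectional area (N/mm^2 == MPa)."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2   # ~28.3 mm^2 for a 6 mm cylinder
    return peak_load_n / area_mm2

print(round(compressive_strength_mpa(99.0), 2))     # -> 3.5
```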
Porosity measurement
The porosity was determined by the liquid displacement method [25]. Briefly, the specimen was immersed in dehydrated alcohol for 48 h until it was saturated, and the porosity of the sample was calculated according to the formula P = (W2 - W1)/(ρV1), where W1 and W2 represent the weight of the sample before and after immersion, respectively, V1 is the volume before immersion, and ρ is the density of dehydrated alcohol. Three parallel samples of the scaffold were tested.
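To make the porosity formula concrete, the following minimal sketch (not from the paper; the weights and volume are hypothetical, and the ethanol density is an assumed handbook value) evaluates P = (W2 - W1)/(ρV1):

```python
# Liquid-displacement porosity P = (W2 - W1) / (rho * V1).
# rho: density of dehydrated alcohol (assumed ~0.789 g/cm^3); weights/volume are hypothetical.
RHO_ETHANOL_G_PER_CM3 = 0.789

def porosity(w1_g: float, w2_g: float, v1_cm3: float) -> float:
    """Fraction of the apparent scaffold volume filled by absorbed ethanol."""
    return (w2_g - w1_g) / (RHO_ETHANOL_G_PER_CM3 * v1_cm3)

# Hypothetical example: a 0.34 cm^3 specimen gaining 0.21 g of ethanol -> ~78% porosity.
print(f"{porosity(0.10, 0.31, 0.34):.1%}")
```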
In vitro soaking test
The structural stability and degradation were investigated by phosphate buffer solution (PBS) soaking. The n-HA/CS/CMC scaffold samples were dried and weighed, noted as W0. The samples were immersed in tubes containing 10 ml of PBS and kept oscillating at 37.0 ± 0.5°C. After soaking for 5, 10, 15, 25, and 30 days, the samples were withdrawn from the solution, gently rinsed with deionized water, dried, and weighed again, noted as W1. The rate of weight loss (WL) was calculated according to the formula WL = (W0 - W1)/W0 × 100%. Five parallel samples of the scaffold were tested.
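A similarly minimal sketch of the weight-loss formula WL = (W0 - W1)/W0 × 100% is given below; the sample weights are hypothetical and are meant only to mimic the roughly 30% loss reported after 30 days.

```python
# Weight loss during PBS soaking, W_L = (W0 - W1) / W0 * 100%.
def weight_loss_percent(w0_g: float, w1_g: float) -> float:
    return (w0_g - w1_g) / w0_g * 100.0

# Hypothetical dry weights after each soaking interval (initial weight 0.50 g).
for day, w1 in [(5, 0.47), (10, 0.44), (15, 0.41), (25, 0.38), (30, 0.35)]:
    print(f"day {day}: {weight_loss_percent(0.50, w1):.0f}% loss")
```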
In vitro cell culture experiments
To evaluate the cell biocompatibility of the scaffold, both MG63 cells and mesenchymal stem cells (MSCs) were investigated. MG63 cells are human osteosarcoma osteoblast-like cells, obtained from the Center for Cell Culture at Wuhan University. MSCs were isolated from neonatal Wistar rat calvaria by a sequential enzymatic digestion process and placed in a standard culture medium of Dulbecco's Modified Eagle Medium (D-MEM) (10% fetal bovine serum (FBS), 200 mg/ml penicillin, and 200 mg/ml streptomycin); MSCs at passage three were then transferred into culture medium containing osteogenic reagents (50 mg/l L-ascorbic acid, 10⁻⁸ mol/l dexamethasone, 10 mmol/l β-glycerophosphate, 10 mmol/l VitD3, 100 mg/ml penicillin, 100 mg/ml streptomycin, 0.3 mg/ml amphotericin, 2.2 g/l sodium bicarbonate, and 10% fetal bovine serum). The medium was changed every 2 days [26].
The n-HA/CS/CMC composite scaffolds, with a size of 2 × 10 × 12 mm³, were sterilized using ethylene oxide gas and placed in a 12-well cell culture plate. Approximately 2 × 10⁵ MG63 cells or MSCs were seeded on the scaffold and left undisturbed in an incubator for 3 h; an additional 1 ml of culture medium was then added into each well, and the scaffold/cell samples were cultured in an incubator (37°C with a humidified 5% CO₂ atmosphere) for 11 days. Medium was changed every 2 days. The controls (empty wells without scaffold) were treated in the same manner.
Estimation of cellular growth was accomplished using the MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-2H-tetrazolium bromide) assay. The medium in the cell-loaded scaffold culture plate was removed after culture for 1, 4, 7, and 11 days, and 2 ml of MTT solution was added to each sample. Following 4 h of incubation at 37°C in an air atmosphere containing 5% CO₂, DMSO was used to dissolve the formazan crystals, and the optical densities (OD) were determined using an Elisa microplate reader (ELx 800, BIOTEK) at 570 nm against a DMSO blank; OD correlates linearly with cell number [27]. At least 4 wells were randomly examined each time. The controls (empty wells without scaffold) were treated in the same manner.
In vivo implantation
Twelve skeletally mature adult female SD rats (approximately 2 months old and weighing 200 g) were anesthetized with pentobarbital sodium. After the skin was prepped and sterilized with iodine, a ~2 cm incision was made on the lateral thigh of the rat. The gluteus maximus muscle was exposed and an incision of ~1.5 cm was cut in the muscle to make a small pouch. The n-HA/CS/CMC composite scaffold, 4 × 8 mm² in size, was implanted into the muscle pouch. After 2 and 4 weeks of implantation, six rats were sacrificed at each time point and the scaffold samples were harvested. Three specimens were fixed with 4% formaldehyde for 4 d, then dehydrated in 50%, 75%, 85%, 95%, and 100% ethanol and embedded in paraffin. The specimens were cut into slices 5 μm in thickness and stained with H&E and Masson's trichrome for observation by light microscopy (Olympus, Japan). Meanwhile, the morphology of the other three specimens was observed by SEM after 2 and 4 weeks of implantation.
Physico-chemical properties of n-HA/CS/CMC composite scaffold
In order to illustrate the intermolecular interactions between the components of the system, IR spectra were recorded (Fig. 1). Comparing the IR spectra of pure n-HA, CS, CMC, and the n-HA/CS/CMC composite scaffold, it can be found that the characteristic peaks of pure n-HA, CMC, and CS all appear in the spectrum of the n-HA/CS/CMC composite scaffold, as shown in Fig. 1b, which suggests that the three components are not chemically altered after compounding. However, an absorption at 1655 cm⁻¹ in CS was shifted to 1621 cm⁻¹ in the n-HA/CS/CMC composite scaffold, and the peak of -NH₂ (1599 cm⁻¹) was absent, which may result from the formation of -NH₃⁺, while the asymmetric stretching peak of -COO⁻ is still found at 1420 cm⁻¹. Obviously, these observations indicate an electrostatic attraction between the -NH₃⁺ groups of CS and the -COO⁻ groups of CMC, which results in the formation of a CS/CMC polyelectrolyte network into which n-HA is readily incorporated. Additionally, to further demonstrate the distribution of n-HA in the composite, a TEM photograph of the n-HA/CS/CMC composite is given (Fig. 2). It shows that the n-HA crystals remain in the nanometer range and are well dispersed in the polymers, which has a positive effect on the mechanical and biological properties of the n-HA/CS/CMC composite scaffold.
The SEM images (Fig. 3) showed that the scaffold had a complicated, irregular three-dimensional porous structure with good interconnections between the pores (Fig. 3(a) and 3(b)), and the walls of the macropores, which ranged from about 100 to 500 μm, contained many micropores (Fig. 3(c)). In addition, the porosity was 77.8 ± 3.24%. More importantly, the compressive strength of the scaffold reached 3.5 ± 0.13 MPa, which is in the range of cancellous bone (2-10 MPa) [28]. Based on this highly porous structure, the scaffold is likely to promote cell adhesion and attachment as well as nutrient delivery to the site of tissue regeneration [29,30].
After PBS soaking for 30 days, the scaffold still retained its original shape, which indicates that the scaffold has good structural stability. In addition, Fig. 4 gives the weight loss of the n-HA/CS/CMC scaffold as a function of soaking time in PBS. The weight loss increased gradually and reached nearly 30% after 30 days of soaking, showing that the scaffold degraded with soaking time. Fig. 5 shows the morphology of cell attachment and spreading on the scaffold at different times, observed by phase-contrast microscopy. At 4 days, MG63 cells attached to the surface and spread well. After 7 days of culture, more and more cells attached tightly with their filopodia and lamellipodia and spread into every corner of the scaffold (Fig. 5(b)). Similarly, more MSCs adhered to the scaffold than MG63 cells after culturing for 4 days. By the seventh day, the cells had proliferated dramatically and aggregated to form stratified cell layers on the surface (Fig. 5(d)), indicating that the scaffold was nontoxic and suitable for the attachment and growth of both MG63 cells and MSCs. Cell proliferation on the scaffold was assessed using the MTT test. Fig. 6 gives the OD values after 1, 4, 7, and 11 days of culture. The data show that both MG63 cells and MSCs increased in number over the in vitro culture period, and the scaffold group showed an obvious proliferation tendency, suggesting that the scaffold did not retard cell proliferation and is therefore nontoxic.
In vitro cell attachment and proliferation
Based on the above results, we concluded that the novel n-HA/CS/CMC composite scaffold had good cell biocompatibility in vitro.
In vivo tissue biocompatibility
The tissue biocompatibility of the scaffold in vivo was evaluated by short-term implantation in the muscles of rats, with harvesting after 2 and 4 weeks. During the experimental period, all rats remained in good health, and the surgical incisions healed well without any wound complications. After harvesting at 2 and 4 weeks, histological sections of the scaffold specimens were stained with H&E and Masson's trichrome, respectively (Fig. 7). According to Fig. 7(a)-(c) (stained with H&E), the muscle cells were normal and there was no evident foreign body reaction in the surrounding tissues at 2 weeks of implantation (Fig. 7(a)). After 4 weeks of implantation, the scaffold had integrated well with the surrounding tissues and there was no visible interface between the muscle (M) and the scaffold (S) (Fig. 7(b)). Meanwhile, most of the scaffold had been biodegraded and many blood vessels (BV) had grown into the pores of the scaffold (Fig. 7(c)). Fig. 7(d)-(f) show the histological sections stained with Masson's trichrome, which was used to assess the formation of collagen and vascularization. After 2 weeks of implantation, bulk collagen (marked C, stained blue) had appeared (Fig. 7(d)); after 4 weeks, most of the scaffold surface was covered by collagen, indicating that the scaffold had good biocompatibility (Fig. 7(e)). Additionally, a large number of small blood vessels were also seen in the scaffold (Fig. 7(f)), which favors nutrient delivery and enables cells to survive and proliferate so as to construct new tissue.
In addition, to further investigate the biocompatibility in vivo, the SEM microstructure of scaffold samples harvested after 2 and 4 weeks of implantation was examined (Fig. 8). Many cells had grown into the pores of the scaffold after 2 weeks of implantation. Likewise, new tissue almost covered the pores at 4 weeks of implantation (Fig. 8(b)), indicating that the scaffold had good tissue biocompatibility.
Discussion
In order to achieve successful regeneration of damaged bone based on the tissue engineering concept, it is critical to select the proper components for developing a scaffold. In this study, we chose n-HA, CS, and CMC as raw materials, all of which have good qualities such as biocompatibility, bioactivity, and biodegradability; moreover, the design follows the bionic principle. On the other hand, there is a close relation between the physico-chemical properties of a scaffold and the structure of its components. In this paper, CS and CMC have similar structures and opposite electric charges, so they can interact strongly and form a polyelectrolyte network through electrostatic interaction in a solution of suitable acidity, while n-HA is simultaneously incorporated into the polyelectrolyte matrix. Thus, we can obtain the n-HA/CS/CMC composite in the form of a solidified mixture without adding any other cross-linking agents under ambient conditions, and subsequently achieve the porous scaffold by the freeze-drying method, which is not attainable for other scaffold systems under the same conditions, such as n-HA/CS and n-HA/CS-Gel [31].
In addition, the scaffold had a good three-dimensional irregular porous network structure with pore sizes of 100-500 μm. These macropores can promote the formation of internal mineralized bone, and the micropores and interconnected pores are all conducive to nutrient delivery. Moreover, the scaffold had an original compressive strength of 3.5 MPa at a porosity of 77.8%; it can therefore bear the reconstruction of new bone tissue. On the other hand, a biodegradable scaffold has great advantages. Here, CS and CMC are both biodegradable polymers, so in principle the n-HA/CS/CMC composite scaffold is a degradable scaffold. In this study, we chose PBS soaking, a suitable and reliable method, to investigate the degradation and structural stability. The results of the PBS soaking show that the composite scaffold is degradable. Moreover, it retained good structural stability after soaking for 30 days, which is also attributable to the intermolecular interactions between the components of the system and to the microstructure.
In a word, the novel degradable n-HA/CS/CMC composite scaffold prepared here had good physico-chemical properties, and it is suitable to act as a template for cell growth in bone tissue regeneration.
There is no doubt that a scaffold used for bone tissue engineering should be non-toxic and have good cell biocompatibility, which is a central criterion in ultimately deciding the feasibility of implantation in the body. The information on cell attachment and proliferation on the scaffold provided by cell culture experiments in vitro is often used as an important initial evaluation of cell biocompatibility. In the present study, MG63 cells and MSCs were used in the cell seeding experiments; the phase-contrast microscopy observations and the MTT assay results show that the n-HA/CS/CMC scaffold is nontoxic and cell biocompatible, which is attributable to its favorable three-dimensional porous structure and mechanical properties. However, the chemical composition and the method of fabrication of the scaffold play an even more important role in cell biocompatibility [32]. In this paper, n-HA, CS, and CMC are all nontoxic and hydrophilic. Moreover, compared with other methods of developing porous scaffolds, such as particulate leaching [33] or foaming [34], the method used here avoided the use of any toxic solvents during preparation. Therefore, the scaffold is cell biocompatible and suitable for implantation.
Although the in vitro tests have given some preliminary indications of cell biocompatibility, it is still necessary to obtain a much clearer idea of the host tissue response to the scaffold after implantation in vivo. In this study, the scaffold was placed in the muscle of rats for up to 4 weeks. For this short-period implantation, gross observation showed that all experimental rats survived until harvesting, which indicates that the implanted sample is nontoxic and elicits no evident foreign body reaction. By observing the histological sections, we found that most of the scaffold had been biodegraded after 4 weeks, which results from the biodegradable components CS and CMC and is useful for the ingrowth of blood vessels and new tissue. Moreover, the tissue showed no obvious inflammation, which indicates that the scaffold itself and its degradation products are all nontoxic. In addition, most of the scaffold was covered by collagen, suggesting that the scaffold had a degree of osteoinduction owing to the biomimetic matrix of the n-HA/CS/CMC composite scaffold. On the other hand, we observed the microstructure of the samples by SEM after 2 and 4 weeks; the results also showed that various types of cells and tissues grew into the pores and even covered the scaffold, which is further evidence of the tissue biocompatibility of the scaffold.
In conclusion, the results of the cell culture experiments in vitro and the short-period implantation in vivo indicated that the scaffold had good biocompatibility, biodegradability, and a degree of osteoinduction. All these results show that the scaffold can meet the biological requirements for bone tissue engineering.
Conclusion
In this paper, a novel degradable n-HA/CS/CMC composite scaffold was developed by a freeze-drying method. Based on the above analyses and discussion, it can be concluded that the scaffold had desirable physico-chemical properties due to the strong ionic cross-linking interactions between CS and CMC, such as a highly interconnected, irregular porous network structure (pore sizes ranging from about 100 to 500 μm) and a compressive strength of 3.5 MPa at a porosity of 77.8%, which would meet the basic requirements for new bone tissue growth. In addition, MG63 cells and MSCs attached and proliferated well on the scaffold during the in vitro culture period, and the short-period implantation experiment in vivo further confirmed that the scaffold elicited no foreign body reaction and that many blood vessels grew into the porous structure while the scaffold was gradually biodegraded. Meanwhile, most of the scaffold surface was covered by collagen after 4 weeks of implantation, which shows that the scaffold had good tissue biocompatibility. In conclusion, the above results indicate that the n-HA/CS/CMC composite scaffold is a novel biodegradable porous material with desirable physico-chemical and biological properties that meet the essential requirements for bone tissue engineering materials, suggesting potential applications in the field of bone tissue engineering; this is also a new approach to exploiting the most abundant natural resources, cellulose and chitin.
"Materials Science",
"Biology",
"Engineering"
] |
Foraging Behavior of the Blue Morpho and Other Tropical Butterflies: The Chemical and Electrophysiological Basis of Olfactory Preferences and the Role of Color
Inside a live butterfly exhibit, we conducted bioassays to determine whether the presence of color would facilitate the location of attractants by the butterflies. It was found that color facilitated odor attraction in some species that feed on flowers (Parthenos sylvia, Heraclides thoas, Dryas julia, and Idea leuconoe), but not in the exclusively fruit-feeding species, such as Morpho helenor, hence demonstrating that species with different natural diets use different foraging cues. Green, ripe, and fermented bananas were evaluated for their attractiveness to butterflies together with honey and mangoes. The fermented bananas were determined to be the most attractive bait, and the electrophysiological responses to their volatiles were studied in Morpho helenor and Caligo telamonius. During GC-EAD evaluation, fifteen different aliphatic esters, such as isobutyl isobutyrate, butyl acetate, ethyl butanoate, and butyl butanoate (both fermentation products and fruit semiochemicals), were shown to be detected by the butterflies' sensory apparatus located in the forelegs, midlegs, proboscis, labial palpi, and antennae. Legs, proboscis, and antennae of Morpho helenor and Caligo telamonius showed similar sensitivity, reacting to 11 chemicals, while labial palpi had a lower signal-to-noise ratio and responded to seven chemicals, only three of which produced responses in other organs.
Introduction
Although the mechanisms involved in foraging for food have been studied in several model species of butterflies, much remains unknown for this ecologically and physiologically diverse group of ca. 20,000 species. It is known that butterflies possess trichromatic vision [1], which has a rather complex mechanism and evolutionary history [2]. It has been shown to play a role in the selection of potential mates [3], as well as in the location of adult food sources [4]. Butterflies possess the ability to discriminate even between fine variations of color, as has been shown in Heliconius charithonia L. [5]. Many species have the capacity to learn to associate colors with a food source [6]. Papilio xuthus L., for example, was successfully trained to feed on a sucrose solution placed on a disk of a particular color, and after a few such training sessions the butterfly was able to select that color from an array of multicolored disks [7]. Butterflies can overcome their natural preferences, as in the case of newly emerged Heliconius, for which a color stimulus can overtake scent in importance after conditioning [8]. The same was found in monarchs, Danaus plexippus L., which show strong innate color preferences but can rapidly learn to associate colors with sugar rewards, doing so for non-innately preferred colors as quickly and proficiently as for innately preferred colors [9].
Although butterfly foraging is normally associated with feeding on nectar or pollen, many butterfly species, especially in the tropics, do not feed on flowers but instead are attracted to nitrogen-rich substances, such as feces and carrion, or feed on fermented fruit, tree sap, and other less colorful, but odorous, substrates [10]. These food sources contain different volatile attractants than nectar, and species feeding on them are expected to use a different set of foraging cues than those used by purely flower-feeding species [11]. These rotting foods are characterized by low sugar concentrations and the presence of fermentation products (ethanol and acetic acid) [12]. However, the chemical composition of such rotting foods and the effects of these constituents on butterfly feeding behavior have rarely been investigated. As with other flower-visiting insects, the importance of visual and olfactory cues most likely varies with species, with each species having a unique favored combination of color cues and chemical compounds [13,14].
Scent plays an important role in foraging, sometimes acting synergistically with color. In Vanessa indica Herbst, it was found that either scent or visual stimulus (artificial flowers used in experiments) acts as the important cue, depending on the particular color [4]. In another study, hawkmoths were attracted to flower models by either olfactory or visual cues, but only simultaneous exposure to both stimuli elicited feeding [15]. This reliance on perceiving both cues simultaneously is supported by neurological examinations of Lepidoptera [16], which revealed that activity in the mushroom bodies of the hawkmoth, Manduca sexta L., is influenced by both olfaction and vision during foraging. While testing color preferences in butterflies can be a relatively straightforward task, identifying the specific compounds that they find attractive is often more difficult. Andersson and Dobson [17,18] identified specific volatile compounds present in flowers commonly visited by butterflies and showed that there were antennal responses to most of these compounds. It was suggested that the presence of these compounds may be a result of adaptive pressure on flowers and host plants to specifically attract butterflies.
Not only do olfactory stimuli play an important role in locating food sources and potential mates, but they are also crucial in locating the right host plants for oviposition. Plant odors, which are complex blends of dozens, if not hundreds, of chemicals, are thought to have specific compounds, unique to different plants, that are attractive to insects (e.g., [13]). It has been shown that electroantennographic responses can be elicited from butterflies with volatiles collected from the leaves of their corresponding host plants (e.g., [19][20][21]). Butterflies can be stimulated to lay eggs on certain plants by specific volatile compounds, as well as deterred from doing so by others (e.g., [22][23][24][25][26][27]). For instance, it was found that the oviposition behavior of female Papilio xuthus can be induced by methanol extracts of fresh leaves of Citrus plants [22], while hydroxybenzoic acid derivatives in a nonhost rutaceous plant deter both oviposition and larval feeding [28].
Considering the above studies, an extremely complex picture of butterfly sensory systems and of their foraging mechanisms emerges. It is clear that not only controlled experiments, but also field behavior and ecological studies involving a variety of species and scenarios, in combination with electroantennographic analyses, are necessary before we can fully understand the foraging strategies employed by Lepidoptera.
In the present study, by offering various combinations of color and scent, we examine how a diverse assemblage of tropical butterflies in a live butterfly exhibit responds to foraging cues. Red and yellow were chosen for the trials, as these colors are common in flowers, and they were most easily associated with a food source in a study of the swallowtail butterfly, Papilio xuthus [7]. The unique opportunity to examine foraging cues in a semicontrolled environment for multiple species of tropical butterflies with different ecologies and multiple individuals of different ages, sexes, and levels of foraging experience was presented by the Butterfly Rainforest facility at the University of Florida. Conducting experiments in these settings allowed the experiment to partially simulate a natural environment, while greatly increasing the chance of response due to a high density of butterflies and a variety of species present.
Following the initial bioassays, the most frequently observed species, Morpho helenor Cramer, was chosen for further electrophysiological examination. In this fruit-feeding forest species, color was expected to play little or no role in locating food, because rotten fruit on the forest floor is hardly discernible from the leaf litter. M. helenor must, therefore, rely on scent to find its food, and as a result this species was expected to be particularly sensitive to olfactory stimuli. GC-EAD analyses were used to examine the antennal responses of M. helenor to the volatile compounds present in ripe banana, and another fruit-feeding species, Caligo telamonius Felder, was used for comparison.
Previous studies showed that gustatory organs in insects often exhibit olfactory abilities [13] (and references therein) and that olfactory receptors are present not only on antennae, but also on other appendages [29]. Hence, we decided to electrophysiologically examine not only the antennae, which are traditionally thought of as having an olfactory function, but also the proboscis, the foreleg, the midleg, and the labial palpi. Five cardboard landing pads covered with red (650 nm wavelength, 49% reflectivity), yellow (570 nm wavelength, 35% reflectivity), or black (7% reflectivity) paper were placed simultaneously on the railings ca. 1 m above the enclosure floor. Three of the pads (red, yellow, and black) were covered with 10% honey solution, while two of the landing pads (red and yellow) were used as controls and were left unbaited. The landing pads were placed so that each pad received approximately the same amount of sunlight. The positions of the landing pads were rotated at random every hour to prevent butterflies from memorizing a specific location of bait. Honey solution was reapplied with a sponge every 30 min. Butterfly landings (physical contact with a circle, either by direct landing or by landing near the pad and then crawling onto it) were recorded and photographed.
Bioassay II.
In 2010, a second bioassay was conducted to determine which bait (mango, banana (green, ripe, and fermented), or honey) was the most attractive to M. helenor. Four separate replications were performed between October 25 and November 29, 2010. The trials were conducted between 16.00 h and 17.00 h on each occasion (RH: 62-79%, temp.: 20-29 °C).
In this bioassay, the butterflies were simultaneously offered a choice between eight "scent stations" which emitted scents of mango, banana, or honey. The control station contained no bait. Scent stations consisted of a glass jar (7 cm tall, 4 cm in diameter) that contained the bait, placed under an upended plastic cup (12 cm tall, 8 cm in diameter) with holes punctured in the bottom to allow scent to escape (Figures 3(a)-3(c)). Each type of bait included one red (650 nm wavelength, 49% reflectivity) and one clear cup to test for possible color preference. The distance from the holes to the cups with the attractants was greater than the proboscis length of a butterfly, preventing them from feeding. The bioassay was designed so that butterflies were unable to make physical contact with the attractants/bait and were only able to land individually on each "scent station." Thus, in Bioassay II, we were able to eliminate possible factors of gregarious behavior and conditioning from our experimental design (perceived shortcomings associated with Bioassay I). The scent stations were placed on the same railing as in Bioassay I and their positions were randomly rotated every 30 min. Butterfly landings (physical contact with a cup) were recorded by species for each scent station. Separately, green banana bait was compared with fermented banana. Landing data from Bioassay I were compared with a Mann-Whitney statistical test using PAST software, and ANOVA was also used when analyzing the results of Bioassay II. Chemical and electrophysiological analyses were conducted at the Chemistry Research Unit of the Center for Medical, Agricultural and Veterinary Entomology (USDA-ARS, Gainesville, FL).
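For orientation only, the kind of comparison described above could be reproduced outside PAST with a few lines of SciPy; the landing counts in this sketch are hypothetical and are not the study's data.

```python
# Hedged sketch: Mann-Whitney U test (Bioassay I style) and one-way ANOVA (Bioassay II style)
# on made-up landing counts, mirroring the statistics reported to have been run in PAST.
from scipy import stats

# Hypothetical per-trial landings on two pad types.
red_baited = [21, 18, 25, 19]
black_baited = [12, 9, 15, 11]
u, p_mw = stats.mannwhitneyu(red_baited, black_baited, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p_mw:.3f}")

# Hypothetical landings per replicate on three bait types.
fermented_banana = [30, 27, 33, 29]
ripe_banana = [14, 16, 12, 15]
honey = [5, 7, 6, 4]
f_stat, p_anova = stats.f_oneway(fermented_banana, ripe_banana, honey)
print(f"ANOVA F = {f_stat:.2f}, p = {p_anova:.4f}")
```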
Volatile Collection.
To obtain fermented banana, unpeeled ripe bananas were cut into thin pieces and placed into a resealable plastic bag, which was stored outside for 6 days (RH: 80%; temp.: 31 °C), thus inducing the fermentation process. Banana volatile collections were conducted using a dynamic headspace adsorption method. Bananas were sliced and placed on a sheet of aluminum foil, which was put into a 1.7 L glass volatile collection chamber (Agricultural Research Systems, Gainesville, FL). Clean air passed into the chamber through Teflon tubing (Cole Parmer, Vernon Hills, IL); air flow was regulated with a flow meter (Aalborg, Orangeburg, NY) set to 110 mL/min. The air passed over the fruit before being drawn out with a vacuum (also regulated with a flow meter). As the air was pulled from the chamber, it passed through a filter containing 50 mg Super Q (Alltech, Nicholasville, KY), which captured volatiles. Volatile collections were conducted for 2 hrs. To elute volatiles trapped in the filter, 200 µL of methylene chloride (Sigma-Aldrich, St. Louis, MO) was added to the filter and pushed through with nitrogen gas into a 1.5 mL glass vial (Sun-Sri, Rockwood, TN) with a 0.25 mL conical insert (Sun-Sri, Rockwood, TN). The volatiles collected for GC-EAD did not have an internal standard added. Those for identification and quantification had 5 µL of an 80 ng/µL nonyl acetate (Sigma-Aldrich, St. Louis, MO) solution in methylene chloride added before the filter was eluted. Samples were stored at −80 °C until they were used for assays.
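For readers unfamiliar with internal-standard quantification, the sketch below shows the back-of-the-envelope arithmetic implied by the spiking procedure above; the peak areas are hypothetical, and the assumption of equal FID response factors for analyte and standard is ours, not the authors'.

```python
# Internal-standard quantification sketch (hypothetical peak areas; equal response
# factors assumed). 5 uL of an 80 ng/uL nonyl acetate solution = 400 ng of standard.
IS_TOTAL_NG = 5.0 * 80.0        # internal standard added to the filter
ELUATE_VOLUME_UL = 200.0        # methylene chloride used to elute the Super Q filter

def analyte_amount_ng(peak_area_analyte: float, peak_area_is: float) -> float:
    """Estimate analyte mass from its peak-area ratio to the internal standard."""
    return peak_area_analyte / peak_area_is * IS_TOTAL_NG

# An analyte peak half the area of the standard peak -> ~200 ng,
# i.e. about 1 ng/uL in the 200-uL extract.
amount = analyte_amount_ng(1.5e6, 3.0e6)
print(amount, amount / ELUATE_VOLUME_UL)
```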
Volatile Bioassay.
Volatiles were tested in the Butterfly Rainforest by deploying 10 µL with a syringe onto 5.5-cm qualitative filter paper and observing the number of butterfly landings for 1 h. Volatiles were reapplied every 30 min. It was determined that they maintained the same level of attraction for the butterflies as fermented banana.
GC-MS Analyses.
Chemical analyses and quantifications were conducted using both gas chromatography-mass spectrometry (GC-MS) and gas chromatography-flame ionization detection (GC-FID). Volatile chemical identities determined by GC-MS analyses (Figure 4) were confirmed by analysis of synthetic standards, comparing retention times and MS fragmentation patterns for 2-methylbutyl acetate, butyl acetate, ethyl butyrate, hexyl acetate, butyl butyrate, and propyl acetate. Other chemicals were identified from their MS fragmentation patterns using ChemStation software (Agilent Technologies, Santa Clara, CA) and the NIST mass spectral library (NIST, Gaithersburg, MD): all chemicals matched at least 90%. For GC-FID, samples of the extracts were injected (1 µL) in splitless mode, with the injector purged at 1 min. Helium was used as the carrier gas at a linear flow velocity of 20 cm/sec. The oven was held at 35 °C for 5 min, then increased at 5 °C/min to 75 °C and then at 10 °C/min to a final temperature of 230 °C. EI GC-MS analyses of extracts from Super Q filters were conducted using a Hewlett Packard HP6890 GC interfaced to an HP5973 MS and equipped with a 30 m × 0.25 mm ID HP-1 column. The ion source temperature was 220 °C and the transfer line was held at 240 °C. The GC oven was operated under the same conditions as for the GC-FID analyses.
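As a quick sanity check of the oven program quoted above (our arithmetic, not a statement from the paper), the hold and ramp segments add up to roughly a 28.5 min run:

```python
# Duration of the GC oven program: 35 degC for 5 min, 5 degC/min to 75 degC,
# then 10 degC/min to 230 degC.
segments = [
    ("hold at 35 degC", 5.0),
    ("ramp 35 -> 75 degC at 5 degC/min", (75 - 35) / 5),      # 8.0 min
    ("ramp 75 -> 230 degC at 10 degC/min", (230 - 75) / 10),  # 15.5 min
]

total = 0.0
for name, minutes in segments:
    total += minutes
    print(f"{name}: {minutes:.1f} min")
print(f"total program time: {total:.1f} min")  # 28.5 min
```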
Gas Chromatography Electroantennographic Detection.
The coupled GC-EAD was conducted using a GC-FID (Hewlett Packard HP 6890 equipped with an Alltech Econo-Cap EC-1 30 m × 0.25 mm column), a heated transfer line, and humidified cool air to introduce the separated chemicals to the insect organ. Chromatographic conditions were the same as those used for the previously described GC-FID analyses. The GC was connected to both a flame ionization detector (FID) and an electroantennographic detector (EAD) (Syntech, The Netherlands), with a split ratio of 1:1. The EAD column discharged into a glass tube, and a humid air stream chilled and transported the volatile compounds to the organ situated on the electrode. Labial palpi, antennae, legs, proboscises, and forelegs (SEMs of these organs are shown in Figure 6) were procured from two live female Morpho helenor. In total, four midlegs, four forelegs, four antennae, and two proboscises were examined. The antennae, proboscises, legs, and forelegs were cut in half. Electrode gel (Spectra 360, Parker Labs, Fairfield, NJ) was spread evenly on both sides of a PRG-2 probe (Syntech, The Netherlands). Under a microscope, both halves of the organ were placed 2 mm apart on the electrode. The ends of both halves of the organ were covered with the electrode gel, so that only the middle remained uncovered and suspended between the two sides of the electrode. When preparing the labial palpi for the GC-EAD analysis, they were not cut in half and were placed onto a 4 mm wide probe. Similar techniques were applied to a single female of the owl butterfly, Caligo telamonius Felder. Once this was set up, the probe was attached to an Intelligent Data Acquisition Controller, IDAC-232 (Syntech, The Netherlands), that interfaced with a personal computer running EAD2000, Version 2.6 (Syntech, The Netherlands), to record both the output from the FID and the organ. This enabled the pairing of the insect electrophysiological responses with the corresponding FID signals.
Scanning Electron Microscopy.
The morphology of the electrophysiologically examined organs of M. helenor was illustrated using a scanning electron microscope (JEOL JSM-5510-LV) at the Florida State Collection of Arthropods.
Bioassay I.
A total of 507 landings by 18 species were recorded (Table 1). The Clipper, Parthenos sylvia, which landed a total of 98 times (19% of all landings), showed a strong preference for the red baited circle (P < 0.02) (Figure 1). Hence, in P. sylvia, the combination of the red color and the scent of honey produced the strongest response. Similarly, Heraclides thoas (L.) (Papilionidae), Dryas julia (Fabricius), and Idea leuconoe Erichson (Nymphalidae), which in combination accounted for 9% of the landings, also showed a statistically significant preference for the red baited pad. For H. thoas, 85% of the landings (N = 13) were on the red baited pads; D. julia landed on red baited pads 100% of the time (N = 14); and I. leuconoe, 71% (N = 17).
Figure 4: GC-MS chromatograms of volatiles collected from fermented banana. Numbered compounds produced EAD responses in the antennae, proboscis, forelegs, midlegs, and labial palpi of Morpho helenor and Caligo telamonius (compound no. 13, propyl acetate, was determined using GC-FID). For compound names, see Table 2.
Morpho helenor was the dominant species in our study, responsible for 63% of all landings (N = 319). For this species, the three baited pads were significantly more attractive than the nonbaited pads (P < 0.02), but no significant color preference was found (Figure 2).
Bioassay II.
In the second bioassay, which was primarily aimed at determining the preferred food source of Morpho helenor, banana was found to be more attractive than mango, honey, and the control. To test the significance of color in the foraging of this species, red and clear cups were offered for each type of bait. No significant difference (P > 0.05) was found in the color preferences of Morpho helenor: it landed as frequently on red cups as it did on clear cups (Figure 3(d)), supporting the results of Bioassay I. Green, ripe, and fermented bananas were tested for their attractiveness, and the fermented bananas were found to be significantly more attractive.
GC-EAD Analyses.
Morpho helenor showed electrophysiological responses to a total of fifteen compounds (all esters) from the fermented banana volatiles (Figure 5, Table 2). Four of the five organs analyzed (proboscis, foreleg, midleg, and antenna) reacted with similar intensity to the same 11 chemicals (Figure 5). The labial palpi reacted to three of these chemicals as well as to four other chemicals which did not elicit responses from the proboscis, legs, or antennae, and their responses were weaker. A single owl butterfly, a female Caligo telamonius, was analyzed in the same manner in which we studied Morpho helenor, and the EAD responses of its organs were also similar. In Figure 4, we provide a chromatogram from fermented bananas to show the presence and abundance of attractive compounds. The chromatograms obtained initially from green, ripe (ready-to-eat stage), and fermented bananas indicate that the number and abundance of volatile compounds increased as the fruit aged, and that fermentation is responsible for a number of the compounds absent in green fruit. The electrophysiological responses to these volatiles in Morpho helenor and Caligo telamonius (Figures 5(a) and 5(b)) indicate that different aliphatic esters, such as isobutyl isobutyrate, butyl acetate, ethyl butanoate, and butyl butanoate, which are both fermentation products and fruit semiochemicals, produce responses in the olfactory organs of fruit-feeding butterflies. Some of these compounds, such as 3-methylbutyl acetate (also known as isoamyl acetate), are responsible for the banana odor of banana oil, while others, such as isobutyl isobutyrate and ethyl 2-methylpropanoate, are products of fermentation.
Morphology.
The examination of the legs, proboscis, labial palpi, and antennae using SEM showed the presence of sensilla on all of these organs (Figure 6). Organs other than the proboscis possess long sensilla that have been described as olfactory [29]. We also illustrate gustatory sensilla located on the proboscis, on the mid-proximal part of the galeae (Figure 6(k)). These are possibly responsible for both gustatory and olfactory functions, as is the case with the sensilla styloconica found on the proboscis of the cabbage armyworm [30].
Discussion
Bioassay I showed that different species are likely to exhibit different foraging behaviors and that generalized conclusions should not be drawn from experiments conducted on one or two species. The fact that the flower-feeding butterflies Parthenos sylvia, Heraclides thoas, Dryas julia, and Idea leuconoe showed a preference for a specific color (red) during these bioassays confirmed that color cues are important for butterflies during foraging. Many flowers in the Butterfly Rainforest are red, and the nectar-feeding butterflies could have learned to associate this color with the nectar reward. Repeated landing on the red baited circles and obtaining the honey solution reward could have further conditioned these butterflies. The outcome of this assay supports previous studies (e.g., [4,8,31]), which have found that a combination of stimuli is necessary to elicit a feeding response and that some butterflies have the ability to discriminate between colors. While Parthenos sylvia, Heraclides thoas, Dryas julia, and Idea leuconoe greatly preferred the red baited circles, the fruit-feeding Morpho helenor did not discriminate between colors and equally visited red, yellow, and black baited circles, while avoiding the unbaited controls (regardless of color). In the Butterfly Rainforest, where M. helenor are fed yellow mangoes and bananas, we expected a preference for yellow baited circles, yet this species exhibited no such preference. In the darkness of the forest floor, where M. helenor forage naturally for rotting/fermenting fruit that have fallen from the canopy, visual cues are reduced. Hence, the ecology of this species corresponds with our findings: M. helenor relies more upon volatile cues than color when foraging.
The EAD analyses performed on various organs of Morpho helenor and Caligo telamonius suggest that rotting fruit volatiles can be perceived not only by sensilla located on the antennae (as has been traditionally thought), but also by those located on the proboscis, labial palpi, and legs. It is possible that the olfactory receptors covering the legs, antennae, proboscis, and labial palpi simultaneously send messages to the butterfly's brain, thereby increasing the magnitude of the signal. This supports previous studies which indicated a possible olfactory role of the proboscis ([30] and references therein). Previously proposed functions of the labial palpi include the detection of sexual pheromones, attunement to adult food sources, stimulants for migration, a shield for the proboscis, and wipers for cleaning the eyes of the butterfly [29]. The present study indicates that the labial palpi are equipped to detect some of the volatile chemicals present in adult food sources. However, the array of chemicals perceived by the palpi is different from the array perceived by the antennae, legs, or proboscis, with an overlap of only three compounds (out of 15 total, Table 2). Such a distribution of different functions between organs might contribute to the efficiency of olfactory processing in butterflies.
When butterflies are foraging for fermenting fruit, they use cues from the fruits themselves and from the fermentation products produced by the rotting fruit [11]. This notion has been confirmed by the present study: Morpho helenor reacted to several compounds that are products of fermentation (e.g., isobutyl isobutyrate; ethyl 2-methylpropanoate), as well as several compounds that are commonly found in fruit (e.g., butyl acetate; ethyl butanoate; butyl butanoate) and are responsible for fruity odor. When we initially compared the volatiles released by unripe, ripe, and fermented banana using GC-MS, it was generally observed that the abundance of the volatile compounds increased as the fruit fermented. It was also observed that the GC-MS of fermented banana volatiles contained compounds that are not present at all in the chromatograms of green and ripe bananas. This explains why fruit-feeding butterflies are not attracted to unripe fruit: to locate their food source, they require volatile compounds associated with both fruit and fermentation. Fermentation cues must also play a role when butterflies are locating other food sources, such as rotting fish, dung, and carrion.
The specific chemistry of banana volatiles and the responses these chemical compounds elicit in Morpho helenor and Caligo telamonius are the most analytical and controlled parts of our study, and yet they may also be the most ambiguous when it comes to drawing conclusions. For instance, although we determined that the most abundant compound in the fermented banana volatiles that M. helenor and C. telamonius reacted to was 3-methylbutyl acetate, this does not necessarily mean that it is the banana semiochemical that attracts butterflies, as we do not know what role the abundance of a chemical plays. It is known that semiochemicals, such as pheromones, do not need to be present in great volume to trigger a response [32]. The response-triggering chemical could be any of the ones that are present in the volatiles or a combination of two or more chemicals. In fact, a detected chemical could be attractive, repellent, or completely ignored by a butterfly. Hence, the next step to advance the understanding of this system will be securing (through synthesis or commercial acquisition) the individual banana semiochemicals that were identified by the present study, and conducting bioassays with these compounds. This will allow us to determine which compound(s) are responsible for butterfly attraction.

(USDA-ARS) for their support and assistance with the use of equipment. They would particularly like to thank Rebecca Blair for her help with the GC-EAD. Comments by the anonymous reviewers greatly improved the quality of the paper.
Figure 1: Bioassay I. The Clipper butterflies, Parthenos sylvia (a), showed a strong preference for the red baited landing platforms when offered an array of other choices.
Figure 3: Bioassay II. (a-c) Scent stations attracting butterflies in the Butterfly Rainforest. (d) Graph showing the frequency of landings by Morpho helenor on bait stations emitting smells of mango (M), banana (B), or honey (H). C: clear cups, R: red cups. CC and CR are control (no bait) stations.
Table 1: Butterfly species that visited landing platforms during Bioassay I.
°C), thus inducing the fermentation process. Banana volatile collections were conducted using a dynamic headspace adsorption method. Bananas were sliced and placed on a sheet of aluminum foil, which was put into a 1.7 L glass volatile collection chamber (Agricultural
(Photos of these and other species found in the Butterfly Rainforest can be viewed at http://www.flmnh.ufl.edu/butterflies/guide/.)
Table 2: Chemical volatile compounds collected from fermented banana and the presence (+) of EAD responses to these compounds in various appendages of the Blue Morpho butterfly (Morpho helenor). | 5,905.2 | 2012-04-11T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Multimodal functional deep learning for multiomics data
Abstract With rapidly evolving high-throughput technologies and consistently decreasing costs, collecting multimodal omics data in large-scale studies has become feasible. Although studying multiomics provides a new, comprehensive approach to understanding the complex biological mechanisms of human diseases, the high dimensionality of omics data and the complexity of the interactions among various omics levels in contributing to disease phenotypes present tremendous analytical challenges. There is a great need for novel analytical methods to address these challenges and to facilitate multiomics analyses. In this paper, we propose a multimodal functional deep learning (MFDL) method for the analysis of high-dimensional multiomics data. The MFDL method models the complex relationships between multiomics variants and disease phenotypes through the hierarchical structure of deep neural networks and handles high-dimensional omics data using the functional data analysis technique. Furthermore, MFDL leverages the structure of the multimodal model to capture interactions between different types of omics data. Through simulation studies and real-data applications, we demonstrate the advantages of MFDL in terms of prediction accuracy and its robustness to the high dimensionality and noise within the data.
Introduction
Advances in high-throughput technologies have enabled us to collect enriched multiomics datasets that capture the high-dimensional and complex variations at various omics levels. This collected multimodal omics data, which includes the genome, epigenome, transcriptome, proteome, metabolome, etc., allows for a systematic study of how different omics levels act jointly to affect human diseases. While the emerging multiomics datasets hold great promise for enhancing our understanding of these diseases, the high dimensionality, complex inter-relationships, low signal-to-noise ratio, and issues with data quality (e.g. missing values) in the multiomics data pose considerable analytical challenges [1].
Over the past two decades, a variety of methods have been developed for multiomics data analysis: methods such as Similarity Network Fusion [2] and mixOmics [3] select, extract, and integrate multiomics features. Other tools, like MultiOmics Factor Analysis [4] and miodin [5], integrate data based on factor analysis, which can have computational efficiency issues [6]. To alleviate the computational burden, dimension reduction techniques have been widely applied in multiomics data analysis [7].
Commonly used dimension reduction approaches include extensions of classical methods [8], such as penalized canonical correlation analysis (CCA) [9], sparse CCA [10], generalized SVD [11], co-inertia analysis (CIA) [12], sparse extensions of partial least squares (PLS) [13], and the self-paced learning L1/2 absolute network-based logistic regression model (SLNL) [14]. However, these approaches have inherent limitations on sparse data. More recently, state-of-the-art machine learning (ML) methods have been increasingly used on multiomics data, including a novel MultiOmics Meta-learning Algorithm (MUMA) [15] and other methods reviewed by Chung et al. [16]. Although ML-based feature selection approaches [17] and ML-based clustering methods [18] have notably tackled the computational efficiency challenges of high-dimensional datasets, most ML methods still suffer from overfitting problems when integrating multiomics datasets [6]. In a recent report [19], we developed a new functional neural network (FNN) method that incorporates functional data analysis (FDA) techniques to account for the underlying structure of genetic data, such as the linkage disequilibrium (LD) among neighboring variants, which successfully alleviates the overfitting issue in high-dimensional genetic data. In this study, we propose a multimodal functional deep learning (MFDL) method to facilitate multiomics data analysis, with advantages in overfitting control, genetic structure modeling, and multiomics data integration, which will be illustrated in detail below.
In the proposed MFDL method, we introduce an omics variant function by fitting a series of basis functions to each type of omics data (e.g. genome, epigenome, transcriptome, etc.) in the input layer, then integrate these fitted functions into the dimension-reduced hidden layers as a shared representation. From this shared representation, additional hidden layers are formed to continue the training of the model and to learn the complex relationships between multiomics and the phenotype of interest. The MFDL model has the following unique advantages: (i) it inherits the robustness of the FNN method to high-dimensional datasets and low signal-to-noise ratios by utilizing FDA techniques; (ii) its flexible multimodal structure allows it to learn a shared representation through the hidden layers of deep neural networks, which facilitates the capture of interactions and correlations between multiple omics inputs; and (iii) the MFDL model can analyze outcomes in various forms (e.g. scalar, vector, or functional outcomes) and complex nonlinear relationships between outcomes and multiomics inputs. Through simulation studies and two real data applications, we demonstrate the superiority of the MFDL models in terms of both accuracy and robustness compared to the functional linear model (FLM), FNN, and feedforward artificial neural networks (NN) in multiomics data analysis.
The paper is organized as follows: section "System and Methods" introduces the MFDL model with a brief overview of the FLM and the FNN method. In section "Simulation studies," we conduct three simulations to compare the performance of the proposed MFDL with FLM and FNN under various simulation settings. In section "Real data application," we demonstrate the MFDL through two real data applications. Section "Discussion" discusses the merits of the proposed method and future directions. Technical details are included in Appendix 1.
System and Methods
To motivate the MFDL model, we first introduce the FLM and FNN methods for genetic data analysis along with the notation used in this paper.Building on these methods, we propose the MFDL model to accommodate multiple omics inputs and complex phenotypes.
For the i-th individual of the study, we denote $y_i$ as the phenotype and $g_{ki} = (g_{ki1}, g_{ki2}, \ldots, g_{kip_k})$ as the k-th omics input with dimension $p_k$, for $i = 1, \ldots, n$ and $k = 1, \ldots, m$. Without loss of generality, for the rest of the paper, we state the models in the case of $m = 2$.
Functional linear model
A functional linear model (FLM) can be constructed from a traditional linear model by substituting the vector of covariate observations with functional covariates, provided that at least one of the following conditions holds: (i) the dependent or response variable is considered functional and (ii) one or more of the independent variables or covariates are considered functional [20].In the context of genetic data analysis, to evaluate the joint association of multiple omics levels with a disease phenotype, an FLM can be formulated by incorporating multiomics variants as functional covariates [21].
For each type of omics data with available location information (e.g. a gene), we scale the location information to [0, 1], denoted as $t_k$. We construct the omics variant function $G_{ki}(t_k)$ using a linear combination of Dirac delta functions [19]. The FLM incorporating the two functional inputs $G_{1i}(t_1)$ and $G_{2i}(t_2)$ can be expressed as

$$y_i = \alpha_0 + X_i^{\top} \alpha + \int_0^1 G_{1i}(t_1)\,\beta_1(t_1)\,dt_1 + \int_0^1 G_{2i}(t_2)\,\beta_2(t_2)\,dt_2 + \epsilon_i,$$

where $\alpha_0$ is the overall mean, $\alpha$ is the regression coefficient of the covariates $X_i$, and $\beta_1(t_1)$ and $\beta_2(t_2)$ are the functional genetic effects of $G_{1i}(t_1)$ and $G_{2i}(t_2)$. $\epsilon_i$ is a normally distributed error term.
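As a rough illustration (not the authors' implementation), an FLM of this form can be estimated by expanding each functional coefficient in a small basis and approximating the integrals with Riemann sums, which reduces the problem to ordinary least squares. The polynomial basis, function names, and signatures below are illustrative assumptions.

```python
import numpy as np

def basis_matrix(t, n_basis=5):
    # Simple polynomial basis on [0, 1]; a stand-in for the B-spline bases
    # commonly used in functional data analysis.
    return np.vstack([np.asarray(t, float) ** j for j in range(n_basis)]).T   # (p, n_basis)

def fit_flm(G1, t1, G2, t2, X, y, n_basis=5):
    """G1: (n, p1), G2: (n, p2) discretized omics functions observed at t1, t2."""
    B1, B2 = basis_matrix(t1, n_basis), basis_matrix(t2, n_basis)
    # Riemann-sum approximation of  ∫ G_ki(t) beta_k(t) dt  with  beta_k = B_k c_k.
    Z = np.hstack([np.ones((len(y), 1)), X,
                   (G1 @ B1) / G1.shape[1],
                   (G2 @ B2) / G2.shape[1]])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef  # [alpha_0, alpha, c_1, c_2]
```

The recovered basis coefficients determine $\beta_k(t_k) \approx B_k c_k$ on the observation grid; the choice of five basis functions is arbitrary here.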
Functional neural network
The functional neural network (FNN) model we previously developed was constructed based on the hierarchical structure shown in Fig. 1, where $X$ and $\alpha$ are covariates (e.g. gender) and their corresponding coefficients, respectively. $\beta^{(d)}(s)$ and $\alpha_0^{(d)}$ refer to the functional weight and scalar bias at the d-th hidden layer, which can be estimated by backward propagation in functional form. The term $Z^{(d)}$ refers to the hidden function at the d-th hidden layer, which captures nonlinear and nonadditive effects by applying nonlinear activation functions. Technical details can be found in Zhang et al. [19].
The FNN model has been proven to offer certain advantages for genetic data analysis, including the ability to capture the complex relationships between a single source of genetic variants and disease phenotypes and the flexibility to handle various types of phenotypes. However, when dealing with multiple sources of omics data, the discretized matrices of the omics variant functions must be concatenated as the input in the FNN model. This process may reduce the model's ability to capture correlations between inputs. To overcome this limitation, we propose the multimodal functional DL model, which features a novel network structure that better accommodates multiple inputs.
Multimodal functional DL model
Multiomics datasets, collected from diverse biological features (e.g. genetic variation, gene expression, methylation, etc.), exhibit complementary and heterogeneous properties. A multimodal structure can leverage these properties to exploit the correlations between different data sources and improve prediction performance [22]. As shown in Fig. 2, the model we propose consists of two parts. In the separate training part, we train an FNN model for each modality using the functional data analysis technique to account for the LD effect and to reduce data dimensionality. In the combined training part, we further build a shared representation layer that encapsulates complex correlations between omics features, and construct another feedforward neural network model on the shared representation to model the complex relationship between multiomics and the phenotype of interest, considering possible interactions. The structural details of conventional neural networks and FNN can be found in [23] and [19], respectively.
Specifically, we construct the omics variant function and omics effect function, denoted as $G_{ki}(t_k)$ and $\beta_{ki}(t_k)$, $k = 1, 2$. We then use the discretized forms of these functions as inputs to form separate FNN models with $D_k$ hidden layers $Z^{(1)}_{ki}, \ldots, Z^{(D_k)}_{ki}$ as follows:

$$Z^{(1)}_{ki}(t) = \sigma\Big( \alpha^{(1)}_{k0}(t) + \int \beta^{(1)}_{k}(s, t)\, G_{ki}(s)\, ds + X_{ki}^{\top} \alpha_k \Big), \qquad Z^{(d_k)}_{ki}(t) = \sigma\Big( \alpha^{(d_k)}_{k0}(t) + \int \beta^{(d_k)}_{k}(s, t)\, Z^{(d_k-1)}_{ki}(s)\, ds \Big), \quad d_k = 2, \ldots, D_k,$$

where $X_{ki}$ and $\alpha_k$ represent the covariates and their corresponding coefficients, respectively. The terms $\alpha^{(d_k)}_{k0}(t)$ and $\beta^{(d_k)}_{k}(s, t)$, expanded in the predetermined basis functions $\eta_j$, are parameters that need to be estimated, and $J^{(d_k)}$ is the number of basis functions at the $d_k$-th hidden layer.

Algorithm 1 (training process of MFDL for two omics inputs). Input: $g_1, t_1, g_2, t_2, X_1, X_2, y$. Output: $\hat{y}, W, b$. Initialization: construct genetic variant functions $G_1$ and $G_2$. While the objective function is not converged do: 1: construct a functional neural network (FNN) for $G_1$ and $G_2$ separately; 2: feed forward both FNN models to obtain intermediate outputs.
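A minimal numerical sketch of one such layer is given below. It is my own discretized reading of the layer above, not the released code: the weight surface $\beta(s, t)$ is expanded in a small tensor-product basis and the integral over $s$ is approximated by a Riemann sum, with a polynomial basis standing in for the paper's basis functions and the bias omitted.

```python
import numpy as np

def functional_layer(G, s, t, C, sigma=np.tanh):
    """One discretized functional hidden layer (illustrative).
    G: (n, p) input functions evaluated on grid s (length p);
    C: (J, J) basis coefficients for the weight surface beta(s, t);
    returns the hidden functions Z(t) on grid t (length q)."""
    eta = lambda j, x: np.asarray(x, float) ** j                # polynomial stand-in basis
    Es = np.vstack([eta(j, s) for j in range(C.shape[0])]).T    # (p, J)
    Et = np.vstack([eta(j, t) for j in range(C.shape[1])]).T    # (q, J)
    beta = Es @ C @ Et.T                                        # beta(s_l, t_j), shape (p, q)
    ds = 1.0 / len(s)                                           # Riemann-sum step on [0, 1]
    return sigma(G @ beta * ds)                                 # (n, q); bias term omitted
```

Stacking such layers, with the coefficient matrices C learned by backpropagation, gives one subnetwork per omics input.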
After the forward propagation process through all the hidden layers in each FNN model, we concatenate the outputs of the last hidden layer, $Z^{(D_k)}_{ki}$, $k = 1, 2$, to form a shared representation $Z_{wide,i}$.
Finally, a feedforward NN with $D$ hidden layers is trained on $Z_{wide,i}$ to obtain the fitted phenotype $\hat{y}$:

$$Z^{(d)}_{wide,i} = \sigma\big( w^{(d)} Z^{(d-1)}_{wide,i} + b^{(d)} \big), \quad d = 1, \ldots, D, \qquad \hat{y}_i = f\big( w^{(D+1)} Z^{(D)}_{wide,i} + b^{(D+1)} \big),$$

with $Z^{(0)}_{wide,i} = Z_{wide,i}$, where $w^{(d)}$ and $b^{(d)}$ represent the vector weights and scalar bias at the d-th hidden layer, respectively. The training process of the model is described in Algorithm 1. Specifically, to train the MFDL and estimate the model parameters defined in the previous forward propagation process, we denote the parameters of interest as $(W, b)$, collecting the functional weights and biases of the two FNN subnetworks together with the weights and biases of the shared feedforward layers. To estimate the model parameters, we apply backward propagation with respect to the mean squared error (MSE) loss function regularized by the $L_2$ norm penalty. First, we define the empirical risk function $J(W, b)$ and the penalty term $\Omega(W)$ as follows:

$$J(W, b) = \frac{1}{n} \sum_{i=1}^{n} \big\| y_i - \hat{y}_i \big\|^2, \qquad \Omega(W) = \sum_{d} \big\| w^{(d)} \big\|_2^2 .$$
The regularized loss function $\tilde{J}$ is then defined as

$$\tilde{J}(W, b) = J(W, b) + \lambda\, \Omega(W), \tag{1}$$

where $\lambda$ is the penalty parameter determined by the cross-validation technique. The parameters can be estimated by minimizing the regularized loss function with the gradient descent technique. We iteratively update the parameters based on the following equations until the loss function (1) converges:

$$w^{(d)} \leftarrow w^{(d)} - r\, \frac{\partial \tilde{J}}{\partial w^{(d)}}, \qquad b^{(d)} \leftarrow b^{(d)} - r\, \frac{\partial \tilde{J}}{\partial b^{(d)}}.$$
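A schematic PyTorch sketch of the overall pipeline is given below. It is my own simplification, not the authors' code: plain linear subnetworks replace the functional layers, and the layer sizes are illustrative. It does, however, mirror the structure just described: one subnetwork per omics input, concatenation into a shared representation, an MSE loss with an L2 penalty, and ADADELTA updates supplying the adaptive rate r.

```python
import torch
import torch.nn as nn

class MFDLSketch(nn.Module):
    def __init__(self, p1, p2, hidden=32, shared=16):
        super().__init__()
        # One subnetwork per omics modality (functional layers simplified to Linear).
        self.sub1 = nn.Sequential(nn.Linear(p1, hidden), nn.Tanh(),
                                  nn.Linear(hidden, shared), nn.Tanh())
        self.sub2 = nn.Sequential(nn.Linear(p2, hidden), nn.Tanh(),
                                  nn.Linear(hidden, shared), nn.Tanh())
        # Feedforward layers trained on the shared representation Z_wide.
        self.head = nn.Sequential(nn.Linear(2 * shared, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))

    def forward(self, g1, g2):
        z_wide = torch.cat([self.sub1(g1), self.sub2(g2)], dim=1)  # shared representation
        return self.head(z_wide).squeeze(-1)

def train(model, g1, g2, y, lam=0.1, epochs=1000):
    # weight_decay implements the L2 penalty; Adadelta gives the adaptive learning rate.
    opt = torch.optim.Adadelta(model.parameters(), weight_decay=lam)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(g1, g2), y)
        loss.backward()
        opt.step()
    return model
```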
Here $r$ represents an adaptive learning rate determined by the ADADELTA algorithm [24]. Technical details can be found in Appendix 2.
Compared to the FLM and FNN models, our model provides a more flexible structure that can easily accommodate multiple omics inputs, consider their nonlinear and nonadditive (e.g. interaction) features, and has the advantage of being robust to high dimensionality and high noise levels. The shared representation layer in our model is capable of capturing the correlations between multiple omics inputs and avoids the situation where certain hidden nodes are trained exclusively for one source of omics input. Through the simulation study in section "Results," we show that these two improvements in our proposed model lead to better prediction performance compared to FLM, NN, or FNN.
Simulation studies
Through simulation studies, we evaluate the performance of MFDL for multiomics data analysis and compare it with FLM and FNN. For all simulation studies, to mimic the minor allele frequencies and LD in the real genome, all genotype data were drawn directly from the 1000 Genomes project [25]. We simulated various nonlinear and interactive relationships between the phenotype and omics data to demonstrate the efficiency of MFDL in capturing complex relationships. We also simulated phenotypes in both scalar and vector forms to demonstrate the flexibility of MFDL and introduced various noise levels to show the robustness of MFDL.
Simulation settings
For simplicity, we use two types of omics information: genotype data (i.e. SNPs) as $G_1(t_1)$ and gene expression data as $G_2(t_2)$, while the method can accommodate various types of omics data. For all the simulations, we used real genetic data from the 1000 Genomes project to reflect the real sequencing data structure (e.g. LD pattern and allele frequency). Specifically, we used a 1 Mb region from the genome (Chromosome 17: 7344328-8344327), and randomly chose a 30-kb segment from the 1 Mb region for each simulation replicate to mimic LD patterns and allele frequency distributions from the real genetic data. The minor allele frequency (MAF) of the SNPs in the genome region ranged from 4.50 × 10^-4 to 4.99 × 10^-1, with a distribution highly skewed to rare variants (34.8% of the variants with MAF < .001, 69.1% of the variants with MAF < .01, and 80% of the variants with MAF < .03). We randomly select 200 samples (n = 200) and 100 SNPs ($p_1$ = 100) from the 30 kb segment to construct $G_1(t_1)$. Two cases of gene expression data $G_2(t_2)$, with $p_2 = 1$ and $p_2 = 50$, are generated for 200 samples from multivariate normal distributions with $\mu = 0$, $\sigma^2 = 0.5$ for $p_2 = 1$ and $\mu = (0, \ldots, 0)$ for $p_2 = 50$. We simulate two types of outcomes: scalar and vector. The relationship between the phenotype outcomes and omics data consists of two functions $f_1$ and $f_2$ based on $G_1$ and $G_2$, respectively. Moreover, we consider three types of relationships between omics and outcomes: a linear relationship, a linear relationship with interaction, and a nonlinear relationship.
The linear and nonlinear models are simulated as

$$y_i = f_1\big( G_{1i}(t_1) \big) + f_2\big( G_{2i}(t_2) \big) + \epsilon_i,$$

where $f_1$, $f_2$ are linear/nonlinear functions when the relationships are linear/nonlinear for a scalar response. For a vector response, the fixed coefficients in $f_k(G_{ki}(t_k))$ can be simulated in different dimensions to facilitate a vector-to-vector transformation from $G_{ki}(t_k)$ to $y_{ki}$. For the linear relationship, we take $f_k$ to be a linear functional of $G_{ki}(t_k)$ with coefficients built from $B_{k1}(t_k)$ and $C_k$. For the nonlinear relationship, we define $f_k$ through sinusoidal transformations of the form $\sin\big( a_{kl} s + d^{(2)}_{kl} \big)$ raised to the powers $e$, where $a_{kl}, d^{(2)}_{kl} \sim \mathrm{unif}(-\pi, \pi)$ and $e = \big( \tfrac{1}{3}, \tfrac{3}{2}, 3 \big)$. $B_{k1}(t_k)$ is a predetermined fifth-order B-spline basis function, and $C_k$ is a fixed coefficient matrix whose dimension depends on the data type of $y_i$.
To further evaluate interaction effects in the simulation, we introduced an interaction term into the linear transformation, defined as the inner product of $f_1$ and $f_2$ [26], where $c$ is chosen as a fixed scalar coefficient. The corresponding linear model with interaction is defined as

$$y_i = f_1\big( G_{1i}(t_1) \big) + f_2\big( G_{2i}(t_2) \big) + c\, \big\langle f_1, f_2 \big\rangle + \epsilon_i,$$

where $\epsilon_i$ is generated from a normal distribution with a mean of 0 and various choices of variance. In all three simulations, we randomly divide the samples into a training set of size 160 and a testing set of size 40. To mitigate the risk of random findings, we replicate each simulation setting 200 times and set a maximum of $10^5$ training epochs.

Figure 3. MSE of the three methods under the three relationships (the linear, the interaction, and the nonlinear relationships) and two types of omics data (G-E and G-G).
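For concreteness, a hypothetical data-generation script for the G-E setting with a scalar phenotype is sketched below. Random binomial genotypes stand in for the 1000 Genomes segment actually used in the paper, and the effect sizes and interaction strength c are arbitrary illustrative choices rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p1 = 200, 100
maf = rng.uniform(0.005, 0.5, size=p1)                     # illustrative allele frequencies
G1 = rng.binomial(2, maf, size=(n, p1)).astype(float)      # SNP dosages in place of real data
G2 = rng.normal(0.0, np.sqrt(0.5), size=n)                 # scalar gene expression, p2 = 1

beta = rng.normal(size=p1) / p1                            # arbitrary linear effects
f1, f2 = G1 @ beta, G2
c = 0.5                                                    # illustrative interaction strength
y = f1 + f2 + c * f1 * f2 + rng.normal(0.0, np.sqrt(0.3), size=n)   # var(eps) = 0.3

train_idx, test_idx = np.arange(160), np.arange(160, 200)  # 160/40 split as in the paper
```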
To ensure consistency across all models, we use the $L_2$ penalty for all models, where the regularization parameter $\lambda$ is selected from the set {0.1, 0.3, 1, 3, 10} using the validation technique. We compare the performance of MFDL with FLM and a deep FNN with three hidden layers (FNN-3HL). Two evaluation criteria are employed: the mean square error (MSE) and the RV correlation coefficient between the predicted values $\hat{Y} = (\hat{y}_1, \ldots, \hat{y}_n)$ and the true values $Y = (y_1, \ldots, y_n)$, defined in the two equations below:

$$\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \big\| y_i - \hat{y}_i \big\|^2, \qquad \mathrm{RV}(Y, \hat{Y}) = \frac{\mathrm{tr}\big( Y Y^{\top} \hat{Y} \hat{Y}^{\top} \big)}{\sqrt{ \mathrm{tr}\big( (Y Y^{\top})^2 \big)\, \mathrm{tr}\big( (\hat{Y} \hat{Y}^{\top})^2 \big) }} .$$

The RV correlation coefficient is a multivariate generalization of the squared Pearson correlation coefficient proposed by Robert and Escoufier [27].
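A small helper computing the two criteria is sketched below; the MSE is standard, and the RV coefficient follows the usual Robert-Escoufier definition on column-centered matrices (the exact centering convention used in the paper is an assumption on my part).

```python
import numpy as np

def mse(Y, Y_hat):
    return float(np.mean((np.asarray(Y, float) - np.asarray(Y_hat, float)) ** 2))

def _center(M):
    M = np.asarray(M, dtype=float).reshape(len(M), -1)   # (n, q); scalar phenotypes give q = 1
    return M - M.mean(axis=0)

def rv_coefficient(Y, Y_hat):
    Y, Y_hat = _center(Y), _center(Y_hat)
    S, T = Y @ Y.T, Y_hat @ Y_hat.T                      # n x n configuration matrices
    return float(np.trace(S @ T) / np.sqrt(np.trace(S @ S) * np.trace(T @ T)))
```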
Simulation 1
In the first simulation, our aim is to evaluate three types of underlying relationships, a linear, a linear with interaction, and a nonlinear relationship, between different omics inputs and a scalar phenotype, with a fixed noise level (i.e. $\mathrm{var}(\epsilon_i) = 0.3$). We explore two types of omics input data: (i) $G_1(t_1)$ as a vector and $G_2(t_2)$ as a scalar to mimic Genetic and Gene Expression data (G-E), and (ii) both $G_1(t_1)$ and $G_2(t_2)$ as vectors to mimic Genetic and Genetic data (G-G). The primary distinction between treating omics data as vectors or as functional data is whether we account for information (e.g. LD) from neighboring genetic variants. For an omics input treated as a functional input in MFDL, we apply beta-smoothing to the weight parameter in the input layer and vector-to-vector transformations in the hidden layers. We assess the model performances across six scenarios with the two types of omics data and three transformation functions, defined as in equations (4)-(8). The results of these six scenarios are shown in Figs 3 and 4.
In Figs 3 and 4, the first row depicts the performance of the three methods under the various relationships (i.e. a linear, a linear with interaction, and a nonlinear relationship) for the G-E data, while the second row summarizes the results for the G-G data.

Figure 4. RV correlation coefficients of the three methods under the three relationships (the linear, the interaction, and the nonlinear relationships) and two types of omics data (G-E and G-G).

In the linear setting (left panels of Figs 3 and 4), MFDL and FLM have comparable performance and outperform FNN-3HL for all input data types. When there is an interaction between the omics data (middle panels of Figs 3 and 4), the MFDL model attains higher accuracy than FLM and FNN-3HL in terms of MSE and RV correlation coefficients, particularly in the G-G setting. In cases of nonlinear relationships (right panels of Figs 3 and 4), MFDL again achieves the highest accuracy across all models for both types of omics input data. Overall, the findings suggest that the proposed MFDL model excels at capturing complex nonlinear and nonadditive relationships between outcomes and multiple omics data, while it attains comparable performance to the other two methods in simpler scenarios (e.g. the linear relationship).
Simulation 2
In the second simulation, we compare the performance of the three methods across different phenotype types (i.e. scalar and vector) under three types of underlying relationships with the G-E omics data. The noise level is set at 0.3 (i.e. $\mathrm{var}(\epsilon_i) = 0.3$). For this setup, we generate $G_1(t_1)$ as functional data with $p_1 = 100$, while $G_2(t_2)$ is simulated as a scalar. Two types of phenotypes $y$ are simulated: a scalar and a vector of dimension 50.
Similar to the results in simulation 1, MFDL attains better than, or at least comparable, performance to FLM and FNN-3HL across different phenotype types, underlying relationships, and omics data. Additionally, both MFDL and FLM outperform FNN-3HL under the linear relationship (left panels of Figs 5 and 6), while both MFDL and FNN-3HL outperform FLM with the vector phenotypes and nonlinear relationship (bottom-right panels of Figs 5 and 6).
Simulation 3
In the third simulation, we evaluate the robustness of the three methods with increasing levels of noise, mimicking the high noise-to-signal ratio in real-world multiomics data. Specifically, we simulate three noise levels: 0.3, 0.45, and 0.6 (i.e. $\mathrm{var}(\epsilon_i) \in \{0.3, 0.45, 0.6\}$). In this simulation, we considered scalar phenotypes, the linear relationship with interactions, and both the G-G and G-E omics data settings. The omics input data are generated as described in simulation 1. Figures 7 and 8 show that the proposed MFDL model achieves the smallest MSE and the highest RV correlation in all six scenarios, indicating the robustness of the MFDL model against various noise levels.

Figure 5. MSE of the three methods under the three relationships (the linear, the interaction, and the nonlinear relationships) and two types of phenotypes (scalar and vector phenotypes).
Figure 6. RV correlation coefficients of the three methods under the three relationships (the linear, the interaction, and the nonlinear relationships) and two types of phenotypes (scalar and vector phenotypes).
In conclusion, through three simulations, we demonstrate the MFDL model's ability to capture complex relationships between different types of phenotypes and multiomics data. With the advantage of the multimodal structure, MFDL provides a more effective way of capturing the latent features from various omics data and modeling the interaction effects between multiple omics. Additionally, our proposed model demonstrates robustness against various noise levels and high-dimensional omics and phenotype data.
Real data application
Alzheimer's disease (AD) is a progressive neurodegenerative disorder characterized by its complex and multifactorial nature. Although numerous studies have explored the role of various omics data in AD, the combined effects of multilevel omics data remain underevaluated. In this study, we undertake an integrative analysis of DNA sequencing and gene expression data derived from the AD Neuroimaging Initiative (ADNI) project. ADNI is a multisite study designed to evaluate clinical, imaging, genetic, and biospecimen biomarkers across the spectrum from normal aging to early mild cognitive impairment (MCI) and AD. DNA samples from 808 participants were subjected to non-CLIA whole-genome sequencing (WGS) at Illumina.
For our phenotype of interest, we focus on hippocampal volume changes observed in structural MRI scans, a critical marker for Late-Onset AD (LOAD), examining the contributions of genetic, gene expression, and biomarker variations to these changes over time.
Before analysis, per-individual quality control (QC) and per-marker QC [28] were implemented. The per-individual QC excludes samples with a high proportion of missing genotypes or samples related to other individuals. The per-marker QC excludes SNPs with an insufficient proportion of successful genotype calls, markers that show significant deviation from Hardy-Weinberg equilibrium (HWE), or markers with a very low minor allele frequency.
Predicting hippocampus volume change over time with Apolipoprotein E (APOE) genotype, gene expression, and biomarker data
To prepare the multiomics dataset for the analysis, we select three omics inputs: APOE genotypes, APOE gene expression levels, and the biomarker "Aβ-42" [29], which are recognized to affect AD pathology. We extracted the APOE genotypes, corresponding gene expression levels, and biomarker Aβ-42 for all 808 participants from the ADNI dataset. These omics data were then integrated with longitudinal measurements of hippocampal volume derived from the ADNI structural MRI data. Participants who had only a single hippocampal volume measurement were excluded, resulting in a final dataset comprising 370 individuals and 1456 hippocampal volume measurements, along with the participants' ages at each visit.
We applied four methods to the omics dataset from the ADNI, including FLM, a DL model, FNN-3HL, and the proposed MFDL. These methods were used to investigate the combined effects of APOE genotypes, gene expression, and the biomarker Aβ-42 on the hippocampus volume change over time. In FLM, the omics inputs are modeled as three separate terms, as detailed in section "Functional Linear Model." The DL model treats the three data matrices in vector form and concatenates them column-wise prior to training the neural network. For the FNN-3HL model, genotype data are modeled in a functional form, and the discretization of the genetic variant function is then combined with the gene expression data and biomarker data for model fitting. For MFDL, the three omics inputs are trained independently to construct the shared information layer.
The phenotype comprises two or more observations of hippocampus volume change per participant, taken during their visits, which is insufficient to construct a function across these points. Consequently, the phenotype is treated in vector form, and the patients' age at the time of their first visit is used as a covariate in all models. Similar to the simulation studies, 278 patients were randomly selected to train the models, while the remaining 92 patients were used as the test set. The models are evaluated and compared using the MSE, MAE, and RV correlation coefficients between the observed and predicted phenotypes. To mitigate the effects of random data splitting, we repeated the process 200 times for each model with three-fold cross-validation on the training set.
Figure 9 shows that the MFDL model achieves superior performance compared to the other methods on the test set on all three criteria (MSE, MAE, and RV correlation). Moreover, compared to deep FNN models, our proposed MFDL model exhibits considerably less overfitting when comparing the performance between the training and test sets.
The effect of APOE-ACE interaction on predicting hippocampus volume change over time
In this part of the study, we examine a gene-gene interaction related to hippocampus volume change over time. We consider ACE, which has been previously identified as having a strong association with LOAD and exhibits gene-gene interactions with the APOE4 allele status [30]. By applying our methods to the ADNI dataset, we aim to evaluate the impact of the interaction between APOE and ACE on the prediction of hippocampal volume changes over time. ACE genotypes were sourced from the ADNI dataset. Following the same data processing as in section "Predicting hippocampus volume change over time with Apolipoprotein E (APOE) genotype, gene expression, and biomarker data," a total of 625 samples with 1250 hippocampus volume measurements were retained for analysis. We extracted SNPs from the APOE and ACE genotypes, along with their SNP location information, from the ADNI dataset; these were modeled as functional inputs in the FLM, FNN, and MFDL. Similar to the previous analysis, we repeated the modeling process 200 times for each model with three-fold cross-validation on the training set.
Figure 10 shows that our proposed method surpasses the existing FLM, DL, and FNN in terms of testing MSE, MAE, and RV correlation coefficient performance. Additionally, the difference between the training and testing results may suggest that DL and FNN-3HL are susceptible to overfitting. In contrast, our proposed model exhibits robust performance. Compared to the data in section "Predicting hippocampus volume change over time with Apolipoprotein E (APOE) genotype, gene expression, and biomarker data," the dimensionality of the two genotype inputs, and possibly the noise level, is higher; nevertheless, MFDL still consistently captures the gene-gene interaction and is less prone to the overfitting issue.
Discussion
In this paper, we introduce a novel multimodal functional DL method for the analysis of high-dimensional multiomics data. The proposed MFDL method uses the hierarchy of neural networks to learn complicated features from omics data, making it more powerful for modeling complex relationships (e.g. interactions between omics) than traditional methods, such as FLM. By modeling effects as a function in the form of a combination of basis functions, MFDL is able to take information from nearby markers into account and reduce model complexity, providing more robust performance than DL for high-dimensional omics data analysis. By using a shared representation layer, the MFDL model is flexible enough to handle different types of omics data. Unlike existing methods, such as FNN, MFDL uses subnets to model each omics data source and models their complex relationships based on the shared representation layer. Such a strategy not only provides flexibility to accommodate different data types (e.g. functional data versus nonfunctional data) but also reduces the complexity of the network structure.
Through simulation studies and real-world data applications, the proposed model has demonstrated superiority over both the FLM and FNN models in scenarios where multiomics data exhibit complex relationships (e.g. nonlinear relationships and interactions). The MFDL model also exhibits robustness in scenarios with increasing noise levels or high-dimensional data. In comparison with the FNN model, which is prone to overfitting under certain conditions, our proposed MFDL model is more adept at handling multiomics data.
In a traditional feedforward neural network with fully connected layers, when multiple inputs are merged to train the model, the hidden nodes in the network tend to become exclusively attuned to one type of input. For example, in the case of a two-omics-input scenario, after certain training iterations, some hidden nodes may predominantly relate to the first type of input, while others are more closely associated with the second type of input. This tendency presents challenges for the neural network in capturing strong interactions between omics data, which can result in poor performance. Although FNN leverages the smoothness of genetic information to enhance predictive performance, it still struggles to identify a functional transformation that encapsulates the internal relationships among multiple types of inputs. The multimodal structure with its shared representation layer offers an effective solution for modalities that have latent interactions, as demonstrated by multimodal DL methods applied to video-audio datasets [31]. The adaptability of its structure, combined with the robustness of the shared representation layer, positions our proposed model as a useful tool for modeling multiomics data.
MFDL can also be further extended to consider functional phenotypes (e.g. imaging and time-dependent phenotypes). For a single genetic input, FNN addresses this issue by converting vector-to-vector transformations between hidden layers into function-to-function transformations. However, for multiple omics inputs, the main challenge faced by FNN is fitting a function on both functional (e.g. SNPs) and nonfunctional data (e.g. gene expression), for which almost no basis system is suitable for both data types. Moreover, FNN becomes problematic in defining a function from the shared representation since the location information is not unified across different omics data. Consequently, our proposed model faces the same challenges when dealing with functional phenotypes. While a simple solution is treating a functional phenotype as a vector, exploring alternative strategies that incorporate additional information (e.g. location and temporal information, and networks) is worthwhile for future investigation. Statistical testing building on MFDL holds great promise for rigorously evaluating the complex associations between multiomics and the phenotype of interest and for result interpretation. This represents an important avenue for further research.
Key Points
• We develop an MFDL approach to model the complex relationships between multiomics and disease phenotypes.
• The MFDL approach imposes a hierarchical structure of deep neural networks using the individually trained
Figure 1. The hierarchical structure of FNN with D hidden layers.
Figure 2. The hierarchical structure of MFDL with two omics inputs.
$\alpha^{(d_k)}_{k0}(t)$ and $\beta^{(d_k)}_{k}(s, t)$ denote the functional bias and weights at the $d_k$-th hidden layer of the FNN model, respectively. The function $\sigma$ represents the activation function for the hidden layers, while the $f$ function on the output layer is the linear link function.
Algorithm 1. Training process of MFDL for two omics inputs.
Figure 7. MSE of the three methods under three noise levels (0.3, 0.45, and 0.6) and two types of omics data (G-E and G-G).
Figure 8. RV correlation of the three methods under three noise levels (0.3, 0.45, and 0.6) and two types of omics data (G-E and G-G).
Figure 9. Prediction of the change of hippocampus volume using APOE genotypes, gene expression, and the biomarker Aβ-42.
Figure 10. Prediction of the change of hippocampus volume by considering an interaction between APOE and ACE.
$w^{(d)}$ and $b^{(d)}$ represent the vector weights and scalar bias at the d-th hidden layer, respectively. It is worth mentioning that the proposed MFDL framework is flexible enough to fit various types of inputs depending on the nature of the omics data. For instance, if the input is a scalar (e.g. gene expression of a single gene) or multivariate with low dimensions (e.g. gene expression data of a limited number of genes), which are not suitable for functional smoothing, those functional weights, | 7,393.4 | 2024-07-25T00:00:00.000 | [
"Computer Science",
"Biology",
"Medicine"
] |
Configuring A Mini-Laboratory and Desktop 3-Axis Parallel Kinematic Milling Machine
INTRODUCTION
Globalization and the shortening of products' life cycles have caused dramatic changes in the configuring and designing of new products. The customization of products has become a trend in recent years [1] to [3]. Consequently, the customization of the configuring process represents a critical issue that can be addressed by adapting interfaces to CAD/CAM systems, thereby using the web interface to rapidly configure a new product.
The application of the web interface as a link to the CAD system is currently widely used in the development of various products [4] to [6]. Examples of some of the available CAD configurators of standard components from different manufacturers are shown in Fig. 1. Available CAD configurators can provide standard, but not custom-made components, which are also required for the process of designing product families. Programming customized configurators for a family of particular components is possible using programmable web interfaces, such as Pro/Web.Link in the Creo CAD/CAM system. This approach is applicable at the component and assembly levels.
The conceptual design of product families has inspired many research efforts [7] to [9]. The state-of-the-art methods can be classified into two main categories: scalable and configurational product family design [10].

Fig. 1. The choice of standard components in CAD configurator [4] and [6]

Scalable product family design refers to the definition of scaling variables (parameters) that are used to scale product components/subassemblies in one or more directions in order to address a variety of customer requests [10] and [11]. The second approach is configurational product family design (also known as module-based product family design), in which the family members are configured by adding, removing, or substituting one or more functional modules from the initially developed modular product [12]. The application of programmable web interfaces in the design of a product family is an emerging field of research.
In [6] the authors use Pro/Web.Link and the Pro/Engineer CAD/CAM system for designing a family of trailers. A similar approach presented in [8] investigates configuring a family of products starting from a low-level skeleton model. A lamp and a golf cart were the examples considered.
In this paper, the application of CAD configurators with web interfaces in configuring the components and assemblies of new machine tools is investigated. Compared to the examples presented in [7] and [8], a machine tool is a highly complex product in which the kinematics of the machine is a critical issue. Therefore, the conceptual (skeleton) model of a machine tool should incorporate all the kinematic joints that will enable the kinematic verification of the mechanism.
The main benefits of the web interface application for configuring new machine tools can be expressed as follows:
• Interactive decision-making that is based on the configuration of the machine tool and its settings according to the specific input data based on user requirements, database access, standards, recommendations, functional requirements, etc.;
• Automation of machine tool modelling and configuration, with new input parameters that can be interactively set;
• Use of a large number of standard components and generation of machine configurations that can be easily remodelled;
• Use of CAD configurators with the web interface for unique components;
• Verification of the machine virtual prototypes through simulation of mechanism kinematics during machining according to a control program (this is enabled by the incorporation of all kinematic joints into the machine conceptual model);
• Improved efficiency for machine designers, in order to reduce the time required to configure the new product.
Finished solid models of the standard components with the desired dimensions, based on a query filled in by the user, can be downloaded. Downloaded standard components need not be developed further at all; they are directly built into the main assembly. This approach is applicable in configuring new machine tools [13]. In this paper, the web configurators, web interface, CAD/CAM systems, and Pro/Web.Link are utilized for the development of a mini-laboratory and desktop 3-axis parallel kinematic milling machine (PKMM).
PKMM is a research-and-development topic in many laboratories [14] and [15], although many of them, unfortunately, do not in fact have a PKMM. Therefore, the use of a mini-laboratory and desktop educational 3-axis PKMM has been suggested as an aid in the process of acquiring basic experience with a PKMM [16] to [18]. Research works that consider diverse aspects of PKMM have been published [19] and [20].
The simulation of the machining process created in this paper includes the simulation of machine operation based on a program generated in a CAD/CAM system. However, since the mini-laboratory machine is used for the machining of workpieces of soft materials (Styrofoam), the simulation does not include finite element (FE) analysis of cutting forces, as in [21] and [22].
The rest of the paper is organized as follows. In Section 1, a general methodology for the application of CAD configurators, Pro/Web.Link, and the top-down approach in configuring new products is presented. This methodology was applied for designing the mini-laboratory and desktop 3-axis PKMM in Section 2. We have developed two virtual prototypes, one of which was implemented in the real world. In Section 3, the configured virtual prototypes are used for the verification of the machining program and off-line programming system using machining simulation in the CAD/CAM environment, which was possible because the skeleton model of the virtual prototypes had all kinematic joints incorporated. During the simulation, PKMM tool paths were based on programs that were created using CAM systems [23]. In Section 4, the test workpieces that were used for verifying the control and programming system on a real-world laboratory prototype are presented.
CONFIGURING BY PRO/WEB.LINK AND TOP-DOWN APPROACH
Pro/Web.Link links the internet to Creo Parametric, enabling the use of the web as a tool to automate and streamline parts of the engineering process [24] and [25]. This paper describes the implementation of simple automation solutions using Pro/Web.Link in configuring a family of parts or assemblies. Pro/Web.Link is a set of routines, protocols, and tools that can change and adjust parts and assemblies in the Creo or Pro/Engineer CAD/CAM systems. An embedded web browser in the Creo CAD/CAM system improves communication with parts and assemblies in Creo, allowing researchers to concentrate on the process of configuring a new product based on available modules. Pro/Web.Link allows users to rapidly obtain CAD models of components.
The application of Pro/Web.Link provides direct access to information about the model.Moreover, the designer can create, modify, or delete any information regarding the model.
Traditionally, machines have been designed using a classic approach, from smaller to larger (bottom-up), i.e., from the components to the main assembly, Fig. 2a. In this case, the necessary information for the main assembly depends significantly on the selected components. Opposite to the bottom-up approach, the top-down approach, [26] and [27], uses system analysis as a method for project management. The structure and basic logic of the top-down approach are shown in Fig. 2b. In this approach, all the data are located at the top level and dictate the essential information needed for components. The result is a component that fits perfectly into the main assembly and requires very little modification later.
The essential features of the top-down approach are: (1) a method for placing critical information at a high-level location, (2) communicating that information to the lower levels of the product structure, and (3) capturing the overall design information at one centralized location [26].The main goals achieved by the top-down approach are: cycle time reduction, increased user satisfaction with software, design efficiency increase, and cost reduction.
The top-down design strategy is the most common method currently used by industry. This design process is conducted from the system to the subsystem, then to the sub-subsystem, and eventually to the part, as shown in Fig. 3. The advantage of the top-down design approach is that inter-linkages from one sub-system to another can be correlated [8].
Fig. 3. Top-down design strategy for product development [8]

The main assembly (ASM1) is organized like a tree according to subassemblies (ASM2, ASM3, etc.) and components (P1, P2, P3, etc.), which are allocated to specific project teams and designers who are responsible for their own tasks and who do not and should not view the project as a whole.
The use of a skeleton model is a powerful method for implementing top-down design. The skeleton model is a simplified assembly, with zero mass and geometry with features such as outer surface contours, parting lines, hole locations, etc. Although it typically contains only non-solid geometry (surfaces, planes, curves, axes, etc.), there is no restriction on what kind of features can be put into a skeleton.
DEVELOPMENT OF THE MINI-LABORATORY AND DESKTOP 3-AXIS PKMM
Previous experience in the field of PKMM and a successfully developed first experimental prototype of a vertical milling machine based on a newly developed parallel mechanism, [19] and [20], inspired the idea of developing a mini-laboratory and desktop 3-axis PKMM.
The structure of the mechanism, the modelling approach, inverse and direct kinematics, workspace and singularity analysis of the developed mini-laboratory and desktop 3-axis PKMM, as well as the control and programming systems, have been described in previous research [16] to [18].
A representation of the initial model of the developed parallel mechanism and analytically obtained workspace are shown in Fig. 4.
The mechanism consists of the moving platform, three joint parallelograms, c1, c2, and c3, and a stationary base with two parallel guideways. Two crossed parallelograms (c1 and c2) with spherical and/or universal, i.e. cardan, joints are connected with one of their ends to the mobile platform, and with their other ends to the independent sliders (p1 and p2), which, with a common guideway, make two powered and controlled translatory joints. The third joint parallelogram (c3) is connected with one of its ends, through passive translatory-rotating joints with 2-DOF, to the moving platform. Its other end is connected with rotating joints to the slider p3, which makes, with the second guideway, the third powered and controlled translatory joint. The actuation of sliders p1, p2, and p3 offers three degrees of freedom to the moving platform, i.e., the tool, so that the platform retains a constant orientation in its motion through space [16] to [18]. Since this machine has guideways in a parallel position, the workspace extension is achieved by elongation of one axis (the x-axis).
Configuring the mini-laboratory and desktop 3-axis PKMM is done in the Creo CAD/CAM system, using the top-down approach, Pro/Web.Link, and the web CAD configurator for the standard components.
The CAD model of the developed mini-laboratory and desktop 3-axis PKMM is shown in Fig. 5.
The Application of Top-Down Approach
Using the top-down approach, all of the crucial information is at the highest level, from where it is forwarded to the lower levels. During configuring, the skeleton model of the parallel mechanism (Fig. 6) is used to define a plan for the integration of parts/subassemblies into the final assembly of the machine (completing the machine project). The skeleton model (Fig. 6) of the parallel mechanism contains all the crucial parameters of the mechanism and the kinematic relations between moving components. The following kinematic relations are used: (1) a slider joint for moving the three sliders (p1, p2, and p3) and the passive translational joint, (2) a spherical joint (ball) for coupling the two joint parallelograms (c1 and c2) with the platform and sliders (p1 and p2), and (3) rotary axes (pins) for the passive rotating joint c3. An illustration of the application of the skeleton model is shown in Fig. 7.
Components and sub-assemblies are integrated into the skeleton model, and, as a result, a parallel mechanism ready for further assembly is obtained. The main machine assembly (ASM0) is organized like a tree, Fig. 8. A large number of standard and commercially available components (Igus [4], NSK [5], Bosch [6]) have been used. The dominant axis (x-axis) of the parallel mechanism pn101 can be elongated, which makes the mechanism suitable for the design of a family of machines with different lengths of the x-axis. This section shows how the family of supporting structures and guideways can be easily configured using the CAD configurator (Pro/Web.Link) for the design of the machine family. Fig. 9a presents the web application interface of the CAD configurator for the example of the supporting structure. The configurator for Igus guideways is shown in Fig. 9b. These guideways are used to guide the sliders (p1, p2, and p3) and for the passive translatory joint. Based on the configurators from Fig. 9, a family of generic modules for configuring the family of machines with different lengths of the x-axis has been obtained, in which the family parameter is the length (L).
Virtual Prototypes
The first low-cost, educational, desktop 3-axis PKMM presented in [16] to [18] was physically realised and was named pn101_st V1. This paper presents a new version (pn101_st V1.5) with small modifications, which include a protection cover as well as a trunk for chips, Fig. 10.
This version is configured from available components for the realization of the first prototype. Since this is a mini-laboratory and desktop machine, its main components (step motors, leadscrews, sliding guideways, joints, etc.) can be easily procured, and all other components can be built in a laboratory.
In addition to the virtual prototype presented in Fig. 10, a virtual prototype of a new version of the machine, called pn101_st V2.5 (Fig. 11), has been developed and realized. This prototype is configured using as many standard components as possible, and it is intended for commercial purposes. The pn101_st V1 model is an educational mini-laboratory and desktop 3-axis PKMM whose main components can be purchased commercially; the remaining components can be easily built in the laboratory.
For standard components, some of the common CAD configurators [4] to [6] available on the web have been used; components were downloaded in STEP format, Fig. 1. These components can be easily loaded into any CAD environment and integrated into the required position in the skeleton model.
Upon analysing the virtual prototypes from Figs. 10 and 11, the following details can be observed:
• both prototypes have the same built-in parallel mechanism: pn101;
• the first version (V1.5) is configured so that it can be made of currently available components with a minimum investment, in order to obtain the first prototype;
• the second version (V2.5) is configured with as many standard components as possible, in order to obtain a virtual prototype of a machine that could be easily homemade; the original idea was to use the machine for educational purposes;
• spherical joints in the first prototype are homemade, while in the second standard Igus spherical joints have been used [4];
• guideways of the passive translatory joint are cylindrical in the first prototype, while they are standard square Igus guideways [4] in the second prototype;
• the supporting structure of the first prototype is made of welded steel profiles, while in the second prototype it is made of standard Bosch Rexroth aluminium profiles and connecting elements [6];
• leadscrews in the first prototype have a common metric thread, while in the second prototype ball screws were used;
• in both versions, protecting the workspace with transparent Plexiglas on three sides is planned; the front doors are also transparent and adapted to the form of the supporting structure. Below the machine is a container for the gathering and disposal of chips.
Both virtual prototypes have the same parallel mechanism with identical primary parameters, designed with different components. The first prototype (pn101_st V1.5) is designed to use available components, while the second prototype (pn101_st V2.5) is designed to use standard components. The second prototype is planned for further commercial development.
MACHINING SIMULATION IN THE CAD/CAM SYSTEM
The configured virtual prototypes are used for the verification of the programming system in a CAD/CAM environment by machining simulation based on the generated tool path, which also includes machine simulation. This machining simulation is critical in order to: (i) configure the off-line programming environment, (ii) verify the program before machining, (iii) detect collisions of the parallel mechanism during program execution, and (iv) verify the position of the workpiece within the workspace of the parallel mechanism.
Machining simulation by running the program is possible thanks to the applied modelling of the parallel mechanism with all kinematic connections between the components, which allows the motion of a virtual model as a system of rigid bodies.
Fig. 12 shows a detailed virtual prototype of the parallel mechanism with all kinematic relationships defined in the same way as shown in the skeleton model in Fig. 6.
Fig. 12. CAD model for the simulation of mechanism kinematics
This assembly enables the motion of the models within the range defined for each connection, which is of particular importance for the identification of possible collisions during operation of the parallel mechanism.
Machining simulation of the virtual prototype allows the motion of the movable segments with a tool at the end. The tool path is the result of the execution program obtained by programming using the CAD/CAM system. The machine is programmed in a programming format based on G code.
Although this is a parallel kinematic machine, the same resources are used for its programming as for a machine tool with serial kinematics. Post-processing is done as for a 3-axis vertical milling machine. The postprocessor is configured using the postprocessor generator in the Creo CAD/CAM system. The equations of direct and inverse kinematics are incorporated into the control system for this machine [16] and [17].
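To make the role of the incorporated kinematics concrete, the sketch below shows how a postprocessed Cartesian target from the G-code program might be mapped to the three leadscrew (joint) coordinates. It assumes a simplified linear-delta-like arrangement (three vertical sliders connected to the platform by fixed-length rods); the actual pn101 geometry and its kinematic equations from [16] and [17] are not reproduced here, so the tower positions, rod length, and function names are illustrative assumptions only.

```python
import math

# Illustrative geometry (NOT the real pn101 parameters): three vertical
# leadscrew axes placed on a circle of radius R_BASE, each connected to the
# moving platform by a rod of fixed length ROD_LEN.
R_BASE = 120.0   # mm, radius of the tower circle (assumed)
ROD_LEN = 250.0  # mm, fixed strut length (assumed)
TOWER_ANGLES = [90.0, 210.0, 330.0]  # deg, angular positions of the towers

def inverse_kinematics(x, y, z):
    """Map a Cartesian tool position (mm) to the three slider heights (mm).

    For each tower at (ax, ay), the slider height q must satisfy
    (q - z)^2 + (ax - x)^2 + (ay - y)^2 = ROD_LEN^2.
    """
    joints = []
    for ang in TOWER_ANGLES:
        ax = R_BASE * math.cos(math.radians(ang))
        ay = R_BASE * math.sin(math.radians(ang))
        horiz_sq = (ax - x) ** 2 + (ay - y) ** 2
        if horiz_sq > ROD_LEN ** 2:
            raise ValueError("point (%.1f, %.1f, %.1f) is outside the workspace" % (x, y, z))
        joints.append(z + math.sqrt(ROD_LEN ** 2 - horiz_sq))
    return joints

# Example: the programmed zero point of the scaled ISO test workpiece.
print(inverse_kinematics(0.0, 0.0, 0.0))
```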
For the first test, a scaled ISO test workpiece with dimensions of 50×50×12.5 mm is used. Because of the particular shape and size of the workspace of parallel kinematic machines, attention should be paid when setting up a workpiece, which must be within the limits of the workspace of the machine. For the test workpiece shown in Fig. 13a, the zero point in the middle of the underside of the workpiece has been adopted, with the coordinate axes x, y, z as used on a vertical 3-axis milling machine, marked as MACH_ZERO. An identical zero point (MACH_ZERO) exists on the machine (on the working table) on which the workpiece is placed, Fig. 13c. Matching these two coordinate systems is accomplished by setting the workpiece on the machine during the machining simulation. Fig. 13b presents the simulated tool path on the scaled ISO test workpiece, based on the generated CL file. The tool coordinate system is defined in the same way as the workpiece coordinate system and is marked as TOOL_POINT (Figs. 13b and d).
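As a minimal illustration of how the two coordinate systems are matched, the snippet below shifts tool positions taken from the CL file (expressed relative to the workpiece MACH_ZERO) into machine coordinates by adding the work offset measured when the workpiece is clamped on the table; the offset value used here is hypothetical.

```python
# Hypothetical work offset: position of the workpiece MACH_ZERO measured in
# machine coordinates when the workpiece is clamped on the table (mm).
WORK_OFFSET = (10.0, -5.0, 40.0)

def to_machine_coords(cl_points, offset=WORK_OFFSET):
    """Translate CL-file points from workpiece to machine coordinates."""
    ox, oy, oz = offset
    return [(x + ox, y + oy, z + oz) for (x, y, z) in cl_points]

# Two points of a simulated tool path on the 50 x 50 x 12.5 mm test workpiece.
print(to_machine_coords([(0.0, 0.0, 12.5), (25.0, 25.0, 0.0)]))
```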
During the simulation of tool paths, a complete prototype of the virtual machine can be included in the simulation, with a machine play option. An example of machine simulation for virtual prototype pn101_st V2.5 is shown in Fig. 14 for an ISO test workpiece. Fig. 15 presents the second test with a machining simulation of the virtual prototype of the real-world machine pn101_st V1.5. For the machining test and for verifying the control and programming system, a non-standard test workpiece with a grid of slots was chosen [28]. This type of workpiece is used because linear interpolation represents a significant test for a parallel kinematic machine; a linear motion is perhaps one of the most difficult motions that parallel kinematic machines can perform.
Based on the realized simulations of the machine virtual prototypes according to the running programs, no collision between the machine elements was observed during program execution. Accordingly, we can state that these tests have successfully verified the programming system and the G-code programs for both machine virtual prototypes. For the prepared programs, the workpiece is set correctly within the workspace boundaries, and workpiece machining can be performed without collision.
MACHINING TEST
The model pn101_st V1.5 mini-laboratory and desktop 3-axis PKMM has been built and tested in our laboratory. Since it is an educational system with complex kinematics, a virtual machine is also included in the control and programming system [17]. A virtual machine configured in the Python object-oriented programming language is implemented in the control system's enhanced machine controller (EMC2) in the Axis graphical user interface (GUI) [29] for program simulation and verification, Fig. 16. The completely realized version of the machine (pn101_st V1.5) during real-world machine verification is shown in Fig. 17a. Both test workpieces considered in the machining simulation were used for verifying the control and programming system. The machined ISO test workpiece is shown in Fig. 17b, and the machined non-standard test workpiece with a grid of slots is shown in Fig. 17c.
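The actual Python virtual machine embedded in EMC2 is not reproduced here; as a hedged sketch of the same kind of program verification it performs, the code below steps along the linear moves of a program, evaluates the illustrative inverse kinematics from the earlier sketch at interpolated points, and reports any point that leaves the workspace or exceeds an assumed slider travel. It reuses `inverse_kinematics` from the previous sketch; all limits are assumptions.

```python
import math

def verify_program(segments, step=1.0, travel=(0.0, 300.0)):
    """Check interpolated points of linear moves against joint limits.

    segments: list of ((x0, y0, z0), (x1, y1, z1)) linear moves in machine
    coordinates. travel: assumed admissible slider range in mm (illustrative).
    """
    problems = []
    for start, end in segments:
        length = math.dist(start, end)
        steps = max(1, int(length / step))
        for k in range(steps + 1):
            t = k / steps
            point = tuple(s + t * (e - s) for s, e in zip(start, end))
            try:
                joints = inverse_kinematics(*point)  # from the earlier sketch
            except ValueError as err:
                problems.append(str(err))
                continue
            if any(q < travel[0] or q > travel[1] for q in joints):
                problems.append("joint limit exceeded at %s" % (point,))
    return problems

# Verify one linear pass of a slot of the non-standard test workpiece.
print(verify_program([((0.0, 0.0, 10.0), (40.0, 0.0, 10.0))]))
```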
The dimensions of the workpieces are set according to the dimensions of the machine workspace. These two test workpieces (Figs. 17b and c) were made of Styrofoam. In both cases, a flat end mill (diameter 3 mm) was used. These experiments confirmed that it is possible to realize a low-cost mini-laboratory and desktop 3-axis PKMM for workpieces of light materials and lower tolerances, which can be directly used by students, CNC machine tool programmers, and operators. For this machine, we plan to upgrade the control system in the future so that it can be utilized for research on and application of the new programming method known as STEP-NC [30]. Furthermore, we will consider upgrading the existing parallel mechanism by adding a two-axis serial head on the platform, so that the machine tool can be used for five-axis machining.
CONCLUSIONS
In order to contribute towards the acquisition of practical experience in configuring, designing, controlling, programming, verifying, and using a PKMM, we have developed a mini-laboratory and desktop 3-axis parallel kinematic milling machine.
Two versions of the mini-laboratory and desktop 3-axis PKMM have been considered. The first version of the machine was fully realized, while the second represents a project for possible commercial development for education. Both machines use the same parallel mechanism, which is incorporated into the skeleton model. The difference in machine design lies in the components that are built into the skeleton model using specially developed and standard CAD configurators. In essence, this is a top-down approach, since we start from the basic idea represented by the machine skeleton, which is further developed depending on the available components and desired machine parameters.
The concept of the mini-laboratory and desktop 3-axis PKMM was verified by simulation and by the machining of standardized test pieces. The simulation enabled prior identification of possible collisions between the machine elements during program execution, as well as verification of whether the parallel mechanism motion remains within the boundaries of the machine workspace.
The developed mini-laboratory and desktop 3-axis PKMM represents a comprehensive and sophisticated didactic facility. It can machine soft materials, it is programmable in a conventional way, and it is completely safe for beginners to use.
Fig. 6. A simplified assembly of the skeleton model with constraints
Fig. 7. The application of a skeleton model in configuring
Fig. 8. Design strategy for the development of the mini-laboratory and desktop 3-axis PKMM
Fig. 13. Coordinate system of the workpiece and tool with tool path simulation | 5,574.8 | 2015-01-15T00:00:00.000 | [
"Materials Science",
"Computer Science"
] |
Transformer Characteristics of Linear Motor-Transformer Apparatus
The characteristics of a linear transformer are studied analytically. The transformer constitutes one of the operating modes of the linear motor-transformer apparatus proposed for a future wireless light rail vehicle (LRV). The secondary (onboard) power factor can be adjusted to any value by an onboard converter. The equivalent circuit is used to study the transferred power control. The parameters are determined by three-dimensional finite element method (FEM) analysis for a one pole-pair model. Under the rated primary (input) and secondary voltage and current, which are specified for linear motor operation, the characteristics as functions of the secondary power factor are clarified. It is also shown that an input capacitor can improve the primary power factor and decrease the input power capacity, but does not change the efficiency. This linear transformer has an efficiency of 91% and an input power factor of 0.87 when the apparatus without an input capacitor is controlled at a secondary power factor of 0.4.
Introduction
A new type of public transportation that is in harmony with the environment has been sought for future urban transit systems [1][2][3]. We have proposed the linear motor-transformer apparatus for overhead-wireless and non-contact power collection of a light rail vehicle (LRV), which functions both as a secondary-current-controlled linear induction motor and as a linear transformer [2,3]. The transformer and linear motor modes can be switched solely by a signal of the onboard converter, as shown in Figure 1. The transformer mode, which produces no thrust, is used for non-contact charging of the onboard battery at standstill at a station. The charging power and the secondary power factor can each be controlled by the onboard converter.
In this paper, the power supply characteristics of the transformer are studied analytically. The equivalent circuit of the transformer is used to compute the characteristics. The parameters in the equivalent circuit are obtained by analysis using the three-dimensional finite element method (FEM). In this transformer, the desirable primary power factor can be obtained by controlling the amplitude and phase of the secondary voltage and current. The characteristics of the primary power factor and efficiency are therefore clarified as functions of the secondary power factor, under the limits of the primary voltage, primary current, and secondary current. The effect of a series- or parallel-connected input capacitor for obtaining unity power factor at the primary side is also studied.
Analytical Model
Figure 1 shows the configuration for each operating mode. For an economical configuration, the primary winding on the ground is a concentrated single-phase winding supplied by a commercial power source. The secondary winding on board is a double-layer distributed winding with a hexagonal shape in the end winding. The onboard converter operates as a rectifier for the transformer (single-phase) and as a two-phase inverter for the linear motor. Figure 1(a) shows the formation of the linear transformer with a single-phase primary and single-phase secondary configuration, in which no thrust is generated at this position. The model with numerical values for one pole-pair length is shown in Figure 2. The rated collecting power per vehicle is 200 kW and the rated power per one pole-pair length is 4.2 kW in this design. The main specifications are shown in Table 1. The rated values of voltage and current are determined so as to obtain the rated thrust in the linear motor operation mode, because the apparatus is used as both a transformer and a motor.
Figure 3 shows the model for the three-dimensional analysis, in which the FEM tool JMAG (made in Japan) is used. The periodic method in the longitudinal direction and the mirror image method in the lateral direction are applied.
Equivalent Circuit
The equivalent circuit of this transformer is expressed as shown in Figure 4. The values of the element parameters can be estimated by using FEM analysis [4]. The basic electrical circuit can be modified into the machinery equivalent circuit with the leakage inductances, the exciting inductance, and the special corrected inductance [4]. Although that expression is intelligible, it is difficult to estimate the equivalent turn ratio of primary to secondary exactly, as shown later in Figure 5. Therefore, in the following study, the equivalent circuit of Figure 4 is used, and the values of its elements are treated as constants. When the primary and secondary members are in the position of Figure 2 and each secondary current is controlled in the same manner, M1,a = M1,b and L2,a = L2,b. The values of the constants obtained by three-dimensional FEM are shown in Table 2.
Figure 5 shows the ratio of primary to secondary voltage and current. These ratios are not constant and differ considerably from the turn ratio of the windings.
Characteristics
In the following, the condition of the rated collection power of 4.2 kW is dealt with. In this transformer, the converter is used to control the charging power and to obtain the desirable primary power factor by controlling the secondary power factor. Although the primary voltage is fixed in practical use, it is difficult to treat the voltage as fixed at this stage because the relation between the primary and secondary voltage changes as shown in Figure 5. The secondary power factor controlled by the converter is therefore used as a parameter to clarify the characteristics of this type of transformer.
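To illustrate how the equivalent circuit is used to obtain the curves discussed below, the sketch models the apparatus as a pair of coupled coils with winding resistances; the element values are placeholders (the actual constants are those of Table 2 from the FEM analysis), so only the procedure, not the numbers, corresponds to the paper. For a chosen secondary current and secondary power factor at the rated output of 4.2 kW, the secondary voltage follows from V2 = P2/(I2·cosφ2), and the primary voltage, current, power factor, and copper-loss-only efficiency follow from the phasor equations of the coupled circuit.

```python
import cmath
import math

# Placeholder circuit constants (the real values come from Table 2 / 3-D FEM).
F = 50.0                       # Hz, supply frequency (assumed)
W = 2 * math.pi * F
R1, R2 = 0.10, 0.05            # ohm, winding resistances (assumed)
L1, L2, M = 30e-3, 6e-3, 9e-3  # H, self and mutual inductances (assumed)
P2 = 4.2e3                     # W, rated collecting power per pole pair

def primary_quantities(i2_rms, cos_phi2):
    """Return (|V1|, |I1|, primary power factor, efficiency) for a given
    secondary current magnitude and secondary power factor at rated output P2."""
    phi2 = math.acos(cos_phi2)
    I2 = complex(i2_rms, 0.0)                               # secondary current as phase reference
    V2 = (P2 / (i2_rms * cos_phi2)) * cmath.exp(1j * phi2)  # converter-side voltage phasor
    # Coupled-coil equations: V2 = jwM*I1 - (R2 + jwL2)*I2  ->  solve for I1.
    I1 = (V2 + (R2 + 1j * W * L2) * I2) / (1j * W * M)
    V1 = (R1 + 1j * W * L1) * I1 - 1j * W * M * I2
    s1 = abs(V1) * abs(I1)                                  # input apparent power
    p1 = (V1 * I1.conjugate()).real                         # input active power
    eff = P2 / (P2 + R1 * abs(I1) ** 2 + R2 * i2_rms ** 2)  # copper losses only
    return abs(V1), abs(I1), p1 / s1, eff

for pf2 in (1.0, 0.8, 0.6, 0.4):
    print(pf2, primary_quantities(110.0, pf2))
```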
Figure 6 shows the secondary voltage curves as functions of the secondary power factor at the rated secondary power. The characteristics are examined for secondary currents of 110 A (the rated value), 90 A, and 50 A. The secondary voltage increases as the secondary power factor or the secondary current decreases. As the secondary voltage is limited to the rated voltage of 82 V, which is determined by linear motor operation, the secondary power factor must be above 0.23 at the rated secondary current. When the secondary current is smaller, the secondary voltage increases under the constant output of 4.2 kW, and the usable region of the secondary power factor becomes narrower.
Figure 7 shows the primary voltage versus secondary power factor characteristics. The primary voltage is not proportional to the secondary voltage. This is quite different from a conventional transformer. At the rated secondary current, the primary voltage decreases as the secondary power factor decreases in the power factor range above 0.5. As the primary voltage is also limited to the rated value of 220 V, the usable region of the secondary power factor is above 0.23.
Figure 8 represents the primary current characteristics. The primary current increases as the secondary current increases at a secondary power factor of 1.0. On the other hand, the primary current decreases as the secondary current increases in the region of a secondary power factor of 0.3. This figure shows that the usable region of the secondary power factor is from 0.18 to 0.92 under the condition of the rated primary current.
From Figures 6 to 8, the usable region of the secondary power factor for control is determined to be from 0.23 to 0.92 by the combined conditions of the rated primary voltage, primary current, and secondary voltage.
Figure 9 shows the primary power factor characteristics as functions of the secondary power factor. When the secondary power factor is controlled to be 1.0, the primary power factor is very low for any secondary current because the magnetic coupling between the primary and secondary members is weak due to the large air gap. At the rated secondary current, the primary power factor increases as the secondary power factor decreases in the region of secondary power factor between 0.4 and 1.0.
The extreme value of the primary power factor increases as the secondary current increases. The maximum primary power factor can be 0.87 at a secondary power factor of 0.4 for the rated secondary current.
Figure 10 shows the curves of input capacity as functions of the secondary power factor. The minimum capacity is smaller when the secondary current is larger. The rated secondary current of 110 A at a secondary power factor of about 0.4 gives the minimum capacity. When the secondary power factor is controlled to be 1.0, the input capacity is about four times larger than the minimum value. Figure 11 shows the efficiency characteristics, computed taking into account only the copper loss in the primary and secondary windings. For the rated secondary current of 110 A, at which the current density is 2.21 A/mm², the efficiency is 91% when the secondary power factor is 0.4.
Figure 12 represents the ratio between the primary and secondary copper losses. These losses directly influence the efficiency. The current densities of the primary and secondary windings are 2.00 A/mm² and 2.21 A/mm², respectively, at the rated currents. When the secondary current is the rated value of 110 A, the ratio of secondary loss to primary loss increases sharply as the secondary power factor becomes smaller than 1, and the ratio is 5.7 at a secondary power factor of 0.4. As the secondary resistance strongly affects the efficiency, a smaller secondary current density would bring higher efficiency.
Effect of Input Capacitor
In the following, the effect of an input capacitor is studied to improve the input capacity. The capacitor is connected in series or in parallel at the input side of the linear transformer, as shown in Figure 13. In these cases, the input capacity S1 is defined as V1I1 and the primary power factor is defined as the ratio of the input effective power to the input apparent power V1I1.
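A brief sketch of how the compensating capacitance can be sized: for the series connection, unity input power factor requires the capacitor reactance to cancel the reactive power drawn by the primary winding at the given operating point, while for the parallel connection the capacitor must supply that reactive power at the primary voltage. The operating point (V1, I1) would come from the coupled-circuit sketch above, so the resulting values are illustrative rather than the capacitances quoted below.

```python
import math

def compensating_capacitance(V1, I1, omega):
    """Capacitance (F) giving unity input power factor for the two connections.

    V1, I1: complex primary voltage and current phasors of the uncompensated
    apparatus; omega: angular supply frequency (rad/s).
    """
    q1 = (V1 * I1.conjugate()).imag           # reactive power drawn by the primary
    c_series = abs(I1) ** 2 / (omega * q1)    # series: 1/(wC) * |I1|^2 = Q1
    c_parallel = q1 / (omega * abs(V1) ** 2)  # parallel: wC * |V1|^2 = Q1
    return c_series, c_parallel

# Example with hypothetical phasors (volts, amperes) at 50 Hz:
print(compensating_capacitance(complex(200, 80), complex(20, -15), 2 * math.pi * 50))
```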
Series Capacitor
Figure 14 shows the primary power factor curves as functions of the capacitance of the series-connected input capacitor when the output power is the rated value and the secondary power factor is kept at unity. The capacitance giving a primary power factor of unity depends on the value of the secondary current. In the following section, capacitors with capacitances of 2.61 mF for the rated secondary current of 110 A, 2.43 mF for I2 = 90 A, and 1.83 mF for I2 = 50 A are used so that both the primary and secondary power factors are unity at the rated output power. With the series capacitor connected, the primary power factor is markedly improved, as shown in Figure 15. At the rated secondary current of 110 A, the primary power factor is above 0.9 in the wide region of secondary power factor above 0.3. The input apparent power shown in Figure 16 is kept at nearly its minimum value in the region of secondary power factor above 0.3 at the rated output power and secondary current. However, the minimum value of the primary apparent power with the series capacitor connected is almost equal to that in the case without an input capacitor, which is indicated in Figure 10.
Figure 17 represents the efficiency curves with the series-connected input capacitor. These values are practically equal to those in the case without an input capacitor. The copper loss in the primary winding is determined by the value of the current in the primary winding.
The input capacitor does not change the current in the windings, which is determined by the design parameters of the apparatus. The capacitor acts to decrease the primary terminal voltage of the apparatus.
Parallel Capacitor
Figure 18 indicates the capacitance for a primary power factor of unity at the rated output power and a secondary power factor of unity, which is 2.46 mF for the rated secondary current, 2.22 mF for I2 = 90 A, and 1.74 mF for I2 = 50 A. These capacitances are slightly smaller than those for the series capacitor.
Figure 19 shows the primary power factor as a function of the secondary power factor in the case with a parallel input capacitor. The primary power factor at the rated secondary current of 110 A is about 1.0 in the region of secondary power factor between 0.65 and 1.0. The primary apparent power characteristics shown in Figure 20 are improved significantly compared to the case without a capacitor, as the parallel capacitor acts to reduce the input current, although it does not change the input voltage, which is equal to the voltage of the primary winding.
Comparison
Figure 21 shows the comparison of the input apparent power among the cases with a series capacitor, a parallel capacitor, and without a capacitor. The region of secondary power factor with the minimum value of primary apparent power is wider for the series capacitor than for the parallel capacitor. The minimum value in the case without a capacitor is almost equal to that in the case of a series or parallel capacitor, although that value is obtained only at the limited secondary power factor of about 0.4. In this apparatus, the usable region of the secondary power factor is above 0.23, which is obtained from the conditions of rated primary voltage, primary current, and secondary voltage for linear motor operation.
Figure 22 indicates the capacity of the series capacitor compared to that of the parallel capacitor at the rated secondary current with the optimum capacitance for the input power factor. Considering the capacity and the usable region of the secondary power factor, the series capacitor would be better than the parallel capacitor. However, the improvement of the apparent input power is not significant in the region of secondary power factor between 0.3 and 0.4, as shown in Figure 21.
Conclusions
1) It is clarified that the secondary power factor can be controlled in the range from 0.23 to 0.92 under the conditions of rated primary voltage, primary current, and secondary voltage for linear motor operation, since this apparatus is used as both a linear transformer and a linear motor.
2) When the input capacitor is used in series or in parallel, an input power factor of unity can be obtained at the secondary (output) power factor of unity. The effect of the input capacitor is seen in the input apparent power, which can be kept at nearly its minimum value in the region of secondary power factor above 0.3 for the series capacitor, or above 0.4 for the parallel capacitor, at the rated output power and secondary current.
3) The series input capacitor would be better than the parallel capacitor, considering the capacity and the usable region of the secondary power factor.
4) However, the improvement of the apparent input power is not significant compared to the minimum value in the case without a capacitor. The efficiency characteristics do not change if the input capacitor is removed.
5) In operation without an input capacitor, the efficiency is 91% and the input power factor is 0.87 when the secondary power factor is controlled at 0.4.
Figure 1. Configuration for each operating mode.
Figure 2. Analytical model and dimensions for one pole-pair length.
Figure 5. Ratio of voltage and current between primary and secondary: (a) voltage, (b) current.
Figure 21. Comparison of input apparent power for the cases with a series capacitor, a parallel capacitor, and without a capacitor.
Figure 22. Capacity of the series capacitor compared to that of the parallel capacitor at the rated secondary current with the optimum capacitance for the input power factor. | 3,572.8 | 2011-10-24T00:00:00.000 | [
"Engineering",
"Physics"
] |
Nonlinear theory of the modulational instability at the ion-ion hybrid frequency and collapse of ion-ion hybrid waves in two-ion plasmas
We study the dynamics of two-dimensional nonlinear ion-ion hybrid waves propagating perpendicular to an external magnetic field in plasmas with two ion species. We derive nonlinear equations for the envelope of electrostatic potential at the ion-ion hybrid frequency to describe the interaction of ion-ion hybrid waves with low frequency acoustic-type disturbances. The resulting nonlinear equations also take into account the contribution of second harmonics of the ion-ion hybrid frequency. A nonlinear dispersion relation is obtained and, for a number of particular cases, the modulational instability growth rates are found. By neglecting the contribution of second harmonics, the phenomenon of collapse of ion-ion hybrid waves is predicted. It is shown that taking into account the interaction with the second harmonics results in the existence of a stable two-dimensional soliton.
I. INTRODUCTION
The presence of several species of ions is often found in both space and laboratory plasmas. In particular, space plasmas in most cases consist of several species of ions, and the relative concentration of the different species can vary over a fairly wide range. For example, the ionospheric and plasmaspheric plasmas are composed of several species of ions [1], and in the upper ionosphere O+ ions with a small addition of He+ are predominant. Phenomena in multi-ion space plasmas have been intensively studied for many years [2][3][4][5]. In laboratory conditions, a plasma with two ion species is of great interest primarily in relation to the ion cyclotron resonance frequency (ICRF) heating method in plasma magnetic confinement devices, where one of the most successful schemes involves minority species heating at the ion-ion hybrid resonance or at the minority cyclotron frequency [6][7][8] in H-D plasma. Recently, efficient plasma heating with the three-ion ICRH scenario, with a small amount of 3He ions in an H-D mixture, was suggested in Refs. [9,10]. The presence of two ion species is inherent to dusty plasmas [11][12][13], where, in addition to the main ion component, a second population of heavy, micron-size ions is present. Two-ion plasmas, although unmagnetized, also arise naturally in inertial thermonuclear fusion experiments [14].
In a magnetized plasma consisting of electrons and two ion species with different charge-to-mass ratios, in addition to the lower- and upper-hybrid resonances there is the so-called ion-ion hybrid resonance at the frequency ω_ii defined by Eq. (1) and first introduced by Buchsbaum [15]. Here, ω_pα and Ω_α are the plasma frequency and gyrofrequency of the ions of species α = 1, 2, respectively, with ω_pα² = 4πZ_α²e²n_0α/m_α and Ω_α = Z_α e B_0/(m_α c), where e is the elementary charge and n_0α, m_α and Z_α are the equilibrium density, mass, and charge number of the ions of species α = 1, 2, respectively. Overall charge neutrality n_01 + n_02 = n_0e = n_0 is assumed, where n_0 is the equilibrium plasma density and n_0e is the equilibrium electron density. From Eq. (1) one can see that the ion-ion hybrid frequency ω_ii lies between the gyrofrequencies Ω_1 and Ω_2 of the ions of different species. Note also that ω_ii is determined only by the magnetic field B_0 and the relative population of each ion species. The presence of an additional type of ions in a magnetized plasma significantly modifies the dispersion relation and leads to the appearance of new branches of plasma oscillations that are absent in the single-ion case. The properties of such a plasma differ in many respects from the properties of a single-ion plasma. The linear theory of wave propagation in plasmas with two species of ions, including inhomogeneous plasmas, has been considered in quite a few works (see, e.g., Refs. [16][17][18][19][20][21]). Parametric instabilities in a plasma with two ion species were investigated in Refs. [22][23][24][25], where the standard kinetic method for studying parametric instabilities [26,27] was used, as well as in Refs. [28,29] within the framework of a fluid model.
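For a numerical feel for Eq. (1), the sketch below evaluates the ion-ion hybrid frequency using the standard Buchsbaum expression ω_ii² = (ω_p1²Ω_2² + ω_p2²Ω_1²)/(ω_p1² + ω_p2²), which is assumed here to be the form of Eq. (1) (the equation itself is not reproduced in the extracted text), for an O+/He+ plasma with rough, assumed parameters, and checks that it lies between the two gyrofrequencies.

```python
import math

E_CGS = 4.8032e-10      # statcoulomb, elementary charge
C_CGS = 2.9979e10       # cm/s, speed of light
M_P   = 1.6726e-24      # g, proton mass

def gyro(Z, m, B0):
    """Ion gyrofrequency Omega = Z e B0 / (m c) in rad/s (Gaussian units)."""
    return Z * E_CGS * B0 / (m * C_CGS)

def plasma_freq_sq(Z, m, n):
    """Squared ion plasma frequency 4 pi Z^2 e^2 n / m."""
    return 4.0 * math.pi * Z**2 * E_CGS**2 * n / m

def ion_ion_hybrid(B0, n1, m1, n2, m2, Z1=1, Z2=1):
    """Buchsbaum ion-ion hybrid frequency (rad/s), assumed form of Eq. (1)."""
    wp1s, wp2s = plasma_freq_sq(Z1, m1, n1), plasma_freq_sq(Z2, m2, n2)
    O1, O2 = gyro(Z1, m1, B0), gyro(Z2, m2, B0)
    return math.sqrt((wp1s * O2**2 + wp2s * O1**2) / (wp1s + wp2s))

# Topside-ionosphere-like example (values are rough assumptions): O+ majority
# with a small He+ admixture in B0 = 0.3 G.
B0 = 0.3                        # gauss
n1, m1 = 1.0e5, 16.0 * M_P      # O+ density (cm^-3) and mass
n2, m2 = 5.0e3, 4.0 * M_P       # He+ density (cm^-3) and mass
w_ii = ion_ion_hybrid(B0, n1, m1, n2, m2)
print(w_ii, gyro(1, m1, B0), gyro(1, m2, B0))  # omega_ii lies between the gyrofrequencies
```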
The linear theory is valid only for sufficiently small wave amplitudes, when nonlinear effects can be neglected. Nonlinear coherent structures in plasma, in particular solitons, have been the subject of intensive theoretical study for several decades and have been experimentally observed both in laboratory and space plasmas [30][31][32]. In a broad sense, a soliton is a localized structure (not necessarily one-dimensional) resulting from the balance of dispersion and nonlinearity effects. Multidimensional solitons often turn out to be unstable, and the most well-known phenomena in this case are wave collapse and wave breaking [33][34][35]. Despite the obvious importance of studying nonlinear phenomena occurring in a plasma with two species of ions in the vicinity of the ion-ion hybrid frequency, the corresponding nonlinear theory, especially in the multidimensional case, has not been sufficiently developed, in contrast to the lower-hybrid (LH) and upper-hybrid (UH) resonances, for which nonlinear phenomena have been studied in more detail. In particular, in the one-dimensional case, various types of solitons, including envelope solitons, were discovered at the LH [36][37][38] and UH frequencies [39][40][41]. In the multidimensional case, the phenomenon of collapse of LH [42][43][44][45][46] and UH waves [47,48] was predicted. Taking nonlocal nonlinearity into account, stable two-dimensional UH solitons and vortex solitons were found in Ref. [49].
Note that one-dimensional solitons in a plasma with two species of ions were considered in a number of works [50][51][52][53][54][55][56][57][58], but, with the exception of Refs. [50,53], these were not envelope solitons at the ion-ion hybrid frequency (that is, they were not ion-ion hybrid solitons) but rather solitons in frequency ranges much lower or much higher than the ion-ion hybrid frequency ω_ii.
One-dimensional (1D) nonlinear waves near the ion-ion hybrid frequency ω_ii were considered in Refs. [50,53]. In both of those works, equations for the wave envelope at the ion-ion hybrid frequency ω_ii were derived. In Ref. [50], an equation with nonlocal nonlinearity was obtained; however, the nonlinearity was incorrectly taken into account because the usual striction nonlinearity associated with the ponderomotive force was neglected in comparison with the nonlocal nonlinearity due to the interaction with the second harmonics. In Ref. [53], the 1D nonlinear Schrödinger equation (of both focusing and defocusing types) was obtained. In that work, however, the action of the ponderomotive force of the HF field of ion-ion hybrid waves on the ions was completely neglected, although the ion contribution is comparable to (and sometimes exceeds) the electron contribution. Besides, the dispersion of low-frequency waves was incorrectly taken into account, so that the results obtained in Ref. [53] have a very limited area of applicability. We also note that in the one-dimensional theory there is no possibility of taking into account the vector (gyrotropic) nonlinearity, similar to the nonlinearity that occurs near the LH resonance [42,44].
The aim of this paper is to obtain two-dimensional (2D) nonlinear equations to describe the interaction of ion-ion hybrid waves with low-frequency (LF) acoustic-type disturbances. In the equation for the envelope we also take into account the interaction with second harmonics of the ion-ion hybrid frequency. In the case where the second harmonics can be neglected, we predict a collapse of ion-ion hybrid waves, similar to the collapse of LH and UH waves. Taking into account the additional nonlinearity associated with the second harmonic, however, leads to a stable 2D soliton.
The paper is organized as follows. In Sec. II, we derive a set of nonlinear equations for the wave envelope and LF ion density perturbations. A nonlinear dispersion relation is obtained in Sec. III. In Sec. IV the phenomenon of collapse of ion-ion hybrid waves is predicted. A stable 2D soliton, taking into account the second harmonic, is found in Sec. V. Finally, Sec. VI concludes the paper.
II. MODEL EQUATIONS
For a cold plasma containing two ion species and immersed in a homogeneous external magnetic field B_0 = B_0 ẑ, where ẑ is the unit vector along the z-direction, the linear dispersion relation for electrostatic waves propagating normal to B_0 is given by Eq. (2) [59], where ω_pe and Ω_e are the electron plasma frequency and electron cyclotron frequency, respectively. In the high-frequency range ω ≫ ω_p1,2, Ω_1,2, where only electrons take part in the plasma motion, we recover the upper-hybrid frequency ω_UH. In the intermediate frequency range Ω_1, Ω_2 ≪ ω ≪ ω_pe, Ω_e, the solution of Eq. (2) yields the frequency of the lower-hybrid resonance ω_LH. Here, only ions play an active role in the motion (the role of electrons is reduced to screening). Assuming ω ≪ Ω_e, Eq. (2) can be simplified accordingly, and in the lowest frequency range Ω_1, Ω_2 ≪ ω_p1, ω_p2 it can be simplified further to Eq. (7), which yields the ion-ion hybrid frequency ω_ii determined by Eq. (1). In this case, ions of both species equally take part in the plasma motion and move in opposite phases, and it is this, compared with the cases of the UH and LH resonances, that complicates the description of the behavior of electrostatic waves near the ion-ion hybrid resonance.
In this section, we derive nonlinear equations to describe the dynamics of waves near the ion-ion hybrid frequency ω_ii. We consider the case of an arbitrary ratio of the ion densities n_01 and n_02, as well as arbitrary ratios of the electron and ion temperatures. The reason is that in the plasma of the Earth's ionosphere, for example, the ratios of these quantities can, depending on the altitude, be either comparable to each other or differ significantly (sometimes by orders of magnitude) in one direction or the other [1].
A. High-frequency disturbances
The basic equations governing the plasma dynamics are the fluid equations of motion and continuity of the ions of both species and the electrons, where n_α, v_α, p_α, e_α and m_α are the density, velocity, pressure, charge, and mass of the particle species α = e, i (electrons and ions), respectively. For the gas kinetic pressure we take p_α = γ_α n_α T_α, where T_α is the temperature and γ_α is the ratio of specific heats, and in what follows we introduce the notation v_Tα = (γ_α T_α/m_α)^{1/2} for the particle thermal velocity. Equations (10) and (11) are supplemented by the Poisson equation (12). Following the well-known idea, dating back to the original work by Zakharov [60], of separating the slow and fast time scales and averaging over the fast time, we represent the electrostatic potential, velocities, and densities in the form of Eqs. (13)-(15), where c.c. stands for the complex conjugate, and v_α^{(1),(2)} and n_α^{(1),(2)} are assumed to vary on a time scale much slower than 1/ω_ii. The contribution of the second harmonic of the frequency ω_ii is taken into account in Eqs. (13), (14) and (15), and we assume that the amplitudes of the first harmonic are much larger than the others. Note that, as shown in Refs. [32,61,62], taking into account the second harmonic of the Langmuir frequency ω_pe in a nonmagnetized plasma may halt Langmuir collapse in two or three dimensions.
The ion-ion hybrid waves have wave numbers almost normal to the external magnetic field (k_z ≪ k_⊥), and in this paper we restrict ourselves to the 2D case of perpendicular propagation, when the condition (16) is satisfied. Substituting Eq. (14) into Eq. (10), we obtain Eq. (17), where ∇_⊥ = (∂/∂x, ∂/∂y). Substituting Eq. (15) into Eq. (11), we find Eq. (19), where J_α is the nonlinear current. As noted above, electrons do not take part in the plasma motion at the frequency of the ion-ion hybrid resonance, and ions of different species move in opposite phases, so we can write Eq. (21). In zero order in iω_ii ∂_t/(Ω_α² − ω_ii²) ≪ 1, neglecting the thermal dispersion and nonlinearity, from Eq. (17) we obtain the perpendicular velocity (22) and then from Eq. (19) the density perturbation (23). In the next order, taking into account the thermal dispersion and nonlinearity, one can obtain Eq. (24), where we have neglected non-stationary corrections ∼ ∂_t ∇n_α and ∼ ∂_t F^{(1)} in the terms responsible for thermal dispersion and nonlinearity, respectively. Using Eq. (24), we substitute the result into Eq. (19) and get Eq. (25). In Eq. (25) we substitute the zero approximation (23) into the term with Δn_α responsible for the weak thermal dispersion, and then, multiplying Eq. (25) by 4πe_α, we use Eq. (21). As a result, taking into account Eq. (26), one can finally obtain Eq. (27), where the right-hand side of Eq. (27) corresponds to the nonlinear terms. Equation (27) can be rewritten in the form of Eq. (28), where R is the dispersion length defined by Eq. (29). In the linear approximation, taking ϕ^{(1)} ∼ exp(ik_⊥·r − iωt), where ω and k_⊥ are the frequency and perpendicular wave vector, respectively, Eq. (28) yields the dispersion relation (30) of the ion-ion hybrid wave. From Eqs. (10) and (11), for the second harmonic perturbations v_α^{(2)} and n_α^{(2)} we obtain Eqs. (31) and (32), respectively, where v_0α and n^{(1)}_0α are determined by Eqs. (22) and (23). The perturbation of the electrostatic potential at the second harmonics, ϕ^{(2)}, is determined from Eq. (12), where we neglect the electron contribution at the frequency 2ω_ii, as before at ω_ii.
B. Low-frequency disturbances
For the LF disturbances with ω ≪ Ω_1, Ω_2, ions of both species are strongly magnetized and move only along the external magnetic field. The LF motion is then governed by the continuity equation (34) and the parallel momentum equation (35) for each ion species α = 1, 2, where F_α is the ponderomotive force (per unit ion mass) acting on the ions due to the high-frequency (HF) pressure of the ion-ion hybrid waves, and the angular brackets denote the average over the fast time. From Eqs. (34) and (35) we obtain Eq. (37). For inertialess electrons in slow motions, one can write the force balance equation along the magnetic field, where F_e is the ponderomotive force (per unit electron mass) acting on the electrons. From Eqs. (14), (36) and (39) we find Eq. (40). The expressions for the perpendicular ion velocity v_α,⊥ and electron velocity v^{(1)}_{e,⊥} follow from Eq. (22). For the parallel ion and electron velocities v^{(1)}_{α,z} and v^{(1)}_{e,z}, the corresponding expressions follow from Eqs. (10) and (14).
The quasineutrality condition reads as Eq. (49). Substitution of Eq. (49) into Eq. (37), taking into account Eqs. (46) and (50), gives, after some transformations, Eqs. (51) and (52), where v_s1 = (T_e/m_1)^{1/2} and v_s2 = (T_e/m_2)^{1/2} are the ion sound speeds of species 1 and 2, respectively, and we have introduced the notation ν_α = n_0α/n_0 (α = 1, 2) for the relative ion concentration. When obtaining Eqs. (51) and (52), we took into account that the electron contribution to the scalar nonlinearity turns out to be smaller than the contributions of the other terms by a factor of order ∼ m_α/m_e, and thus it can be neglected. The contributions of the electron and ion vector nonlinearities, as well as the ion scalar nonlinearity, are of the same order. In the linear approximation, assuming δn_α ∼ exp(ik_z z − iΩt), Eqs. (51) and (52) give the dispersion relation (53), where k_z is the parallel wave number. The linear dispersion relation (53), corresponding to two modes in a plasma with two ion species, was first obtained in Ref. [21]. Introducing the notation q = (k, ω) and using the convolution identity (f and g are arbitrary functions), from Eqs. (51) and (52) one can write explicit expressions (55) and (56) for the ion density perturbations in Fourier space (taking ∼ exp(ik·r − iΩt)). Equations (51) and (52) describe the dynamics of LF acoustic-type disturbances (in the linear case corresponding to two branches of ion-ion sound) under the action of the ponderomotive force of the HF field of the ion-ion hybrid wave.
C. Neglecting second harmonics
Equations (28) and (31)-(33) for the HF motions, along with Eqs. (51) and (52) for the LF disturbances, form a closed system of nonlinear equations for the HF envelope of the electrostatic potential ϕ^{(1)} and the LF ion density perturbations δn_1 and δn_2. It can be seen, however, that due to the nonlinear terms corresponding to the second harmonic of the ion-ion hybrid frequency (that is, containing ϕ^{(2)}, v_α^{(2)}, and n_α^{(2)}), this system turns out to be extremely cumbersome and very difficult to analyze. In particular, taking into account the second harmonics leads to terms containing F_α. In addition, vector nonlinearities have, generally speaking, the same order as scalar ones. This distinguishes the case under consideration from the case of nonlinear upper-hybrid waves, where, under reasonable conditions, the vector nonlinearity can be neglected, as well as from the case of lower-hybrid waves, where, on the contrary, the vector nonlinearity is always dominant. However, the system of equations (28) and (31)-(33) is greatly simplified for radially symmetric field distributions. We show below in Sec. V that taking into account the contribution of the second harmonics results in the existence of a stable 2D soliton, but for now we neglect this contribution. Then, Eq. (28) takes the form (58) (here and below denoting ϕ = ϕ^{(1)}), which can be rewritten as Eq. (59). The first term in square brackets on the right-hand side of Eq. (59) corresponds to the scalar nonlinearity, and the second term, in the form of a Poisson bracket, corresponds to the so-called vector nonlinearity. The latter vanishes identically in the one-dimensional case and also for radially symmetric field distributions.
III. NONLINEAR DISPERSION RELATION
In this section we consider the linear theory of the modulational instability of a pump wave with a frequency close to the ion-ion hybrid frequency ω_ii in the framework of the model equations (51), (52) and (59). We decompose the ion-ion hybrid wave into the pump wave and two sidebands, i.e.
where δ_k = ω_ii k_⊥² R²/2, while the low-frequency perturbations of the ion plasma densities are expressed as in Eq. (62). The amplitudes of the up-shifted ϕ_+ and down-shifted ϕ_− satellites can be calculated from Eq. (60); we obtain Eqs. (63) and (64), where D_± is the Fourier transform of the linear operator on the left-hand side of Eq. (59) evaluated at k ± q and δ_k ± Ω. The amplitudes of the LF perturbations ñ_1 and ñ_2 can be found from Eqs. (55) and (56), where Ω²_+ and Ω²_− are determined by Eq. (53). By combining Eqs. (63), (64), (68), and (69) we obtain a nonlinear dispersion relation (73). Note that in the case of coplanar (in the plane perpendicular to the magnetic field) wave vectors k_⊥ ∥ q_⊥, the parametric coupling of the waves due to the vector nonlinearity is absent, while the coupling due to the scalar nonlinearity is the most effective. In the general case k_⊥ ∦ q_⊥ both types of nonlinearity yield comparable contributions, and this, taking into account that Eq. (73) is an equation of the sixth degree in Ω, leads to a rather complicated picture of the instability. Significant simplifications are possible in a number of special cases. For example, assuming that Ω ≪ Ω_−, Ω_+, k_⊥ ∥ q_⊥ and k_⊥ ≫ q_⊥, the dispersion equation (73) after direct calculations can be reduced to the form (74), where v_g = ω_ii k_⊥ R is the group velocity of the ion-ion hybrid wave, and we have introduced the notations (75) and (76) for the coefficients F and G, which we will use in what follows. Equation (74) predicts instability with the growth rate γ = |Im Ω|.
In the opposite case k_⊥ ≪ q_⊥, one can obtain Eq. (78), which describes a purely growing instability with the growth rate given by Eq. (77). For example, for the upper F region/topside ionosphere (∼ 500 km), taking Ω_1 ∼ 2 × 10² s⁻¹ in accordance with Ref. [1] (subscripts 1 and 2 correspond to O+ and He+, respectively, and the concentration of H+ is two orders of magnitude less than the concentration of O+), the estimate for the threshold field at q_⊥R ∼ 0.1 is E_0 ∼ 100 mV/m.
IV. COLLAPSE OF ION-ION HYBRID WAVES
In this section, neglecting the second harmonics, we discuss the possibility of collapse of the ion-ion hybrid waves. Let us consider the important case when we can neglect the time derivatives in the LF equations (51) and (52). Physically, this corresponds to the balance between the gas-kinetic and wave (ponderomotive) pressures. In this static approximation (the "subsonic" case), from Eqs. (51) and (52) one can obtain Eqs. (79) and (80) for the LF perturbations of the ion densities δn_1 and δn_2, which involve the Poisson bracket {ϕ^{(1)}, ϕ^{(1)*}} defined in Eq. (81), and where G is determined by Eq. (76). The perturbation of the ion densities due to the vector nonlinearity can correspond to either a density well or a hump and depends on the relative phase of ϕ and ϕ*. Introducing dimensionless variables by Eq. (83), where F is determined by Eq. (75), and substituting Eqs.
(79) and (80) into Eq. (60), one can obtain Eq. (85), where we have introduced the dimensionless coefficients c_1, c_2 and c_3, and where G_1, G_2 and G_3 are determined in the Appendix. The nonlinear equation (85) for the envelope ψ differs significantly from the corresponding equations (in the subsonic case) for nonlinear Langmuir, lower-hybrid and upper-hybrid waves. If there is only scalar nonlinearity, that is c_1 = 0, c_2 = 0 and c_3 = 0, Eq. (85) reduces to the well-known equation for nonlinear Langmuir waves [60]. The case c_2 = 0 and c_3 = 0 corresponds to nonlinear upper-hybrid waves (with another dimensionless coefficient c_1) [49]. If the scalar nonlinearity can be neglected, and also c_2 = 0 and c_3 = 0 (again with another dimensionless coefficient c_1), Eq. (85) becomes the equation for the envelope at the lower-hybrid frequency [42][43][44][45]. In all these cases, the corresponding dimensionless variables for the time and space coordinates and the electrostatic potential envelope are implied. The nonlinearities corresponding to the fifth (∝ c_2) and sixth (∝ c_3) terms in Eq. (85) have apparently never been considered in nonlinear problems before. Equation (85) conserves the plasmon number N, the momentum (with the corresponding momentum density), the angular momentum, and the Hamiltonian H given by Eq. (91). Note that the second term in H is a focusing nonlinearity, while the signs of the other terms (which, as one can easily see, are real since {ψ, ψ*} is purely imaginary) depend on the phase relation between ψ and ψ* and also on the signs of the coefficients c_1, c_2 and c_3, that is, on the ratio of the ion and electron temperatures, the masses of the ions of different species, and their relative concentrations. An essential feature of the considered model equation (85) is its two-dimensional nature and the cubic nonlinearity. A stationary solution of Eq. (85) in the form ψ(r, t) = Ψ(r) exp(iλ²t) corresponds to a stationary point of H for a fixed plasmon number N and resolves the corresponding variational problem. By analogy with Langmuir, upper-hybrid [33] and lower-hybrid [32,63] waves, multiplying Eq. (93) by Ψ*, and then integrating over the whole D-dimensional space, we get Eq. (94). Next, we consider an N-preserving scaling transformation Ψ^{(α)} and introduce the corresponding values. From Eqs. (94), (98) and (102) we then find Eq. (103). Equation (103) is identical to the relationship between the Hamiltonian H and the number of plasmons N for nonlinear Langmuir waves [32,33]. It can be seen that the reason for the coincidence is the same (dimensionless) linear parts and the cubic nature of the nonlinear terms.
Since for the considered 2D model we have H = 0 for stationary solutions, one can conclude that an arbitrary initial localized field distribution with H ≠ 0 never reaches a stationary state in the course of evolution, that is, it either spreads out or collapses. The Hamiltonian (91) is not positive definite, despite the fact that the third term in Eq. (91), as mentioned above, may have a defocusing character. A rigorous proof of the collapse of ion-ion hybrid waves (as well as of Langmuir waves in arbitrary geometry) is apparently a very difficult problem. Here we only point out, taking into account the arguments presented above, that with a negative initial Hamiltonian the collapse of two-dimensional ion-ion hybrid waves apparently occurs.
V. STABLE 2D RADIALLY SYMMETRIC SOLITON
In the radially symmetric case, the vector nonlinearities vanish identically, and Eqs. (22), (23) and (31) take the corresponding form, where E = −∇_⊥ϕ^{(1)} and E^{(2)} = −∇_⊥ϕ^{(2)} is the electric field at the second harmonics. Equation (33) can be rewritten accordingly. From Eqs. (32) and (107), we obtain Eq. (108). Using Eq. (106), we eliminate v_α^{(2)} in Eq. (108) and then substitute the expressions for n_α^{(2)}. Inserting Eq. (109) into Eq. (106) gives the required relation. It can be shown that the second term in Eq. (32) can be neglected, and then substituting Eq. (111) into Eq. (32) one can obtain the corresponding expression. The term corresponding to the contribution of the second harmonics on the right-hand side of Eq. (28) can be written as ∇_⊥·N^{(2)}. In the considered radially symmetric case, we are interested here only in the radial component E_r of the electric field E. Then, writing Eq. (60) through the electric field E with the additional term that takes into account the contribution of the second harmonics, and taking its radial projection, we obtain an equation for E_r, where Δ_r = ∂²/∂r² + (1/r)∂/∂r is the 2D radial Laplacian, and for the considered 2D case we have used the relation (ΔE)_r = Δ_r E_r − E_r/r². Further, as in the previous section, we consider the static approximation (79) and (80) for the ion density perturbations, and take into account that, in the radially symmetric case, the vector nonlinearities in Eqs. (81) and (82) vanish identically. Next, we use the dimensionless variables defined by Eq. (83) and introduce the dimensionless radial electric field E. From Eqs. (79), (80), (113) and (114), one can finally obtain Eq. (116) and the Hamiltonian (118). An equation similar to Eq.
(116) was obtained in Ref. [64], where the influence of electron-electron nonlinearities on unstable two-dimensional and three-dimensional Langmuir solitons was studied. In that work it was shown that the effective radius r_eff, defined there, is bounded from below provided Q ≠ 0, so that the additional nonlinear terms proportional to Q prevent collapse (the same applies to the 3D case). Moreover, it has also been shown that the gradient norm ∫|∂E/∂r|² d²r is bounded from above by the conserved quantities Ñ and H. The authors of Ref. [64] also numerically found the 2D soliton solution and demonstrated the stability of such a soliton by direct numerical simulation of the soliton dynamics within the framework of Eq. (116). Thus, we can conclude that, taking into account the additional nonlinearity associated with the second harmonics of the ion-ion hybrid frequency, in the radially symmetric case there exists a stable two-dimensional ion-ion hybrid soliton.
VI. CONCLUSION
We have derived a nonlinear system of equations for the envelope of the electrostatic potential at the ion-ion hybrid frequency to describe the interaction between ion-ion hybrid waves and LF acoustic-type disturbances in a magnetized plasma with two species of ions. The resulting nonlinear equations also take into account the contribution of the second harmonics of the ion-ion hybrid frequency. We have obtained a nonlinear dispersion relation predicting the modulational instability of ion-ion hybrid waves. For a number of particular cases, the modulational instability growth rates have been found. By neglecting the contribution of the second harmonics, the phenomenon of collapse of ion-ion hybrid waves is predicted. It has also been shown that taking into account the interaction with the second harmonics suppresses the collapse of ion-ion hybrid waves and results in the existence of a stable two-dimensional soliton. The developed theory is applicable to a wide range of theoretical and experimental problems in both space and laboratory plasmas (primarily devices with magnetic plasma confinement) with two species of ions.
A number of open questions remain to be addressed: 1) We have restricted ourselves to the 2D case, when the condition (16) is met and the ion-ion hybrid wave propagates perpendicular to the external magnetic field. In a more general case, it is necessary to take into account an additional term of the form ∼ (k_z²/k²)(m_α/m_e) in the dispersion relation (30) of the ion-ion hybrid wave. Then the model becomes three-dimensional and essentially anisotropic. The anisotropy of the models in the cases of the upper- and lower-hybrid resonances results in the absence of a stationary point of the Hamiltonian (for a fixed number of plasmons), that is, in this case there are no three-dimensional soliton solutions (even unstable ones) [32,34]. Note that the results obtained in Refs. [32,34] essentially use the cubic type of nonlinearity. A similar situation apparently occurs in the case of the ion-ion hybrid resonance.
2) The model under consideration takes into account the interaction of HF ion-ion hybrid waves only with electrostatic LF disturbances, which corresponds to a perturbation of the plasma density, and neglects the interaction with nonpotential LF disturbances of the Alfvén type, which would correspond to an LF perturbation of the magnetic field (the interaction of LF Alfvén waves with HF lower-hybrid and upper-hybrid waves was considered in Refs. [46,49]). Such neglect corresponds to the smallness of the magnetic pressure in comparison with the plasma gas-kinetic pressure, and is valid under the condition v_Aα ≪ v_sα, where v_Aα = B_0/(4πn_0α m_α)^{1/2} is the Alfvén velocity of ion species α.
3) Accounting for the second harmonics is not the only mechanism for stopping the collapse and obtaining stable 2D solitons. In the static approximation, the Boltzmann distribution of electrons and ions leads to a saturating exponential nonlinearity. Stable multidimensional Langmuir solitons with this type of nonlinearity were obtained in Refs. [65,66]. Then, apparently, stable 2D ion-ion hybrid solitons could exist without accounting for the contribution of the second harmonics. | 7,274.4 | 2024-04-01T00:00:00.000 | [
"Physics"
] |
Inequality for local energy of Ising models with quenched randomness and its application
We extend a lower bound on the average of local energy for the Ising model with quenched randomness [J. Phys. Soc. Jpn. 76, 074711 (2007)] to asymmetric distributions. Compared to the case of a symmetric distribution, our bound has a non-trivial additional term. Applying the attained bound to the Gaussian distribution, we obtain lower bounds on the expected value of the square of the correlation function. As a result, we show that, in the Ising model with a Gaussian random field, the spin-glass order parameter always has a finite value at any temperature, regardless of the form of the other interactions.
I. INTRODUCTION
Spin-glass models describe magnetic materials that interact spatially randomly. While the mean-field theory of spin-glass models, that is, the Sherrington-Kirkpatrick model, was solved rigorously by the full replica symmetry breaking solution [1][2][3][4], it is very difficult to obtain analytical results for finite-dimensional models, except on the Nishimori line [5]. While analytical approaches [6] have made only limited progress in two-dimensional systems, analyses of three-dimensional systems have remained largely untouched except for numerical analysis.
In ferromagnetic spin models, correlation inequalities play an important role in non-perturbative analysis and give us rigorous results for unsolvable models. Correlation inequalities are also valid for the Ising model with a random field. A recent study [7] proved, based on the Fortuin-Kasteleyn-Ginibre inequality, that there is no spin-glass phase in the random-field Ising model with two-body interactions for any lattice and field distribution. Therefore, it is expected that the concept of correlation inequalities plays an essential role in the rigorous analysis of spin-glass models, and it is a very important problem to establish correlation inequalities for spin-glass models.
There are some previous studies on correlation inequalities in spin-glass models. Recent studies [8,9] showed that the response of the quenched average of the partition function with respect to the variance is always positive, which is considered as the counterpart of the Griffiths first inequality in spin-glass models. In addition, for various bond randomness including the Gaussian distribution and the binary distribution, it has been shown that the counterpart of the Griffiths second inequality holds on the Nishimori line [10,11]. However, correlation inequalities as general as those in ferromagnetic spin models have not been obtained, and rigorous analysis based on correlation inequalities has not been carried out at a satisfactory level for spin-glass models.
In this paper, we obtain a lower bound on the average of local energy for the Ising model with quenched randomness. Although the result of the previous study [12] was limited to symmetric distributions, we generalize it to asymmetric distributions. Furthermore, as a simple application of the attained inequality, we obtain correlation inequalities for the Gaussian distribution. We show that the expected value of the square of the correlation function always has a finite lower bound at any temperature. As a consequence, we prove that the spin-glass order parameter has a finite lower bound in the Ising model with a Gaussian random field, regardless of the form of the other interactions.
The organization of the paper is as follows. In Sec. II, we define the model and obtain the lower bound on the average of local energy for the Ising model with quenched randomness. In Sec. III, attained inequality is applied when the randomness of interactions follows the Gaussian distribution. Finally, our conclusion is given in Sec. IV.
II. LOWER BOUND ON LOCAL ENERGY FOR ASYMMETRIC DISTRIBUTION OF RANDOMNESS
Following Ref. [12], we consider a generic form of the Ising model, where V is the set of sites, the sum over B runs over all subsets of V among which interactions exist, and the lattice structure may take any form. The probability distribution of the random interaction J_B is denoted P_B(J_B). The probability distributions may generally differ from one another, and distributions without randomness are also allowed. The correlation function for a fixed set of interactions {J_B}, the configurational average over the distribution of the randomness, and, for example, the expected value of the correlation function are defined in the standard way. Our result is the following theorem.
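For readability, the definitions implied by this description can be sketched in conventional notation (a hedged reconstruction; the paper's own equation numbering, symbols and normalisation may differ):

\[ H(\{\sigma\}) = -\sum_{B} J_B \sigma_B, \qquad \sigma_B \equiv \prod_{i \in B} \sigma_i, \]
\[ \langle \sigma_A \rangle_{\{J_B\}} = \frac{\sum_{\{\sigma\}} \sigma_A \, e^{-\beta H}}{\sum_{\{\sigma\}} e^{-\beta H}}, \qquad \mathbb{E}\big[\langle \sigma_A \rangle\big] = \int \Big(\prod_{B} \mathrm{d}J_B \, P_B(J_B)\Big)\, \langle \sigma_A \rangle_{\{J_B\}}. \]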
Theorem 1. When the distribution function of the randomness satisfies the stated condition for any even function f(J_A) ≥ 0, the system defined above satisfies the inequality given in Eq. (7). We note that the right-hand side of Eq. (7) does not depend on the other interactions. When the distribution function is symmetric, the bound coincides with the existing result of Ref. [12]. In this case an intuitive explanation of the inequality is possible: the local energy is always larger than or equal to the energy in the absence of all other interactions. In the general case, however, it is difficult to find an intuitive explanation, because we cannot assign a simple physical meaning to the second term on the right-hand side of Eq. (7).
Proof. We define Z(β, J_A) and ⟨σ_A⟩_{J_A}, and note a relation in which Γ(β, J_A) appears. Since Eq. (11) is the reciprocal of Eq. (12), we obtain Eq. (13). From Eq. (13), we then immediately find Eq. (14).
Furthermore, for any even function f, a chain of identities and inequalities holds, where E[···] stands for the configurational average over the randomness of the interactions other than J_A; we used Eq. (14) in the third identity and Eq. (12) in the last inequality. Thus, Eqs. (15) and (16) give Eq. (7).
III. APPLICATION TO GAUSSIAN SPIN-GLASS MODEL
In this section, we apply Eq. (7) to the spin-glass model with Gaussian randomness. First, we consider the special case P_A(J_{0,A} − J_A) = P_A(J_{0,A} + J_A), i.e., a distribution symmetric about J_{0,A}. Then we obtain the following result.
Corollary 2. When the distribution function of the randomness satisfies the above symmetry condition for any even function f(J_A) ≥ 0, the system defined above satisfies the inequality given in Eq. (18). Proof. If we regard P_A(J_{0,A} + J_A) as a new probability distribution, this new distribution is symmetric. Therefore, using Eq. (7) with β_NL = 0, we prove Eq. (18).
In the following, using Eq. (18), we obtain several inequalities.
A. Correlation inequality for Gaussian spin-glass
Next, we consider the case where all of the interactions follow Gaussian distributions with mean J_{0,B} and variance Λ_B^2. Each J_{0,B} and Λ_B^2 may take a different value. We denote the configurational average over the distribution of the randomness of the interactions as E[···]_{J_{0,B},Λ_B^2}. Then, we obtain the following result. Corollary 3. For the expected value of the square of the correlation function, we obtain the lower bound given in Eq. (19). We note that the left-hand side of Eq. (19) is independent of the means {J_{0,B}}.
Proof. For the Gaussian distribution with mean J_{0,B} and variance Λ_B^2, and with f(J_A) = 1, Eq. (18) reduces to a simpler expression; using integration by parts, we then obtain Eq. (19). A similar calculation is possible for higher-order terms. Taking f(J_A) = J_A^2 in Eq. (18), and using integration by parts together with Eq. (19), we obtain a lower bound on the expected value of the fourth power of the correlation function. It is therefore expected that an analogous relation holds for any natural number k; however, we have not obtained a general proof or a counterexample.
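The integration-by-parts step invoked here rests on the standard Gaussian identity (Stein's lemma), quoted in general notation for reference; its application to the specific correlation functions of this model is an assumption about the omitted algebra:

\[ J \sim \mathcal{N}(J_0, \Lambda^2) \;\Longrightarrow\; \mathbb{E}\big[(J - J_0)\, f(J)\big] = \Lambda^2 \, \mathbb{E}\big[f'(J)\big] \quad \text{for differentiable } f \text{ of at most polynomial growth}. \]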
B. Lower bound on the spin-glass order parameter in the Gaussian random-field Ising model
Finally, we show that the spin-glass order parameter in the Ising model with a Gaussian random field always takes a finite value at any temperature, regardless of the form of the other interactions.
We consider the case where a random field {h_i} is applied independently to every site and each h_i follows the Gaussian distribution with mean J_0 and variance Λ^2. The Hamiltonian is given by Eq. (24), where the interactions J_B other than {h_i} may take any form. Then, Eq. (19) reduces to a single-site inequality. Furthermore, because the same inequality holds for all sites, we obtain the following result.
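In conventional notation, the Hamiltonian and the spin-glass order parameter discussed here take roughly the following form (a sketch; the exact expression in Eq. (24) and the normalisation of q in Eq. (27) may differ):

\[ H = -\sum_{B} J_B \sigma_B - \sum_{i \in V} h_i \sigma_i, \qquad q = \frac{1}{|V|} \sum_{i \in V} \mathbb{E}\big[ \langle \sigma_i \rangle^2 \big]. \]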
Corollary 4.
For the spin-glass order parameter q, the system (24) satisfies the inequality given in Eq. (27). Thus, when a Gaussian random field is applied, the spin-glass order parameter generally has a non-zero lower bound. In ferromagnetic models, the ferromagnetic order parameter, that is, the magnetization, takes a finite value when a magnetic field is applied; Eq. (27) implies that a similar phenomenon occurs in the Ising model with a Gaussian random field. This is a natural consequence, but the existence of a finite lower bound is not obvious.
In addition, we note that Eq. (27) does not mean that there is a spin-glass phase in the Ising model with a Gaussian random field.
IV. CONCLUSIONS
We have obtained a lower bound on the local energy for the Ising model with quenched randomness. We emphasize that the obtained inequality (7) is independent of the other interactions. Our result is a natural generalization of Ref. [12], where symmetric distributions were considered.
Applying the obtained inequality to the Gaussian spin-glass model, we find that the expected value of the square of the correlation function always has a finite lower bound at any temperature. As a consequence, the spin-glass order parameter in the Ising model with a Gaussian random field always takes a finite value at any temperature, which is a natural but not obvious result.
It is an interesting question whether an inequality similar to Eq. (19) holds for general distribution functions of the random interactions. Our proof relied on properties of the Gaussian distribution, and we have not found a proof for other distributions. | 2,270.8 | 2020-01-29T00:00:00.000 | ["Physics"] |
Improving Thermochemical Energy Storage Dynamics Forecast with Physics-Inspired Neural Network Architecture
Thermochemical Energy Storage (TCES), specifically the calcium oxide (CaO)/calcium hydroxide (Ca(OH)2) system, is a promising energy storage technology with relatively high energy density and low cost. However, the existing models available to predict the system's internal states are computationally expensive. An accurate and real-time capable model is therefore still required to improve its operational control. In this work, we implement a Physics-Informed Neural Network (PINN) to predict the dynamics of the TCES internal state. Our proposed framework addresses three physical aspects to build the PINN: (1) we choose a Nonlinear Autoregressive Network with Exogenous Inputs (NARX) with deeper recurrence to address the nonlinear latency; (2) we train the network in closed loop to capture the long-term dynamics; and (3) we incorporate physical regularisation during training, calculated from discretized mole and energy balance equations. To train the network, we perform numerical simulations on an ensemble of system parameters to obtain synthetic data. Even though the suggested approach yields an error of 3.96 × 10−4, which is in the same range as the result without physical regularisation, it is superior to conventional Artificial Neural Network (ANN) strategies because it ensures physical plausibility of the predictions, even in a highly dynamic and nonlinear problem. Consequently, the suggested PINN can be further developed for more complicated analyses of the TCES system.
Thermochemical Energy Storage
Energy storage systems have become increasingly important in the shift towards renewable energy because of the fluctuation inherent to renewable energy generation [1,2]. Thermochemical Energy Storage (TCES) stores and releases energy in the form of heat as chemical potential of a storage material through a reversible endothermic/exothermic chemical reaction. TCES is favourable compared to sensible and latent heat storage [3][4][5] because it features a high energy density, low heat losses and the possibility to discharge the system at a relatively high and constant output temperature [6].
In general, the chemical processes occurring on the storage material can be classified into: redox of metal oxides, carbonation/decarbonation of carbonates and hydration/dehydration of hydroxides [14]. The choice of materials depends on many criteria, one of which is the application of the energy storage. For example, in integration with Concentrated Solar Power (CSP) plants, manganese oxide is not suitable because of its high reaction temperature [6,15]. Another important aspect to consider is the practicability of the process; for example, in a calcium carbonate system, CO 2 as the side effect of the reaction has to be liquefied and results in a high parasitic loss [6,14]. Additionally, there are many more criteria to consider, such as cyclability, reaction kinetics, energy density and, most importantly, safety issues. For comprehensive reviews of varying storage materials, we refer to [14][15][16].
Recently, experimental investigations have been conducted specifically for the calcium oxide (CaO)/calcium hydroxide (Ca(OH)2) system. One experiment investigated the material parameters (such as heat capacity and density) and the reaction kinetics [17], another focused on the operating range, efficiency and cycling stability of the system [18], and a further experiment addressed the feasibility of integration with concentrated solar power plants [19]. All these experiments show that CaO/Ca(OH)2 is a very promising candidate as a TCES storage material. Furthermore, it is more attractive than other storage materials because it is nontoxic, relatively cheap and widely available [20,21]. The system stores heat (is charged) during the dehydration of Ca(OH)2 by injecting dry air at a higher temperature. Charging corresponds to an endothermic reaction along with the formation of H2O vapour and a lower temperature at the outlet. The system releases heat (is discharged) during the hydration of CaO. This is achieved by injecting air with higher humidity (H2O content) and a relatively lower temperature, resulting in an exothermic reaction (see Figure 1). Note that in this case the hydration process occurs at a lower temperature than the dehydration process, but both processes occur at high operating temperatures [22]. The reversible reaction is written as CaO(s) + H2O(g) ⇌ Ca(OH)2(s) + ∆H_R. A robust operational control of this system needs an accurate and real-time capable model to predict its state of charge and health. Similar models are used operationally, for example for batteries in mobile devices [23,24]. Accordingly, numerical TCES modelling studies were conducted to predict the system's behaviour [6,18,20,21,25]. However, the PDEs that describe the system are dynamic, highly nonlinear and strongly coupled, making the numerical simulation computationally expensive. This poses a significant hindrance to a more thorough and complex analysis of the TCES system. Estimation of the system's state of health, for example, requires a 2D or 3D model to study the effect of structural changes due to agglomeration [26]. With increasing spatial dimension, the computational time also increases strongly. Consequently, the system is not ready for commercial and industrial use until a faster yet accurate model is developed. In this work, we consider using an Artificial Neural Network (ANN) as a cheaper alternative to the expensive existing models.
Physics-Inspired Artificial Neural Networks
Artificial Neural Networks (ANNs) have been studied and applied intensively in the past few decades. They have become very popular alongside linear regression and other techniques such as Gaussian Process Regression (GPR) and Support Vector Machines (SVM) [27]. ANNs are more flexible and better suited to modelling nonlinear problems than linear regression and GPR [28], and they scale better to large datasets than SVMs [29]. However, a detailed performance comparison of ANNs with other machine learning techniques is out of the scope of this paper.
ANNs have a wide range of applications, such as image and pattern recognition, language processing, regression problems and data-driven modelling [30]. In this paper, we focus on data-driven modelling, where an ANN is trained to predict the physical behaviour of a TCES system based on available data. ANNs have been used for data-driven modelling in different fields. In hydrology [31], ANNs have been successfully applied, for example, to predict rainfall-runoff [32,33], groundwater levels [34] and groundwater contamination [35]. Moreover, ANNs have been used in energy system applications [36], for example to predict the performance [37], reliability [38] and design [39] of renewable energy systems. All these examples show that ANNs have the potential to be quick decision-making tools, useful for many engineering and management purposes.
In previous applications of ANNs in data-driven modelling, the ANN was treated as a black box [40,41] that learns only the mathematical relationship between the input and output. In such a process, the physical relationships and scientific findings that were previously used to build governing equations of the modelled systems are completely neglected. This issue is very troublesome and needs to be addressed because real data are noisy with measurement errors, and fitting the ANN to the noisy data without any physical constraint might lead to overfitting problems [30]. Additionally, in many cases, observation data is difficult and expensive to obtain, providing users with only a limited amount of data to train the ANN. Without any physical knowledge, ANNs perform poorly when trained with a low amount of data [42,43]. Furthermore, ANNs have a very poor interpretability [44,45], meaning that there might be different combinations of ANN elements (width, depth, activation functions and parameters) that fit the training data with similar likelihood, but not all of them are physically meaningful and robust. As a result, ANN predictions might be misleading.
Implementing physical knowledge to build the ANN structure and regularisation is a potential solution to solve this issue. By combining a black box model with a white box (fully physical) model, we obtain a grey box model. In such an approach, physical theories are used in combination with observed data to improve the model prediction and plausibility [46]. Moreover, the data will help to include complex physical processes that may not be captured in currently existing white box models. There are at least two motivations to do so: to obtain a reliable surrogate model for the physics-based model for the sake of speed in real-time environments and to address situations where the underlying physics of the system are incompletely understood, so that ANNs can build on, and later exceed, the current state of physical understanding.
Several works have been conducted to develop the so-called Physics Inspired Neural Networks (PINN). In general, PINNs can be grouped into two distinguishable motivations as mentioned above. The first one applies ANNs to infer the parameter values in the governing Ordinary Differential Equations (ODEs) or Partial Differential Equations (PDEs) as well as the constitutive relationships and the differential operators [43,47], assuming that the ODEs or PDEs perfectly describe the modelled system. The second one treats the system as a complex unit that is not sufficiently represented only with simplified equations. It trains the ANN based on observation data while constraining the ANN using physically-based regularisation [42,48,49]. We aim mostly at the second motivation and ask ourselves how much of the useful knowledge contained in the PDEs can be used to inform ANNs before proceeding to train them on observation data.
Despite the success of PINN implementations in this direction, there are still some open issues that need to be addressed: (I) There is no well-defined alignment between the structure of the governing equations with the structure of the ANN. For example, most, if not all of the applications are for dynamic systems. Nevertheless, the structures of the ANNs applied do not resemble the dynamic behaviours of the systems and do not consider recurrency. (II) The focus in PINN development is more on getting high accuracy with limited amount of training data rather than improving the physical plausibility of the predictions. (III) The implementations of PINNs in previous works are mainly for relatively simple problems, and implementation to more complex problems (featuring multiple nonlinear coupled equations) has not been evident yet. In our current study, we address these three open issues.
Approach and Contributions
For dynamic and complex systems with coupled nonlinear processes such as the TCES system, we need an advanced approach to solve it using ANNs. Our approach implements physical knowledge of the system into building the ANN such as: (I) we use a Nonlinear Autoregressive Network with Exogeneous Inputs (NARX) structure. This is a form of Recurrent Neural Network (RNN), and we use deeper recurrence to account for the system's long-term time scales and nonlinear dynamics; (II) we train the network with recurrence structure to improve the long-term predictions in the dynamic system; and (III) we add physical regularisation terms in the objective function of training to enforce physical plausibility of the predictions.
NARX is suitable to model time series of sequential (time-dependent) observations y(t) [50,51], which are equispaced time series. There are several reasons why NARX is preferable to alternative ANN structures: the included feedback loop in NARX enables it to capture long-term dependencies [34] and the possibility to provide exogenous inputs improves the results compared to networks without them [52]. Thinking in terms of PDEs, the exogeneous inputs resemble time-dependent boundary conditions, and the feedback provides access to preceding time steps of the PDE solution. With deeper recurrence, even integro-differential equations can be resembled, which is important for hysteretic systems or for system descriptions on larger scales.
There are two different methods to train NARX, namely Series-Parallel (SP, also known as open-loop) and Parallel (P, also known as closed-loop). In SP training, each time step in the time series is used as an independent training example. This means the recurrency in the ANN structure is ignored, and the preceding data values from the time series are provided as feedback inputs instead of the predicted values. The feedback loop is closed only after completing the training to perform multistep ahead predictions [52,53]. The independency of the training examples makes the training much easier; however, the trained network performs much worse after closing the feedback loop [52].
Most, if not all studies conducted with NARX have used SP structure to train the network. In this paper, we argue that P training resembles the dynamic system better. The reason is that P training optimizes the ANN exactly for the later prediction purpose over longer time horizons: it accounts for error propagation over time and for the time-dependency of the predictions between time steps. As a downside, it requires more time to train the network in P mode. We are readily willing to accept this trade-off, because once trained, the network can still calculate its outputs in high speed [54]. We also propose to use a deeper recurrency to train the network by feeding back predictions of multiple preceding time steps. This accounts for the nonlinearity of the system and for possible higher-order memory effects in the system. In terms of PDE-governed systems, this corresponds to the time delay between system excitations at one system boundary and the system's reaction at a remote boundary.
Regularisation in training ANNs is useful to prevent overfitting. Here, as an addition to the commonly used L2 regularisation, simple regularisation terms are added that align with the physics. Several examples include, but are not limited to, monotonicity and non-negative values (examples in this case are volume and mole fractions) of the internal states. This follows the works presented in [46,55]. We suggest to use Bayesian Regularisation (BR) to optimally calculate the hyperparameters (normalising constants) of all terms in the loss function, unlike in previous works, where the hyperparameters were calibrated manually. Furthermore, regularisation terms with discretized balance equations are also used. This regularisation is a way to feed the training with fundamental human knowledge previously used to build PDEs. It helps the network realise the extensive relationships between inputs (previous states and boundary conditions) and outputs (future states of the system) in complex problems and to prevent physically implausible predictions.
To test our ANN framework, a Monte Carlo ensemble of numerically simulated time series of system states is used, which we generate from random samples of uncertain system parameters. White noise is then added to the simulation results to emulate the actual noisy measurement data. Then, this ensemble (both parameters and time series) is used to train the network. We use synthetic data instead of experimental data because the former allow more exhaustive and controlled testing; this does not imply that our main purpose is only surrogate modelling. As optimization algorithm for training, the Levenberg-Marquardt (LM) algorithm [56][57][58] is implemented to obtain an optimum set of NARX parameters (which consist of the so-called weights and biases).
This paper is organised as follows: in Section 2 we introduce the governing equations used for numerical simulation of the TCES internal states, the alignment between the dynamic of CaO/Ca(OH) 2 and the NARX structure, as well as how we implement the physical knowledge into the regularisation. In Section 3, we discuss the results of our test, and Section 4 concludes the findings in the work.
Governing Equations
This study serves as an initial step towards enabling a more complex analysis of the TCES system, focusing on predictions of the system's dynamic internal states that change during the endothermic/exothermic reaction process. The analysis of the system's integration with the energy source is out of scope of this paper.
To set up the prediction model, we consider the CaO/Ca(OH) 2 TCES lab-scale reactor of 80 mm length along the flow direction as described in [20]. Assuming the system properties and parameters to be homogeneous, the simulation was conducted in 1D. The system was modelled as a nonisothermal single-phase multicomponent gas flow in porous media with chemical reaction acting as the source/sink terms and can be described using mole and energy balance equations. The inlet temperature and outlet pressure were fixed and defined with Dirichlet boundary conditions, and Neumann conditions were used to define the gas injection rates. The solid components forming the porous material are CaO and Ca(OH) 2 , and the gases are H 2 O and N 2 . The latter serves as an inert component to regulate the amount of H 2 O mole fraction in the injected gas. Full explanation in detail can be found in [20], and we offer a brief overview only in this section.
The mole balance equation for the solid components (subscript s) is given by Eq. (2), where ρ_n denotes the molar density, ν the volume fraction, q the source/sink term, t the time and the subscript n refers to molar properties. Note that ν_s is the volume fraction of each solid component with respect to the full control volume, and therefore Σ_s ν_s = 1 − φ. In Equation (2), there are no advective or diffusive fluxes because the solid is assumed to be immobile; the change of the solid components is caused solely by the chemical reaction through the reaction source/sink term q_s. The change in the gas components (subscript g), in contrast, is affected both by advective and diffusive mass transfer and by a source/sink term for the reactive component H2O, as defined in the gas mole balance equation (3). Here, x denotes the mole fraction, φ the porosity, K the absolute permeability of the porous medium, µ the gas viscosity, p the pressure and D the effective diffusion coefficient.
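A sketch of balance equations consistent with the quantities named above is given below; this is a hedged reconstruction in standard porous-media notation, and the exact formulation of Eqs. (2) and (3) in [20] may include additional terms or different sign conventions:

\[ \frac{\partial (\rho_{n,s}\, \nu_s)}{\partial t} = q_s, \qquad \frac{\partial (\phi\, x_{g,\kappa}\, \rho_{n,g})}{\partial t} - \nabla \cdot \Big( x_{g,\kappa}\, \rho_{n,g}\, \frac{K}{\mu} \nabla p \Big) - \nabla \cdot \big( D\, \rho_{n,g}\, \nabla x_{g,\kappa} \big) = q_{\kappa}, \]

with κ ∈ {H2O, N2} and q_κ nonzero only for the reactive component H2O.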
The energy balance equation (4) was formulated assuming local thermal equilibrium. It accounts for the internal energy change of both the solid and the gas phase, convective and conductive heat fluxes, as well as the source/sink term from the reaction. Here, ρ_m is the mass density, u_g the gas specific internal energy, c_p,s the specific heat capacity of the solid materials (CaO and Ca(OH)2), T the temperature, h_g the gas specific enthalpy and λ_eff the average thermal conductivity of the solid materials and gas components. Reaction rates must be specified to determine the source/sink term of each equation. Based on [20,21], simple reaction kinetics were used, with ρ̇_m,SR the mass reaction rate, k_R^H and k_R^D the hydration and dehydration reaction constants, respectively, and T_eq the equilibrium temperature. The hydration process occurs when T < T_eq; it is also called the discharge process and is the exothermic part of the reaction. The dehydration process occurs when T > T_eq; it is also known as the charge process and is the endothermic part of the reaction. At the beginning of each reaction, the storage device is assumed to be in chemical equilibrium, corresponding to ν_Ca(OH)2 = 0 for hydration and ν_CaO = 0 for dehydration.
The relations between the reaction rate and the source/sink terms of the mole balance equations were defined with ρ̇_n,SR the molar reaction rate (obtained from ρ̇_m,SR using the molar mass of each respective component). The energy balance source/sink term q_e was calculated accounting for the reaction enthalpy ∆H and the volume expansion work [59]. Note that a negative sign is necessary in the calculation of q_e, so that its value is proportional to q_Ca(OH)2 and of opposite sign to q_H2O and q_CaO. This negative sign reflects the fact that, to form Ca(OH)2 from CaO and H2O in the hydration process, energy is released into the system. Correspondingly, a decrease in the molar amounts of CaO and H2O (and an increase in the molar amount of Ca(OH)2) results in a positive source term. The opposite holds for the dehydration process.
Input and Output Variables
The numerical model used in this work was developed using DuMux (Distributed and Unified Numerics Environment for Multi-{Phase, Component, Scale, Physics, ...} [60]). As input to the simulator, we need material parameters such as the CaO density (ρ_CaO), Ca(OH)2 density (ρ_Ca(OH)2), CaO specific heat capacity (c_p,CaO), Ca(OH)2 specific heat capacity (c_p,Ca(OH)2), CaO thermal conductivity (λ_CaO) and Ca(OH)2 thermal conductivity (λ_Ca(OH)2); porous medium parameters such as the absolute permeability (K) and porosity (φ); reaction kinetics parameters such as the reaction rate constant (k_r) and specific reaction enthalpy (∆H); and initial and boundary conditions such as the N2 molar inflow rate (ṅ_N2,in), H2O molar inflow rate (ṅ_H2O,in), initial pressure (p_init), outlet pressure (p_out), initial temperature (T_init), inlet temperature (T_in) and initial H2O mole fraction (x_H2O,init). In the TCES application, one of the main goals is to estimate the state of charge of the device, which is reflected in the CaO volume fraction ν_CaO; the device in fully charged condition corresponds to ν_CaO = 1 and vice versa. We are also interested in the output variables p, T and x_g,H2O (the H2O mole fraction). The behaviour of these variables, especially p, is very nonlinear, so it is interesting to see how well the ANN predicts them; additionally, they are important for understanding the system. Therefore, our main output variables of interest were collected in the vector y(t) = [ν_CaO(t), p(t), T(t), x_g,H2O(t)]^T. All input-output data samples are available as supplementary material at https://doi.org/10.18419/darus-633.
Aligning the ANN Structure with Physical Knowledge of the System
The ANN representation via NARX has two different training architectures, namely the Series-Parallel (SP) and the Parallel (P) structure. The network output ŷ_SP(t + 1) of the SP structure is a function of the observed target values of the previous time steps up to a feedback delay d_y and of the so-called exogenous inputs u: ŷ_SP(t + 1) = f(u, y(t), y(t − 1), . . . , y(t − d_y)). In this work, u was assumed to be constant over time, meaning there is no disturbance signal throughout the whole simulation period.
In the P structure, the difference lies in the fed-back values: the network outputs ŷ_P of the P structure are fed back instead of the original given data y(t), i.e. ŷ_P(t + 1) = f(u, ŷ_P(t), ŷ_P(t − 1), . . . , ŷ_P(t − d_y)). Note that, in terms of notation, the difference only lies in the hats above the fed-back values. The P structure in NARX resembles an explicit time-discrete differential equation (ODE or PDE) in a simplistic case, for example using the Adams-Bashforth discretization scheme [61,62], which can be written as ŷ(t + 1) ≈ ŷ(t) + ∆t · g(u, ŷ(t), ŷ(t − 1), . . . , ŷ(t − d_y)) (11), where ŷ(t + 1) is an explicit function of ŷ(t), . . . , ŷ(t − d_y). The NARX mapping f in Equation (10) can thus be seen as an approximation of the right-hand side of Eq. (11). For this reason, we propose to train using the P structure for dynamic problems whenever possible. Additionally, training in the P structure helps the network to learn that the predicted values at different time steps depend on one another. While both architectures were considered for NARX training, only the P architecture was used for testing, because for longer-term forecasting real data of previous time steps are not available [52]. For a better understanding of the difference between P and SP, Figure 2 illustrates both architectures. The feedback delay is also an important property, because Equations (2)-(4) are not elliptic, and hence there is a time delay before an input change affects the output. Because of this memory effect and its nonlinearity, we propose to use a deeper recurrence in NARX to enable the network to learn the system's latency. In this work, feedback delay values d_y ranging from 1 to 5 were tested to find the optimum value. In addition to an appropriate ODE-like structure, the hyperbolic tangent (tanh) function was chosen as the activation function within the neurons of the NARX network. Tanh is a nonlinear activation function (named the tansig function in MATLAB [63]). Aside from the nonlinearity, this choice was driven by the assumption that all input parameters and targets depend on each other via smooth (differentiable) functions. This knowledge results, among others, from the presence of a diffusion term in all relevant transport equations and from the absence of shock waves in the solutions of Equations (2)-(4). Hence, for each hidden layer l shown in Figure 2, the layer output a^[l] is computed in a feedforward procedure of the form a^[l] = tanh(W^[l] a^[l−1] + b^[l]), while for the output layer L a linear function is assigned for scaling, leading to a^[L] = W^[L] a^[L−1] + b^[L].
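To make the SP/P distinction concrete, the following minimal Python sketch illustrates the two feedback modes; it is an illustration only, not the authors' MATLAB implementation, and `narx_step` is a hypothetical trained one-step mapping from the exogenous inputs and the fed-back states to the next state.

import numpy as np

def rollout_closed_loop(narx_step, y_init, u, n_steps, d_y):
    """P (closed-loop) rollout: the network's own predictions are fed back."""
    y_hat = list(y_init[-d_y:])                    # seed with d_y known initial states
    for _ in range(n_steps):
        feedback = np.concatenate(y_hat[-d_y:])    # last d_y predicted state vectors
        y_hat.append(narx_step(u, feedback))       # one-step-ahead prediction
    return np.array(y_hat[d_y:])

def rollout_open_loop(narx_step, y_measured, u, d_y):
    """SP (open-loop) pass: measured data are fed back at every step."""
    preds = []
    for t in range(d_y, len(y_measured)):
        feedback = np.concatenate(y_measured[t - d_y:t])
        preds.append(narx_step(u, feedback))
    return np.array(preds)

In the closed-loop rollout, prediction errors propagate from one step to the next, which is exactly the behaviour that P training optimises for.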
Physical Constraints in the Training Objective Function
As the loss function for training, the most commonly used performance measure is the Mean Squared Error (MSE). Equation (14) gives the MSE for n training datasets (here, we use the subscript D for "data" to label the error term E_D in the loss function as the data-related term), where i = 1 . . . n indicates a specific sample in the training dataset and t = 1 . . . n_t indicates a specific time step. However, using the MSE alone in the loss function is often not sufficient: the optimization problem to be solved in training is typically ill-posed. Thus, regularisation is required to prevent overfitting. In this work, the L2 regularisation method was used to increase the generalisation capability of the ANN [64]. L2 regularisation is also known as weight decay or ridge regression [65]. The goal of L2 regularisation is to force the network to have small parameter values (choosing the simpler network over the more complex one). This effectively adds a soft constraint to the loss function that prevents the network from blindly fitting possible noise in the training datasets, where N is the total number of network parameters (weights and biases) and θ ∈ R^N are the network parameters.
To improve the network prediction and its physical plausibility even further, known physical laws were inserted as part of the network regularisation, where the subscript k identifies a specific physical law, for example a mole balance equation, and e_phy,i,t,k is the physical error listed in Table 1. For example, the term e_phy,i,t,1 corresponds to the mole balance equation for dehydration/hydration. The mole balance used for this regularisation is the H2O mole balance, because it has the most complete set of storage, flux and source/sink terms (the solid components are assumed to be immobile, and N2 is inert). The mole balance error involves n_H2O, the molar amount of H2O, where the subscripts out, in, sto and q denote the outflow, inflow, storage and source/sink terms, respectively. The mole balance error was used as the constraint e_phy,i,t,1 and is equal to 0 if the mole balance is fulfilled. Including this equation as a regularisation term penalises the network if the mole balance is not satisfied. Similarly, the corresponding energy balance equation also has to be fulfilled, where Q is the energy in the system; it was used as the regularisation term e_phy,i,t,2. A more detailed derivation of the mole balance error e_phy,i,t,1 and the energy balance error e_phy,i,t,2 can be found in Appendix B.
(Table 1: physical error terms e_phy,i,t,k for the dehydration and hydration processes.)
Further relations of the form F(ŷ) ≤ 0 (monotonicity and non-negativity) were implemented using the Rectified Linear Unit (ReLU) function, so that the physical error is calculated as e_phy,i,t,k = ReLU(F(ŷ)) [46]. The ReLU function returns 0 for negative arguments and increases linearly for positive arguments; hence, it penalises positive values in proportion to their magnitude. Examples of these ReLU constraints are e_phy,i,t,3 through e_phy,i,t,13. They enforce non-negativity and monotonicity of the predicted target variables. For both the dehydration and hydration process, negative fractional values ν̂_CaO and x̂_H2O are physically and mathematically impossible. Therefore, in e_phy,i,t,3 and e_phy,i,t,4, the network is penalised for predicting negative values of these targets. Additionally, for both processes, e_phy,i,t,5 provides an additional constraint on ν̂_CaO, limiting the CaO volume in relation to the porosity (ν̂_CaO ≤ 1 − φ). All these monotonicity assumptions originate from the fact that the system's material parameters are considered constant throughout operation; therefore, the system's behaviour should be monotonic and bounded in the specified aspects. Specifically for the dehydration process, p̂, T̂ and ν̂_CaO are expected not to decrease throughout the simulation, which results in the corresponding monotonicity constraints e_phy,i,t,6 to e_phy,i,t,8. The system temperature must also be lower than or equal to the injected temperature, as constrained in e_phy,i,t,9, because the injected temperature is higher than the initial temperature. For the hydration process, the monotonicity constraints e_phy,i,t,6 to e_phy,i,t,9 are reversed, because hydration is the reverse of dehydration.
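As an illustration of how such a composite loss could be assembled, the Python sketch below combines the data misfit, an L2 weight penalty and ReLU-type physical penalties. It mirrors the structure described above rather than the exact MATLAB implementation: the variable names are hypothetical, the fixed hyperparameter values stand in for the Bayesian-regularised ones, and the balance-equation residuals are assumed to be supplied by a separate routine.

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def physics_penalties(y_hat, phi):
    """ReLU penalties for non-negativity and bound constraints (dehydration sketch).
    y_hat: array of shape (n_t, 4) with columns [nu_CaO, p, T, x_H2O]."""
    nu_cao, x_h2o = y_hat[:, 0], y_hat[:, 3]
    return [
        relu(-nu_cao),                 # nu_CaO must be non-negative
        relu(-x_h2o),                  # x_H2O must be non-negative
        relu(nu_cao - (1.0 - phi)),    # nu_CaO <= 1 - porosity
        relu(-np.diff(y_hat[:, 2])),   # temperature non-decreasing during dehydration
    ]

def total_loss(y_hat, y_obs, theta, balance_residuals, phi,
               alpha=1e-3, beta=1.0, lam=1e-2):
    """Composite loss: data misfit + L2 weight decay + physical penalties."""
    e_data = beta * np.mean((y_hat - y_obs) ** 2)      # MSE term E_D
    e_l2 = alpha * np.mean(theta ** 2)                 # L2 term E_theta
    e_phy = lam * (np.mean(np.square(balance_residuals))
                   + sum(np.mean(np.square(p)) for p in physics_penalties(y_hat, phi)))
    return e_data + e_l2 + e_phy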
Obtaining Optimum Network Parameters
The complete loss function defined in Section 2.4, including the MSE and all regularisation terms, can be written as L(θ) = β E_D + α E_θ + Σ_k λ_k E_phy,k. Here, α and β are the normalising constants of E_θ and E_D, respectively, and λ_k is the normalising constant of each physical regularisation term k. All error and regularisation terms are therefore evaluated in a normalised metric. These normalising constants, also known as hyperparameters, determine the importance given to each term. For example, a high β means that it is more important for the network to fit the training datasets than to generalise well. In many cases, the hyperparameters are determined manually by trial and error. In this work, Bayesian Regularisation was adopted to calculate all of them using a maximum likelihood approach that minimises L(θ) [66,67]. Bayesian Regularisation reduces the subjectivity arising from a manual choice of the hyperparameters. First, all hyperparameters α, β and λ_k along with the network parameters θ were initialised. The hyperparameters were initialised by setting β = 1, α = 0 [68] and λ_k = 0, while the network parameters were initialised using the Nguyen-Widrow method [69,70]. The Nguyen-Widrow method initialises the network parameters so that each neuron contributes to a certain interval of the whole output range (with some random values added).
The complete derivation of the updates for α and β can be found in [66,67]; here, we only give the calculation of λ_k. After each iteration, the λ_k are updated according to a relation involving J_phy,k, the Jacobian of the physical error E_phy,k with respect to the network parameters θ. The approximate Hessian matrix H of the overall loss function L(θ) is defined in terms of J_D, the Jacobian of the MSE (E_D) with respect to the network parameters, and I, the N × N identity matrix (N is the number of network parameters θ).
The network parameters are also updated after each iteration according to the Levenberg-Marquardt algorithm, where µ > 0 is the algorithm's damping parameter. Its value is increased when an iteration step is not successful and decreased otherwise. The Levenberg-Marquardt algorithm was chosen because of its faster convergence compared with steepest descent and its higher stability relative to the Gauss-Newton algorithm [58]. In the absence of our physical regularisation terms and with fixed α and β, the procedure would simplify to a plain (nonlinear) least-squares training, which is the standard approach for training ANNs. Values of the trained network parameters and the normalising constants at the end of the training are provided as supplementary material at https://doi.org/10.18419/darus-634.
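A minimal sketch of a Levenberg-Marquardt update of the kind described here is given below; it assumes a generic residual/Jacobian interface and omits the Bayesian update of the hyperparameters, so it is an illustration rather than the actual toolbox routine.

import numpy as np

def lm_step(theta, residuals_fn, jacobian_fn, mu):
    """One damped Gauss-Newton (Levenberg-Marquardt) step.
    residuals_fn(theta) -> residual vector r; jacobian_fn(theta) -> dr/dtheta."""
    r = residuals_fn(theta)
    J = jacobian_fn(theta)
    H_approx = J.T @ J                 # Gauss-Newton approximation of the Hessian
    g = J.T @ r                        # gradient of 0.5 * ||r||^2
    delta = np.linalg.solve(H_approx + mu * np.eye(len(theta)), g)
    return theta - delta

def train_lm(theta, residuals_fn, jacobian_fn, mu=1e-3, max_iter=500):
    """Adapt mu: decrease it after a successful step, increase it otherwise."""
    loss = lambda th: 0.5 * np.sum(residuals_fn(th) ** 2)
    for _ in range(max_iter):
        candidate = lm_step(theta, residuals_fn, jacobian_fn, mu)
        if loss(candidate) < loss(theta):
            theta, mu = candidate, mu * 0.1
        else:
            mu = mu * 10.0
        if mu > 1e10:                  # stopping criterion quoted in the text
            break
    return theta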
Results and Discussion
Our hypothesis is that applying physical knowledge of the modelled system into the construction of ANNs would lead to an improved physical plausibility of the prediction results. In this section, the prediction of the TCES system using ANNs is assessed and three relevant aspects that support our hypothesis are discussed: (I) the effect of feedback delay on the prediction result to account for the system's nonlinearity and long-term memory effect (Section 3.1), (II) the comparison between training in SP and P architecture (Section 3.2) and (III) the improved physical plausibility from using physical regularisation (Section 3.3). The results are illustrated only for the dehydration process, because the hydration provides very similar results.
The complete workflow of the ANN application is shown in Figure 3. In general, the workflow can be divided into training, validation and testing of the ANN. To train the ANN, an ensemble of exogenous inputs u was first generated from selected probability distributions. These distributions are based on values used in the literature [6,[17][18][19][20][21]. The complete list of exogenous inputs u with their corresponding distributions is given in Appendix A. This ensemble was then fed into the numerical model in DuMux and simulated until t = 5000 s to obtain an ensemble of target data y(t). The governing equations are provided in Section 2.1. White noise was then added to these targets by generating normally distributed random numbers with zero mean and a standard deviation of 0.05 times the target values. Lastly, both exogenous inputs and targets were normalised to the range [−1, 1] to improve the stability of the training [71]. We then set up the NARX ANN as described in Section 2. The training was conducted using the built-in functions for NARX in the MATLAB Neural Network Toolbox [63], in which the loss function calculation was modified based on the equations provided in Section 2.5. It was conducted in batch mode for both the dehydration and hydration process with a total of 100 training datasets.
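The noise injection and scaling steps described above could look as follows in a small Python sketch; this is illustrative only (the authors' pipeline used DuMux and the MATLAB toolbox), and the array names are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def add_white_noise(targets, rel_std=0.05):
    """Add zero-mean Gaussian noise with standard deviation 0.05 * target value."""
    return targets + rng.normal(0.0, rel_std * np.abs(targets))

def scale_to_unit_range(x, x_min, x_max):
    """Linearly map values to [-1, 1] for training stability."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# Example usage on a simulated ensemble y_sim of shape (n_runs, n_timesteps, n_targets):
# y_noisy = add_white_noise(y_sim)
# y_scaled = scale_to_unit_range(y_noisy, y_noisy.min(axis=(0, 1)), y_noisy.max(axis=(0, 1)))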
Without physical regularisation, we obtain the lowest MSE value when the NARX is trained using 1000 training datasets, as shown in Figure 4. However, it is interesting to see how the physical knowledge can further improve the performance of NARX with limited training data. Therefore, we conducted the training in batch mode both for dehydration and hydration process with a total of only 100 training datasets.
For conciseness, the choice of the number of hidden layers, the number of nodes per hidden layer and the activation function is not discussed in detail, because there is no uniform and systematic method to determine the best combination [72]. Based on trial and error, we found that for this specific problem, 2 hidden layers with 15 and 8 nodes, respectively, give reasonable results. An example of an ANN prediction using this configuration is provided in Appendix C. The stopping criteria are a damping factor µ > 10^10 (see Equation (22)) or a loss function gradient g = ∂L(θ)/∂θ < 10^−7, both of which are the default values of the toolbox. Additionally, a maximum number of epochs is set for the training; since the training error mostly converges before 500 epochs, this limit is sufficient. Different initialisation values often lead to different network responses, as the algorithm may fail to locate the global minimum [65]. Therefore, the network was retrained with 50 different initialisations. This number was chosen as a compromise between a reliable result and a reasonable computational time. After each training, the network was validated on 20 validation datasets, and the trained network with the lowest MSE (E_D evaluated against the validation data) was selected.
Finally, the network was tested with data contained neither in the training nor in the validation set on a test set with 800 time series. Figure 5 shows the influence of feedback delay on the MSE evaluated on both the training (dashed lines) and test datasets (solid lines) for networks trained using P structure. As shown in Figure 5, for d y > 1 the test MSE is lower than the training MSE. This is generally because the network was trained using additional white noise-producing more errors in the training, while the test datasets were smooth for reference. In Figure 5, while the training error seems to remain constant at d y > 2, the test error keeps decreasing with increasing d y . This clearly illustrates that including more depth in the recurrence improves the generalisation capability and therefore improves the ANN prediction. As the best MSE test was obtained with d y = 5, this value will be used from here onwards. From Figure 5 we can also analyse that a time delay of at least 3 previous time steps is useful to train the network. Moreover, we do not see the value of using d y > 5, as judging by the MSE trend in Figure 5, there is no significant improvement expected. Figure 6 compares the training time of P compared to SP structure. Moreover, plotting the gradient and performance as function of training epochs allows us to analyse the difference between both training characteristics. As expected and shown in Figure 6 (dashed lines), both SP and P structure training time increases nearly linearly with the number of epochs (iterations). However, the slope is steeper for P training which is caused by higher computational cost using Backpropagation Through Time (BPTT).
SP Versus P Training Structure
In recurrent ANNs, BPTT is used instead of the normal backpropagation method to calculate the derivative of the loss function. BPTT is technically the same as normal backpropagation, the main difference being that the RNN is unfolded through time [30]. The gradient is then backpropagated through this unfolded network. Unfolding the recurrent network increases its size, and therefore the optimization problem becomes computationally more expensive and more difficult to solve.
After every epoch in P training, the output values change, and consequently the feedback values also change. The constantly changing feedback values cause additional changes of the gradient values along the iteration (dotted lines in Figure 6). This makes the training a more nonlinear problem. Correspondingly, the training performance (smaller MSE) increases much slower. In SP training, the gradient strongly decreases during the first 20 epochs, showing that the SP training is more computationally stable. However, the MSE (solid lines in Figure 6) was evaluated during training for the structure the network was trained with, meaning that the training performance does not consider the closed-loop conversion error for SP training. For that reason, the MSE values shown in Figure 6 seem to be better for SP training. Regardless of this difference, both training procedures converge with strictly monotonic decay of their MSE. Next, the prediction performances of both training architectures are compared. In Figure 7, the results of the SP-trained NARX (dashed lines) are shown compared to the target values obtained from the simulation (solid lines) as reference solutions. Here, all target variables are calculated with differing inlet temperature. After a few time steps with relatively precise forecasts, the NARX predictions for T, p and ν CaO diverge from real values and are highly fluctuating over time, which is nonphysical. Note that, in Figure 7, the results are shown only up to time step 100 instead of 1000. This is because the NARX prediction results after time step 100 have even higher fluctuations as the error propagates, hence making the comparison unclear visually. The forecast for x g,H 2 O is reasonably accurate for t < 100 but more erroneous for longer forecast periods. The forecast error is caused by the closed-loop-instability, meaning the inaccuracy caused by converting the network structure from SP to P. In other words, training using SP structure gives increasingly erroneous results with increasing time horizon. On the contrary, training with P structure provides clearly more accurate forecasts compared with SP, as shown in Figure 8. The NARX predictions (dotted lines) correspond really well to the reference targets (blue solid lines) throughout the whole simulation time for 1000 time steps, with inlet temperatures covering the whole range of its input distribution.
The comparison of P and SP structure shows that, while training in P structure seems to be more unstable, it provides significantly better long-term predictions because it trains the network to realise the time-dependency of output variables in a dynamic system. As shown by Equation (11), NARX resembles an explicitly discretized ODE, which is known to be conditionally stable. In cases where the discrete ODE is unstable, the error grows exponentially through time [73]. By training the network using P structure, the same structure is used for both training and testing, hence minimising the error propagation. The higher computing time needed to train in P structure (in comparison with training in SP structure) should not be a problem because the training only needs to be conducted once. Once the optimum network parameters are obtained, NARX can give reasonably accurate predictions with fast computational time. In fact, almost all studies in the area of surrogate models are willing to accept high computational costs during training (called offline costs) if the accuracy and speed for prediction (online) are good [54,74,75].
To make a fair comparison, all networks were trained with the same set of initial network parameters. In this test, we used only P training and feedback delay d y = 5, because they clearly showed preferable performance in the previous sections. The comparison is summarised in Table 2. While the MSE on training data is in a comparable range for all loss functions, differences in MSE test are observed. L2 regularisation helps to reduce overfitting of training data, resulting in a lower test error in "MSE + L2" compared to "MSE". At first glance, the additional physical regularisation does not seem to further improve the results. MSE test of "MSE + PHY" is slightly worse compared to "MSE", and MSE test of "MSE + L2 + PHY" is in the same order with "MSE + L2" because another constraint is added in the objective function, while the performance is measured only based on MSE. Moreover, using only the physical (MSE + PHY) instead of only L2 regularisation (MSE + L2) leads to a test performance decrease.
Even though the performance for both "MSE" and "MSE + L2" are better than "MSE + PHY", they both fail to produce physically plausible predictions in several test datasets (outliers) as shown in Figures 9 and 10 (the label "Reference" for the blue line refers to the synthetic test data obtained from the physical model), the clearest one being negative fraction values of CaO and H 2 O. One important aspect that needs to be considered is that the ANN was trained using only 100 training datasets, compared to almost 500 parameters that exist inside the network. This made the optimisation problem an ill-posed one, leading to clear overfitting in the network with "MSE" and "MSE + L2". Physical regularisation tackles this problem even for relatively sparse training data, which is valuable once experiments are costly, and therefore, not much data are often available to train the network.
Even though it produces the worst overall test performance, physical regularisation alone (MSE + PHY) is able to produce physically plausible results despite no application of L2 regularisation, see Figure 11. The figure illustrates the worst prediction result of all test sets obtained using "MSE + PHY". Even in its worst prediction with high error, the network is much more stable. With the addition of L2 regularisation in the "MSE + L2 + PHY" scheme, the prediction error (MSE) is further reduced so that it lies within the same range as "MSE + L2". The major difference here is illustrated in Figure 12, where the worst prediction result produced by "MSE + L2 + PHY" is far more physically plausible, shown by the absence of unstable fluctuations as well as the relatively higher accuracy.
We trained the ANN using numerical simulation results which indirectly imbues the physics from the formulated governing equations into the ANN. When the ANN was not trained using numerical simulation results but with real observation data (which could follow more complex, scientifically unexplored equations), physical regularisation helps to constrain the ANN training at least to fundamental, confirmed laws and prevent unnecessary overfitting to the irregular and noisy observation data. As such, implementation of the method we present will be even more beneficial for applications with real observation data.
Conclusions
We adopted a PINN framework as an example of grey-box modelling to predict the dynamic internal states of the TCES system. Our approach aligns with the motivation of PINN that sees the modelled system as a complex unit that is underrepresented by the governing equations used in the physical model. We do not construct the ANN only as a surrogate model for the expensive numerical model, unlike in other PINN approaches that use ANN to infer the governing equations of the modelled system or the parameter values, assuming that the physical model describes the system perfectly. Our method was designed for application with real observation data that imply more complex processes than the simplified physical model. In this paper, however, we used synthetic data to train the ANN as a proof of concept.
As a key contribution, we propose to implement physical knowledge about a system into building the structure, choosing the training mode and designing the regularisation of ANNs to assure the physical plausibility and to increase the performance of the TCES dynamic internal state predictions. The alignment between the system's behaviour (dynamic and nonlinear) and the ANN structure is described. The ANN predictions using different regularisation strategies are also compared to show the improvement provided by our method.
We show that, while training in P structure is computationally more expensive and unstable, the result is superior to training using SP structure, because P training resembles the dynamic of the governing differential equations better. Additionally, we found that, due to the nonlinearity and long-term memory effects implied by the system equations, deeper recurrency is necessary. A moderate depth of feedback delay produces better prediction performance, resulting from the network ability to capture the latency of the system. However, using too much feedback delay is also counter-productive, as it does not give significant improvement anymore, only increasing computational cost.
We also show that including physical regularisation to train the network improves the physical plausibility of the network predictions, even for worst-case scenarios. Physical regularisation helps the network to learn about relationships between different input and target variables, as well as the time-dependency between them. This includes mole and energy balance equations that serve as the building blocks of the system's behaviour, along with simple monotonicity and non-negative constraints. However, physical regularisation alone is not enough to improve the generalisation capability of the network, and therefore, L2 regularisation is also necessary.
A very common issue with using ANNs in data-driven modelling is that obtaining experimental or operational data can be very costly, and therefore, there is often no sufficient data available to train the ANN. Our work shows that even with only a relatively small amount of training data (compared to the number of network parameters), using P training with a moderate amount of feedback delay d y , combined with physical regularisation helps to prevent overfitting in optimising ill-posed problems and it produces relatively accurate and physically plausible predictions of the CaO/Ca(OH) 2 TCES system internal states. Further work is required for more sophisticated analysis of the system, for example with spatial distribution of the internal system, dynamic exogeneous input and uncertainty quantification of the predictions.
Conflicts of Interest:
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript. Appendix A. Table A1 lists all of the exogenous inputs with their corresponding distributions. The exogenous input distributions are centred around the values taken from [20]. Table A1. Input distributions for the exogenous inputs, with µ and σ being the mean and standard deviation used to generate the data, respectively; the superscripts D and H refer to the dehydration and hydration process, respectively.
Appendix B. Mole and Energy Balance Error
The physical constraints e_phy,i,t,1 and e_phy,i,t,2 use the mole and energy balance equations, respectively. Both balances are calculated in a simplified way, discretized with time steps of 5 s, using spatially averaged values and assuming local thermal equilibrium between the gas and the solid. For clarity, the sample index i of the training dataset is omitted in this section.
The mole balance is formulated for H2O (assuming that the density can be calculated with the ideal gas law) with the in- and outflowing moles n_H2O,in and n_H2O,out, the storage term in the gaseous phase ∆n_H2O,sto and the source/sink term ∆n_H2O,q. The in- and outflowing moles of H2O are both known from the simulation or the input data. The storage term ∆n_H2O,sto can be calculated from the change in the H2O mole fraction, x̂_g,H2O(t) − x̂_g,H2O(t − 1), multiplied by the H2O molar density and the pore volume: ∆n_H2O,sto(t) = φ V (x̂_g,H2O(t) − x̂_g,H2O(t − 1)) ρ_n,H2O(t). (A1)
The source/sink term ∆n_H2O,q is calculated from the molar amount of CaO formed. Based on the stoichiometric ratio, for every 1 mole of CaO formed, 1 mole of H2O is also formed. The molar amount of CaO is determined by the change in CaO volume fraction, ν_CaO(t) − ν_CaO(t − 1), multiplied by the molar density and the volume. The calculation for ∆n_H2O,q is written as:
$$\Delta n_{\mathrm{H_2O,q}}(t) = V \left(\nu_{\mathrm{CaO}}(t) - \nu_{\mathrm{CaO}}(t-1)\right)\rho_{n,\mathrm{CaO}}. \quad (A2)$$
Finally, Equations (A1) and (A2) are substituted into Equation (17). For the energy balance formulation, the energies of the inflowing and outflowing gas, Q_in and Q_out, are likewise known from the simulation or from input data. The energy storage in the gaseous phase is neglected, as its contribution is negligible; only the solid contribution is used in the calculation of ∆Q_sto. The solid energy change ∆Q_sto(t) is the difference Q_sto(t) − Q_sto(t − 1), where Q_sto(t) is the sensible heat of the solid, i.e., the CaO and Ca(OH)2 masses multiplied by the temperature and their specific heat capacities. The source/sink term for the energy balance equation, ∆Q_q, is calculated from the change in the molar amount of CaO multiplied by the specific reaction enthalpy, minus the volume expansion work; the negative sign corresponds to the definition in Equation (7). Equations (A3) and (A5) are then substituted into Equation (18). Figure A1 shows the best prediction of the ANN using 2 hidden layers with 15 and 8 nodes per layer, trained with only MSE and L2 regularisation in the loss function. Figure A1. An example of the best prediction sample (red) obtained using 2 hidden layers with 15 and 8 nodes per layer, together with the reference solution obtained from the physical model (blue). | 11,823 | 2020-07-29T00:00:00.000 | [
"Physics",
"Engineering",
"Computer Science",
"Materials Science"
] |
A new approach for retrieving the UV–vis optical properties of ambient aerosols
Atmospheric aerosols play an important part in the Earth's energy budget by scattering and absorbing incoming solar and outgoing terrestrial radiation. To quantify the effective radiative forcing due to aerosol–radiation interactions, researchers must obtain a detailed understanding of the spectrally dependent intensive and extensive optical properties of different aerosol types. Our new approach retrieves the optical coefficients and the single-scattering albedo of the total aerosol population over the 300 to 650 nm wavelength range, using extinction measurements from a broadband cavity-enhanced spectrometer at 315 to 345 nm and 390 to 420 nm, extinction and absorption measurements at 404 nm from a photoacoustic cell coupled to a cavity ring-down spectrometer, and scattering measurements from a three-wavelength integrating nephelometer. By combining these measurements with aerosol size distribution data, we retrieved the time- and wavelength-dependent effective complex refractive index of the aerosols. Retrieval simulations and laboratory measurements of brown carbon proxies showed low absolute errors and good agreement with expected and reported values. Finally, we implemented this new broadband method to achieve continuous spectral- and time-dependent monitoring of the ambient aerosol population, including, for the first time, extinction measurements using cavity-enhanced spectrometry in the 315 to 345 nm UV range, in which significant light absorption may occur.
Introduction
Atmospheric aerosols affect Earth's energy balance directly via their interactions with incoming solar radiation and indirectly by altering cloud microphysical and optical properties. The effective radiative forcing (ERF) due to aerosol–radiation interactions (ERFari) encompasses attenuation of solar flux to the surface due to direct scattering and absorption and rapid adjustments of the atmospheric temperature profile (IPCC, 2013). The latter is caused by energy released as heat by light-absorbing aerosols that can influence cloud lifetime (Hill and Dobbie, 2008; Davidi et al., 2009; Allen and Sherwood, 2010; Koch and Del Genio, 2010; Nabat et al., 2014). Irradiation changes from ERFari and the ERF due to aerosol–cloud interactions (ERFaci) are still two of the largest uncertainties in the understanding of anthropogenic radiative forcing (IPCC, 2013).
Radiative transfer models use aerosol optical depth, single-scattering albedo (SSA), the scattering phase function, and the asymmetry parameter to describe the interaction between aerosols and solar radiation.An accurate representation of the complex refractive index (RI; m = n + ik), which is an intensive optical property of aerosol types and components, is required to calculate these parameters.
Aerosol optical properties are typically measured as scattering, absorption, or extinction coefficients (α sca, α abs, and α ext, respectively) by a variety of in situ techniques. These include integrating and reciprocal nephelometry (Nakayama et al., 2010), filter-based absorption measurements (Guyon et al., 2003; Zhang et al., 2013), photoacoustic spectrometry (PAS) (Lack et al., 2012), extinction cells, and cavity-enhanced spectrometry (CES) (Varma et al., 2013). Many of these methods are restricted to a single or a few discrete wavelengths, and their ability to provide wavelength-dependent measurements is limited. Broadband CES instruments were recently developed for aerosol extinction measurements at wavelength ranges of about 30 to 40 nm per cavity (Washenfelder et al., 2013, 2015; Zhao et al., 2013; Flores et al., 2014). White-type extinction cells with a UV–vis light source and a grating spectrometer were recently used for aerosol extinction measurements over a wide range of wavelengths, from below 250 nm up to 700 nm. However, White-type extinction cells suffer from a short optical path length (tens of meters, as opposed to several kilometers in CES instruments), which leads to a detection limit an order of magnitude higher than that of CES (Chartier and Greenslade, 2012; Jordan et al., 2015).
Organic particulate matter that has strong wavelength-dependent light absorption characteristics, with higher absorption at near-ultraviolet and blue wavelengths (Andreae and Gelencser, 2006; Laskin et al., 2015), is known as atmospheric brown carbon (BrC). BrC is mostly composed of anthropogenic or biogenic secondary organic aerosols and aerosols from biomass burning (Spracklen et al., 2011). The contribution of BrC to radiative forcing still poses one of the largest uncertainties in our understanding of climate forcing. BrC may be the dominant light absorber downwind of urban and industrialized areas and in biomass burning plumes (Feng et al., 2013). In the atmosphere, it is found either internally or externally mixed with inorganic particles and black carbon (Cappa et al., 2012). If internally mixed with black carbon, BrC may cause absorption enhancement through the lensing effect (Bond et al., 2006). A better quantification of the spectral dependency of the optical properties of BrC aerosols is required in order to reduce the uncertainty surrounding the ERFari.
The interaction of atmospheric fine particulate matter with sunlight was shown decades ago to resemble a power law dependence on wavelength (Ångström, 1929, 1930, 1961, 1964). The Ångström exponent has since been widely used to describe this wavelength dependency and to characterize aerosol size (Valenzuela et al., 2015), composition (Russell et al., 2010), and source (Garg et al., 2016). This monotonic increase in extinction (scattering and absorption) with increasing particle size or with decreasing wavelength is observed for particles with radii smaller than the incident wavelength (or size parameter < 2π). For larger particles (or shorter wavelengths) the ripple and interference structures of the Mie curves significantly reduce the monotonic increase pattern (Bohren and Huffman, 2007a). This also depends on the complex RI: increasing the real part limits the power law behavior to smaller particles, while increasing the imaginary part dampens the ripple and interference structures, allowing power law behavior for larger particles. Particulate matter with molecular absorption bands in the actinic flux spectrum would also deviate from the power law spectral dependency. In an ambient dust-free atmosphere, as well as in many laboratory and chamber experiments, particles are rarely larger than several hundreds of nanometers, making the power law wavelength dependency assumption a reasonable one (Kirchstetter et al., 2004; Hoffer et al., 2006; Sun et al., 2007; Chen and Bond, 2010; Washenfelder et al., 2015).
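As a concrete illustration of this power law assumption (a minimal sketch; the function name and argument order are ours, not the paper's), the Ångström exponent linking an optical coefficient measured at two wavelengths is:

```python
import numpy as np

def angstrom_exponent(alpha_1, alpha_2, wavelength_1, wavelength_2):
    """Angstrom exponent AE from alpha ~ wavelength**(-AE), given the same
    optical coefficient (extinction, scattering, or absorption) measured at
    two wavelengths (use consistent units for both pairs of values)."""
    return -np.log(alpha_1 / alpha_2) / np.log(wavelength_1 / wavelength_2)
```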
In this study, we present a new approach for retrieving a broadband UV–vis spectrum (300 to 650 nm) of the total aerosol population, including its α sca, α abs, and α ext, its SSA, and, together with size distribution measurements, its effective wavelength-dependent complex RI. The new retrieval approach utilizes a combination of several different instruments: CES, PAS, and a nephelometer. We validate the method with computer simulations and laboratory measurements and show how it can be applied for continuous spectral- and time-dependent monitoring of an ambient polydisperse aerosol population. To the best of our knowledge, we also report the first implementation of broadband CES for aerosol extinction measurements in the 315 to 345 nm UV range (a range in which significant light absorption may occur) for laboratory and ambient aerosols.
Approach
To obtain the optical properties of light-absorbing ambient aerosols in the 300 to 650 nm wavelength range, we used measurements of α ext, α sca, and α abs. The extinction coefficients were obtained with a broadband cavity-enhanced spectrometer (BBCES) measuring at two distinct wavelength ranges: 315 to 345 nm and 390 to 420 nm. Extinction and absorption coefficients at λ = 404 nm were measured using a homebuilt multi-pass PAS (Lack et al., 2012) coupled to a cavity ring-down spectrometer (CRD-S) (Bluvshtein et al., 2012; Flores et al., 2012) (PA-CRD-S). The scattering coefficients at 457, 525, and 637 nm were measured using an integrating nephelometer (IN; model IN100, AirPhoton, USA). The α ext, α sca, and α abs measurements were used together with the aerosol size distribution and the aerosol number concentration in a novel procedure to obtain the broadband optical coefficients, the SSA, and the broadband effective complex RI in the 300 to 650 nm wavelength range. The procedure is described in detail in Sect. 2.3. The term effective complex RI indicates that the RI is derived for the entire aerosol size distribution: it is the complex RI from which, for the corresponding size distribution, we derive the optical coefficients that agree most closely with the measured or input values.
Broadband cavity-enhanced spectroscopy
We use a dual-channel BBCES to measure the aerosol optical extinction at 315 to 345 nm and 390 to 420 nm (at a 0.5 nm resolution). The 315 to 345 nm cavity uses a new, laser-driven Xe lamp, and its design is similar to that described by Washenfelder et al. (2016). The 390 to 420 nm cavity is as described in Flores et al. (2014). Only a brief description and the main differences are highlighted here.
For the 390 to 420 nm cavity (BBCES-407), we use a light-emitting diode (LED) centered at 407.1 nm with a measured optical power output of 0.450 W (M405D2, Thorlabs, Newton, NJ, USA). The LED is temperature-controlled and powered by a constant-current power supply to achieve a stable optical power output. The output from the LED is collimated using a single F/1.2 fused silica lens and optically filtered using a bandpass filter (FB400-40, Thorlabs, Newton, NJ, USA) before entering the optical cavity formed by two mirrors, 2.54 cm in diameter and 1 m radius of curvature (Advanced Thin Films, Boulder, CO, USA). For the 315 to 345 nm cavity (BBCES-330; see Washenfelder et al. (2016) for a detailed description), we use a broadband light source (EQ-99FC LDLS; Energetiq, Woburn, MA, USA) consisting of a continuous-wave diode laser at 974 nm that pumps a Xe plasma (Islam et al., 2013). The light source is air-cooled and temperature-controlled using water circulation through an attached aluminum plate to prevent intensity drifts. This light exits through a 600 µm optical fiber and is collimated and coupled into the optical cavity using an off-axis parabolic mirror with a 0.36 numerical aperture (RC04SMA-F01; Thorlabs, Newton, NJ, USA). To remove stray light, the light passes through two colored glass filters (Schott Glass WG320 and UG11) before entering the cavity. This cavity also consists of two mirrors, 2.54 cm in diameter and 1 m radius of curvature (Layertec GmbH, Mellingen, Germany).
The typical measured mirror reflectivities for the BBCES-330 and BBCES-407 cavities are 0.99940 and 0.99994 at 330 and 420 nm, respectively.After exiting each cavity, the light is directly collected using a 0.1 cm F/2 fiber collimator (74-UV, Ocean Optics, Dunedin, FL, USA) into one lead of a two-way 100 µm core HOH-UV-VIS fiber (SR-OPT-8015, Andor Technology, Belfast, UK) that is linearly aligned along the input slit of the grating spectrometer.
The spectra are acquired using a 163 mm focal length Czerny-Turner spectrometer (Shamrock SR-163, Andor Technology, Belfast, UK) with a charge-coupled device (CCD) detector (DU920P-BU, Andor Technology, Belfast, UK) maintained at −50 °C. The spectrometer is temperature-controlled at 32.0 ± 0.1 °C. Dark spectra are acquired with the input shutter (SR1-SHT-9003, Andor Technology, Belfast, UK) closed prior to each set of spectra. The wavelength is calibrated using a Hg/Ar pen-ray lamp.
The α ext of the aerosol is determined as the difference in light intensity between a filled cavity and a particle-free cavity, taking into account the mirror reflectivity and the Rayleigh scattering of the carrier gas (Washenfelder et al., 2013).
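As a sketch of this retrieval step (our own minimal implementation, not code from the paper; it assumes the filled- and particle-free-cavity spectra, mirror reflectivity, and Rayleigh extinction of the carrier gas are available on a common wavelength grid, and follows the commonly used BBCES relation, cf. Washenfelder et al., 2013):

```python
import numpy as np

def bbces_extinction(I_aerosol, I_zero, mirror_reflectivity, cavity_length, alpha_rayleigh):
    """Aerosol extinction per wavelength bin:
    alpha_ext = ((1 - R) / d + alpha_Ray) * (I_0 - I) / I,
    where I_0 / I are the particle-free / filled cavity intensities,
    R the mirror reflectivity, d the cavity (sample) length, and
    alpha_Ray the Rayleigh extinction of the carrier gas (consistent units)."""
    I_aerosol = np.asarray(I_aerosol, dtype=float)
    I_zero = np.asarray(I_zero, dtype=float)
    return ((1.0 - mirror_reflectivity) / cavity_length + alpha_rayleigh) * \
           (I_zero - I_aerosol) / I_aerosol
```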
Photoacoustic spectrometer coupled to a cavity ring-down spectrometer
A single-wavelength PA-CRD-S (Fig. 1) is used to directly measure both α ext and α abs at λ = 404 nm. The PA-CRD-S system described in this section (Fig. 1) is composed of a 110 mW 404 nm diode laser (iPulse, Toptica Photonics, Munich, Germany) modulated at the measured PAS resonance frequency at a 50 % duty cycle. The laser beam (blue arrow in Fig. 1) is split into two separate optical paths (directed to the CRD-S and the PAS, respectively) using a variable beam splitter composed of a quarter waveplate (1/4 λ) and a polarizing beam splitter (PBS). With the current setup, turning the 1/4 λ between 0° and 90° varies the intensity ratio between the two optical paths from 0:1 to 1:1 (CRD-S to PAS, respectively). The beam directed to the PAS is turned and aligned into the PAS cell through a set of two plano-convex lenses (focal lengths of 30 and 50 mm) used as a telescope in order to collimate the beam to a diameter of about 1.5 mm. The beam directed to the CRD-S passes through another 1/4 λ plate, which together with the PBS serves as a variable attenuator protecting the laser head from the beam reflected back by the highly reflective mirror of the CRD-S. This back-reflected beam is transmitted through the PBS into a photodiode (PD) (as shown by the dashed arrow in Fig. 1), with the PD serving as an external trigger source for the CRD-S decay measurement. The forward beam is then turned and aligned into the CRD-S cavity by a set of turning mirrors. While the sensitivity of the PAS is related to the power intensity of the laser, the CRD-S system requires only the minimal laser power needed by the PD. This allows us to divert approximately 78 % of the laser power (about 86 mW) to the PAS cell and thus optimize its sensitivity.
a. Photoacoustic spectrometer
In a PAS, modulated laser light is absorbed by a sample of particles or gas, generating a modulated acoustic wave whose intensity is proportional to the energy absorbed by the sample.This acoustic wave, which is detected by a sensitive microphone, has a characteristic radial and longitudinal resonance when the light source is modulated at the cavity resonance frequency.For a more detailed description of the PAS method for aerosol light absorption measurement see Arnott et al. (1999) and Nagele and Sigrist (2000).
We use a multi-pass astigmatic PAS cell (see Lack et al., 2012, for a detailed description).Briefly, the PAS is composed of dual half-wavelength resonators (11 cm long, 1.9 cm diameter) capped on either end with quarter-wavelength acoustic notches.The total sample cell volume is 185 cm 3 .While both resonators are open to sample flow, only one is exposed to the modulated laser light; the other is used for noise cancellation.Microphones are placed at the antinode of the sound wave in the center of each resonator and the speaker is placed at the background resonator.The resonance frequency specific to this system is found by producing white noise using a speaker in the reference resonator.Each segment is sampled by the microphones at a 100 kHz rate and analyzed by a fast Fourier transform algorithm.
The astigmatic optical configuration consists of two 5.08 cm diameter, high-reflectivity mirrors (ARW Optical, Wilmington, NC, USA; dielectric coating R > 99.5 %) spaced 35 cm apart and mounted on adjustable mirror mounts. The laser side mirror has a cylindrical radius of curvature of 43 cm and a 2 mm hole drilled in the center. The back mirror has a cylindrical radius of curvature of 47 cm and is rotated 90° to the radius of curvature of the laser side mirror. Astigmatic alignment is achieved by aligning the laser through the 2 mm hole drilled in the center of the first mirror and onto an off-center target on the second mirror. Each following reflection should also be directed to an off-center target on the other mirror. The PAS cell is mounted within the path of the laser multi-pass. The laser light passes through the PAS cell via two 1 mm thick windows (CVI Laser, Albuquerque, NM, USA), each with a high-transmissivity (T > 99.5 %) anti-reflective coating. The laser power is continuously monitored using a photodiode behind the back side mirror and used to cancel variations in the acoustic signal related to laser power fluctuations.
The PAS calibration procedure is described in detail in a companion paper.In short, the complex RI of dry nigrosine films is measured at 404 nm using spectroscopic ellipsometry with the interference enhancement technique (Hilfiker et al., 2008).Nigrosine dye is dissolved into an aqueous solution and nebulized using a constant output atomizer (model 3076, TSI, 35 psi, flow of 2.5 standard L min −1 ), with dry particle-free nitrogen, generating a polydispersed distribution of droplets.The aerosol population is subsequently dried (relative humidity (RH) < 5 %) using two silica gel diffusion dryers and size-selected with an electrostatic classifier (differential mobility analyzer (DMA) model 3085, TSI), operating with a particle-free, dry nitrogen sheath flow of 3 to 15 standard L min −1 .A 10 : 1 ratio of sheath flow to sample flow is maintained.An impactor is used on the DMA inlet to reduce the contribution from multiply charged particles.Nigrosine particles of several sizes (200 to 400 nm mobility diameter) and number concentrations (counted by a condensation particle counter; CPC; model 3775, TSI) were flowed through the PAS cell and its signal was compared to the aerosol α abs calculated using the complex RI retrieved from the dry film measurements and a Mie algorithm.
b. Cavity ring-down spectrometer
The CRD-S method has been extensively described in previous publications (Sappey et al., 1998; Vander Wal and Ticich, 1999; Smith and Atkinson, 2001; Bulatov et al., 2002; Thompson et al., 2002; Strawa et al., 2003; Pettersson et al., 2004; Riziq et al., 2007; Bluvshtein et al., 2012). Briefly, a single-wavelength laser light source is modulated and coupled into the high-finesse optical cavity. The cavity transmission is coupled to an optical fiber and focused onto a photomultiplier tube (PMT), which measures the decay of the light intensity due to aerosol absorption and scattering (extinction). Measuring the light decay time constant, with an empty cavity and with a cavity filled with aerosols, allows the direct measurement of α ext.
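For illustration only (not the authors' code), the standard CRD-S relation converting ring-down times into extinction can be written as:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m s^-1

def crd_extinction(tau_aerosol, tau_empty, length_ratio=1.0):
    """Extinction coefficient (m^-1) from the ring-down time constants (s)
    of the filled and particle-free cavity:
    alpha_ext = (R_L / c) * (1/tau - 1/tau_0),
    where R_L corrects for the fraction of the cavity occupied by the sample."""
    return (length_ratio / SPEED_OF_LIGHT) * (1.0 / tau_aerosol - 1.0 / tau_empty)
```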
The two optical cavities of the BBCES and the optical cavity of the CRD-S were assembled together in a rigid optical cage to minimize misalignment and instabilities. Aerosol flow was introduced to the center of each cavity and pulled from its sides (Fig. 1, downward arrows into the CRD-S and BBCES). This setup eliminates a significant source of error in determining the extinction coefficient, namely, uncertainty in the length of the aerosol sample within the cavity length (Miles et al., 2011). The optical cage and the PAS cell are mounted on a vibration-isolated breadboard in a temperature-controlled environment (23 ± 0.25 °C).
Integrating nephelometer
The AirPhoton IN100 IN is a component of the global Surface PARTiculate Aerosol Network (SPARTAN), whose purpose is to evaluate and enhance satellite-based estimates of ground-level particulate matter. The nephelometer is a continuous-sampling, LED-based device measuring total α sca at three optical channels (red, green, and blue) centered on 637, 525, and 457 nm over an angular range of 7 to 170° (Snider et al., 2015).
Size distribution and number concentration
Aerosol number concentration is measured with a condensation particle counter (CPC; model 3775, TSI) and the particle size distributions are obtained by a scanning mobility particle sizer (SMPS; 3085 DMA and 3775 CPC, TSI).
Retrieval methodology
We have developed a two-step method to derive continuous values of α ext , α sca , α abs , and SSA in the 300 to 650 nm wavelength range.Using these values of α ext , α sca , and α abs together with the measured aerosol size distribution and the number concentration, we also retrieve the effective complex RI.A flow chart of the retrieval methodology is shown in Fig. 2.
Broadband extinction, scattering, absorption, and SSA retrieval methodology
First, the extinction data from the BBCES measurements (315 to 345 nm, 390 to 420 nm) are fitted with a power law function. The fitting procedure is weighted by the uncertainties of the measured BBCES extinction coefficients. This fit is used to extrapolate α ext and its uncertainty to the nephelometer wavelengths. Next, data for α abs at 404 nm are used together with an initial guess of a power law coefficient to calculate α abs at the three nephelometer wavelengths (637, 525, and 457 nm), also considering the uncertainty of the measured α abs at 404 nm. These three α abs values, together with the three α ext values from the power law fit of the extinction data, are used to calculate α sca_calc at the nephelometer wavelengths. Uncertainties of α sca_calc are propagated from the uncertainties of the extrapolated α ext and α abs.
Then, the minimum square difference (χ²) between α sca_calc and the measured α sca_meas is calculated using the uncertainties of both parameters, and the power law coefficient used to extrapolate the α abs data is varied iteratively until the minimum difference between α sca_calc and α sca_meas is found (Fig. 2a). This procedure is repeated with an exponential function to extrapolate α ext and with an exponential coefficient to extrapolate α abs, and then two additional times to cover all four possible combinations of exponential and power law representations of α ext and α abs. At each repetition, the minimum difference between α sca_calc and α sca_meas is found iteratively, and the global minimum difference from the four combinations is selected. This information is used to calculate α ext, α abs, and their associated uncertainties in the wavelength range of 300 to 650 nm (Fig. 2a) using the best-fitted power law or exponential curves, and α sca is then calculated by subtraction. Finally, the size-weighted SSA is calculated as SSA = α sca / α ext.
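A minimal sketch of this first retrieval step (our simplification: only the power-law branch is shown, the uncertainty of the extrapolated extinction is ignored, and all names are illustrative rather than taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar

NEPH_WAVELENGTHS = np.array([457.0, 525.0, 637.0])  # nephelometer channels, nm

def power_law(wavelength, amplitude, exponent):
    return amplitude * wavelength ** (-exponent)

def fit_absorption_exponent(wl_bbces, ext_bbces, ext_unc, abs_404, sca_meas, sca_unc):
    """Fit the BBCES extinction with a power law, extrapolate it to the
    nephelometer wavelengths, then vary the absorption exponent so that
    (extinction - absorption) best matches the measured scattering."""
    (a_ext, eae), _ = curve_fit(power_law, wl_bbces, ext_bbces, sigma=ext_unc)
    ext_neph = power_law(NEPH_WAVELENGTHS, a_ext, eae)

    def chi2(aae):
        abs_neph = abs_404 * (NEPH_WAVELENGTHS / 404.0) ** (-aae)
        sca_calc = ext_neph - abs_neph
        return np.sum(((sca_calc - sca_meas) / sca_unc) ** 2)

    best = minimize_scalar(chi2, bounds=(0.0, 8.0), method="bounded")
    return a_ext, eae, best.x  # extinction fit parameters and best-fit absorption exponent
```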
Methodology for retrieving the effective complex RI of the total particle size distribution
Using the retrieved α ext and α sca and their associated uncertainties described in Sect. 2.3.1, together with the measured size distribution and the aerosol number concentration, the effective complex RI of the total particle size distribution is retrieved at each individual wavelength (300 to 650 nm). The effective complex RI in the context of this work is one that, for a given size distribution and number concentration, would satisfy a minimum difference between the theoretical values of α ext and α sca (based on a Mie theory calculation) and the extrapolated or measured α ext and α sca. A specialized Mie theory algorithm was written in order to retrieve the effective complex RI for the total size distribution of the particles. Briefly, an array of initial guesses is used to initiate an iterative converging search for a theoretical complex RI for which a Mie calculation (Bohren and Huffman, 2007b) with a given size distribution, number concentration, and wavelength produces a best-fitted pair of theoretical α ext and α sca (see Fig. 2b). The best fit is determined by minimizing the χ² function
$$\chi^2 = \left(\frac{\alpha_{\mathrm{ext\_meas}} - N\,\sigma_{\mathrm{ext\_calc}}}{\Delta\alpha_{\mathrm{ext}}}\right)^2 + \left(\frac{\alpha_{\mathrm{sca\_meas}} - N\,\sigma_{\mathrm{sca\_calc}}}{\Delta\alpha_{\mathrm{sca}}}\right)^2,$$
where α ext_meas and α sca_meas are the retrieved or measured α ext and α sca, N is the particle number concentration, σ ext_calc and σ sca_calc are the theoretical extinction and scattering cross sections weighted by the size distribution, and Δ denotes the uncertainty of the associated parameter.
To estimate the retrieval uncertainties in n and k (Δn and Δk), the algorithm returns the values of n and k that satisfy χ²₀ ≤ χ² ≤ χ²₀ + 1, where the value 1 denotes a 1σ deviation from the minimum χ² (χ²₀) (in the case of 1 degree of freedom) (Press et al., 1992).
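A sketch of the χ² minimisation at a single wavelength (our own illustration, not the authors' specialized algorithm; it assumes some Mie routine `mie_efficiencies(m, x)` returning the extinction and scattering efficiencies is available, e.g., from a Mie library, and that `number_weights` are the normalized fractions of particles per size bin):

```python
import numpy as np
from scipy.optimize import minimize

def chi2_effective_ri(params, wavelength, diameters, number_weights,
                      n_total, ext_meas, sca_meas, d_ext, d_sca, mie_efficiencies):
    """chi^2 comparing measured extinction/scattering with Mie cross
    sections weighted over the size distribution, for a trial RI m = n + ik."""
    n, k = params
    m = complex(n, k)
    size_params = np.pi * diameters / wavelength
    geom = np.pi * (diameters / 2.0) ** 2                 # geometric cross sections
    q_ext, q_sca = np.array([mie_efficiencies(m, x) for x in size_params]).T
    sigma_ext = np.sum(number_weights * q_ext * geom)     # distribution-weighted
    sigma_sca = np.sum(number_weights * q_sca * geom)
    return ((ext_meas - n_total * sigma_ext) / d_ext) ** 2 + \
           ((sca_meas - n_total * sigma_sca) / d_sca) ** 2

# One retrieval from a single initial guess (an array of guesses would be looped over):
# result = minimize(chi2_effective_ri, x0=(1.55, 0.02), args=(...), method="Nelder-Mead")
```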
Validation of the retrieval methodology
In order to test this new approach and evaluate its merits, we performed three different tests: (1) a computer simulation of time-dependent extinction, scattering, and absorption measurements at the instruments' wavelengths; (2) laboratory measurements of two BrC proxy materials; and (3) a 24 h ambient aerosol measurement.
Computer simulation
For the computer simulation, 100 different synthetic data sets of complex RIs in the 300 to 650 nm range were composed at a resolution of 1 nm. The real part (n(λ)) ranged from 1.692 at 300 nm to 1.586 at 650 nm, and the imaginary part (k(λ)) ranged from 8.156 × 10⁻² at 300 nm to 1.781 × 10⁻³ at 650 nm. Both n(λ) and k(λ) were scaled by two incoherent sine waves to simulate temporal variability. The sine wave amplitudes ranged from 1 to 1.05 for n(λ) and from 1 to 1.1 for k(λ). A log-normal size distribution with a mode at 80 nm, a geometric standard deviation of 1.33, and a number concentration of 10⁴ cm⁻³ was assumed to calculate α ext, α sca, and α abs at the instrumental wavelengths. An additional error was assigned to each calculated α ext, α sca, and α abs at the instrumental wavelengths, drawn randomly from a normal distribution with ±2 % (1 standard deviation). This value of error represents typical uncertainty values associated with our instrumental measurements. The α ext, α sca, α abs, SSA, and effective complex RI were then retrieved as described above for all the synthetic instrumental data sets. Figure 3 shows a single modeled example of the procedure. The retrieved α ext, α sca, α abs, and SSA are shown in Fig. 3a and the broadband complex RI in Fig. 3b.
Laboratory measurements
For the laboratory measurements, two BrC proxy materials were used to test the retrieval method: Suwannee River fulvic acid (SRFA; 1S101F, International Humic Substances Society (IHSS), Saint Paul, MN, USA) and Pahokee Peat fulvic acid (PPFA; IHSS).SRFA and PPFA (aquatic/terrestrialbased fulvic acids standards) have been used as proxy materials for atmospheric BrC (Dinar et al., 2008) since macromolecular organic substances in aerosols began to be analyzed (Hoffer et al., 2006;Gaffney et al., 2015) and are recognized as similar to humic and fulvic acids.Although Graber and Rudich (2006) highlighted significant chemical and physical differences between atmospheric humic-like substances and terrestrial or aquatic humic substances, for the purpose of simulating UV-vis light absorption by atmospheric BrC, terrestrial/aquatic-based humic substances standards remain useful.
Each substance was separately dissolved into an aqueous solution and nebulized as described in Sect.2.2.2a.A complementary N 2 flow of 1.3 standard L min −1 was added and mixed with the sample flow, which was then introduced to the IN and subsequently split equally to the SMPS, CPC, PAS, and a three-cavity optical cage that contained the CRD-S and BBCES (see Fig. S2).There was no additional dilution of the sample flow once it was introduced to the IN to avoid differences in the sampled particle concentration between the different instruments.Aerosol temperature, pressure, and RH were measured continuously.
Ambient aerosol measurement
To demonstrate the application of the retrieval methods to field applications, ambient aerosols were sampled during a 24 h period.Using conductive tubing, ambient air was pulled from the roof of the Department of Earth and Planetary Sciences building at the Weizmann Institute of Science through a PM 10 sampling inlet.Sampled air was dried (RH < 17 %) with diffusion dryers and pulled isokinetically through the PA-CRD-S and BBCES.A total flow of 16.7 standard L min −1 was pulled through the PM 10 inlet as specified by the manufacturer.The CRD-S and the two BBCES systems sampled at a 0.2 standard L min −1 flow rate and the PAS at a 0.6 standard L min −1 flow rate (Fig. S2).The flow scheme was set up in a manner that ensured isokinetic splitting at every tube junction.Sampling was undertaken continuously for a 24 h period from 18:30 LT (local time) on 21 June 2015 to 18:30 LT of the following day.
While each instrument has a different sampling rate, in order to simplify data analysis, the data outputs of the PAS-CRD-S, BBCES, IN, and CPC were set to represent 2 min averages.
Because the IN continually samples untreated ambient aerosols as part of the SPARTAN network, it remained on the roof.For this reason, IN measurements had to be corrected for particle hygroscopic growth.Snider et al. (2015) suggest correcting for increased scattering due to the growth of humidified particles using the relative change in particle volume, but this does not take into account the decrease in effective RI of the particles due to water uptake.See the Supplement for a detail description of the correction we used.
Computer simulations
The results of the computer simulations are summarized in Fig. 4. Box plots of the absolute value of the percent errors for the retrieved variables from the 100 different synthetic data sets are shown. Overall, the retrieved values are in very good agreement with the synthetic data. Results show that expected errors in the size-weighted SSA(λ, t), α ext(λ, t), and α sca(λ, t) are less than 10 % for the full spectral range and less than 5 % in the 400 to 500 nm range. Expected errors in the real part of the RI are less than 1 % throughout the entire spectrum. Relative errors in the imaginary part of the RI and in α abs(λ, t) are less than 60 % for the 300 to 400 nm range and are expected to grow with increasing wavelength as these parameters go to 0. For example, under the conditions of this simulation at 400 nm wavelength (namely, the complex RI, particle size distribution, and number concentration), a relative error of 60 % in the retrieved values translates into absolute errors of 1 to 3 Mm⁻¹ on α abs and of 0.01 to 0.015 on k, respectively. An absolute error of 1 to 3 Mm⁻¹ on α abs at 400 nm is between 1 and 10 % of the measured absorption in polluted urban environments (Lack et al., 2012; Lan et al., 2013; Yuan et al., 2016).
Laboratory measurements
Figure 5 shows the measured optical coefficients of PPFA particles as well as the retrieved broadband coefficients (Fig. 5a). The retrieved complex RI is also shown (Fig. 5b). The RI values obtained by the established method of size-selecting aerosols (Lack et al., 2006; Riziq et al., 2007; Trainic et al., 2011; Bluvshtein et al., 2012; Flores et al., 2012, 2014; Lavi et al., 2013; Washenfelder et al., 2013) using the BBCES-315 and CRD-S are overlaid on Fig. 5b. The imaginary part of the complex RI calculated from UV–vis absorption measurements of the diluted aqueous solution (Sun et al., 2007) is shown as a shaded area. The upper and lower limits of this area represent the range of assumed material density (1.1 to 1.3 g cm⁻³) used in this calculation. There is very good agreement between all the retrieved values, with a slight difference between them only with respect to the real part of the RI in the 315 to 345 nm range.
Similarly to the PPFA measurements, Fig. 6 shows the measured and retrieved values for α ext , α sca , α abs , and SSA obtained using SRFA as the BrC proxy material.Overlaid on Fig. 6b are the published complex RI values retrieved from laboratory-generated SRFA.These retrievals were obtained from extinction measurements on size-selected particles.The accuracy of the new effective RI retrieval process is improved by incorporating direct absorption and scattering measurements as suggested by Zarzana et al. (2014).
Ambient aerosol measurement
We measured the optical properties of ambient aerosol during a 24 h period to demonstrate the application of the new retrieval methods. To check the reliability of the retrieval method, we first compared the SSA values at 404 nm derived from the direct α ext and α abs measurements (using the PA-CRD-S) with the retrieved SSA (calculated from the retrieved α ext and α sca, Fig. 7), accounting for the uncertainties in both variables. Figure 7 shows an excellent correlation (slope = 1.031 ± 0.291; R² = 0.977) between the measured and retrieved SSA throughout the measurement period. Figures S3 and S4 in the Supplement show the good agreement between the retrieved and measured scattering (and extinction) coefficients at the nephelometer wavelengths (and at the center wavelengths of the two BBCES cavities). The good agreement between the measured and retrieved SSA is an indication that the broadband extinction retrieval procedure has little to no error at the wavelength at which the aerosol absorption is constrained. The full spectral retrievals of α ext(λ, t), α sca(λ, t), α abs(λ, t), and SSA(λ, t) over time are shown in Fig. 8, and the retrieved absorption and extinction Angstrom exponents (AAE and EAE) are shown in Fig. 9. The retrieved effective complex RI is shown in Fig. 10. Discontinuities in the sampling data are due to routinely performed reflectivity and zero-air measurements required to avoid data drifts. Figure S5 shows the evolution of the total number concentration and the SMPS size distributions, normalized to the concentration of the mode diameter.
Between 06:30 and 08:00, during the morning rush hour, there is an increase in α abs and a decrease in the size-weighted SSA that is not apparent in the extinction and scattering coefficients (Fig. 8). This could be related to increased emissions of ultrafine light-absorbing combustion particles from traffic. This is supported by the decrease in the aerosol size shown in Fig. S5 and by the fact that SSA is directly related to particle size. The apparent effect of transportation during the morning rush hour is also evident in a decrease of the AAE in Fig. 9: BC from car emissions can increase the absorption throughout the spectrum but decrease its spectral dependence. A notable increase in the extinction and scattering coefficients seen between 10:00 and 11:00 (Fig. 8) corresponds to the increase in particle concentration seen in Fig. S5 and is probably due to relatively large particles, as the SSA values are relatively high.
Considerable variations in particle number concentration and size distribution occur during the daytime (Fig. S5) and are probably due to transportation on nearby roads. The SMPS size scan duration was set to 5 min, while large concentration variations occurred at shorter timescales. Unlike the α ext(λ, t), α sca(λ, t), α abs(λ, t), and SSA(λ, t) data, the retrieval of the effective complex RI strongly depends on an accurate representation of the size distribution and aerosol particle number concentration. For this reason, Fig. 10 shows effective RI retrieval results between midnight and 08:00, when variations in the number concentration and size distribution data were not as frequent. The figure shows that the ambient urban atmosphere is dominated by slightly absorbing aerosol (k about 0.03 at the short wavelengths), with a few peaks of more absorbing aerosol with higher k values (up to 0.04 at the shorter wavelengths). Figure 10 also demonstrates higher k values at the long wavelengths with lower spectral dependency (lower AAE) during the morning hours, probably due to the contribution of BC. In field applications, a large volume with a long residence time of the sampled air can be added to the system to reduce variations in aerosol concentration.
In a recent review, Moise et al. (2015) compiled the optical properties data of secondary organic aerosols (SOAs) and BrC from laboratory and chamber experiments and from ambient measurements reported in various studies. The compiled data were reported based on formation, aging pathways, and chemical composition. Various measurements of laboratory- and chamber-generated anthropogenic/biogenic SOAs at 405 nm wavelength produced complex RIs with 1.45 ≤ n ≤ 1.7 and 0 ≤ k ≤ 0.04. The retrieval presented in Fig. 10 mostly falls within the lower limit of this wide range. As Moise et al. (2015) point out, laboratory- and chamber-generated SOAs are mostly not as oxidized as reported ambient aerosols and, although some studies showed a possible inverse relation between the real part of the complex RI and the O/C ratio (Nakayama et al., 2012; Lambe et al., 2013), others reported the opposite (Cappa et al., 2011; Nakayama et al., 2013). The spectrally dependent data compiled by Moise et al. (2015) clearly indicate that measurements of the broadband optical properties of ambient aerosols are scarce. This new retrieval approach is expected to contribute to future understanding of the optical properties of atmospheric aerosols.
Conclusions
We have developed a new approach to retrieve the time-dependent, broad-spectrum optical properties of ambient aerosols between 300 and 650 nm. Obtaining these properties over such a broad wavelength span and at a high time resolution can contribute considerably to our understanding of aerosol optical properties and their subsequent contribution to radiative forcing. Our approach is based on fitting and extrapolating instrumental extinction, scattering, and absorption data. In this study, the extinction coefficients were obtained using a homebuilt broadband cavity-enhanced spectrometer measuring at two distinct wavelength ranges: 315 to 345 nm and 390 to 420 nm. Extinction and absorption coefficients at λ = 404 nm were measured using a photoacoustic spectrometer coupled to a cavity ring-down spectrometer. The scattering coefficients at 457, 525, and 637 nm were measured using an IN. Although the basic principles of the presented calculations may be used to represent ambient aerosol optical properties over any spectral range, depending on the available instrumentation, it is important to investigate the weaknesses of the approach at spectral ranges beyond the instrumental wavelengths.
Computer simulations showed that over the selected spectral range (300 to 650 nm) and for a wide, atmospherically relevant range of effective complex refractive indices, the expected errors in the size-weighted, wavelength- and time-dependent single-scattering albedo, extinction, and scattering coefficients and in the real part of the effective complex RI are mostly less than ±10 %. Although the relative errors in the imaginary part of the effective complex RI and in the wavelength- and time-dependent absorption coefficients are expected to grow with increasing wavelength (as these parameters diminish), for a total column radiative transfer calculation the corresponding absolute errors would be negligible. For example, under the conditions of this simulation, at 400 nm, the absolute errors on the retrieved α abs and k are in the range of 1 to 3 Mm⁻¹ and 0.01 to 0.015, respectively.
In previous CRD-S and BBCES studies, the complex RI of aerosols was retrieved from extinction measurements of size-selected aerosols. Our measurements of ambient aerosols and lab-generated brown carbon proxy aerosols demonstrate the effectiveness of the new approach for laboratory, chamber, and ambient studies, where aerosol size selection may not be achievable due to low number concentrations or a lack of sufficiently large particles. Retrieving the effective complex RI of the total distribution is significantly faster than size-selective measurements, which are often used in order to derive aerosol refractive indices. It also minimizes possible errors arising from multiply charged particles (Miles et al., 2011) and from partial representation of the total size distribution due to limitations in the selectable size range. This study presents a first comparison between these two measurement approaches.
Application of the method to the continuous monitoring of ambient aerosols provides extensive and intensive timeand spectrally dependent aerosol optical properties that may be applied in a variety of studies, such as investigations of the effect of chemical aging and SOA formation mechanisms or of hygroscopicity on the spectral dependency of optical properties.It also emphasizes the sensitivity of the retrieval of the total distribution effective complex RI to fast changes in particle size distribution and concentration.
The Supplement related to this article is available online at doi:10.5194/amt-9-3477-2016-supplement.
Figure 1 .
Figure 1. Schematic of the photo-acoustic spectrometer (PAS) coupled to a cavity ring-down (CRD) spectrometer (PA-CRD-S) and of the broadband cavity-enhanced spectrometer (BBCES), with channels for 315 to 345 and 390 to 420 nm. The optical cavity of the CRD-S and the two optical cavities of the BBCES were assembled together in a rigid optical cage to minimize alignment stability issues. CCD is charge-coupled device; PBS is polarizing beam splitter; PD is photodiode; PMT is photomultiplier tube; TEC is thermoelectric cooler. The small black arrows indicate the entrance of the purge flows, and the thicker black arrows indicate the direction of the aerosol flow.
Figure 2 .
Figure 2. (a) Flow diagram for the new retrieval methodology.(b)Schematic of the method for retrieving the effective complex refractive index using the total particle size distribution.α abs , α ext , and α sca are wavelength-dependent absorption, extinction, and scattering coefficients, respectively; BBCES is broadband cavity-enhanced spectrometer; the subscripts calc and meas indicate a calculated or measured value, respectively; D p is particle diameter; n is the real part of the complex refractive index; k is the imaginary part of the complex refractive index; N is particle number concentration; PAS is photo-acoustic spectrometer; RI is effective complex refractive index; σ ext_calc and σ sca_calc are theoretical extinction and scattering cross sections weighted by the size distribution; χ 2 is minimum square difference; ω is size-weighted single-scattering albedo (SSA).
Figure 3 .
Figure 3. (a) Simulated wavelength-dependent extinction (α ext ; circles), scattering (α sca ; triangles), and absorption (α abs ; square) coefficients from the synthetic complex refractive index (RI) shown in the lower panel (blue lines).The retrieved broadband α ext (black line), α sca (dash-dot line), and α abs (dashed line) and singlescattering albedo (SSA; grey line and right axis) are also shown.(b) The synthetic complex RI (blue line) and the retrieved effective complex RI (inverted triangles), divided into their real and imaginary parts.
Figure 4 .
Figure 4. Box plot of the absolute value percentage errors for the retrieved variables from 100 different synthetic data sets of effective complex refractive indexes.The horizontal line within the box indicates the median, the boundaries of the box indicate the 25th and 75th percentile values, and the whiskers indicate the 5th and 95th percentile values of the results.Real and imaginary parts refer to the real and imaginary components of the refractive index.SSA is size-weighted single-scattering albedo; α abs , α ext , and α sca are wavelength-dependent absorption, extinction, and scattering coefficients, respectively.
Figure 5 .
Figure 5. (a) Measured extinction (circles), scattering (triangles), and absorption (square) coefficients (α ext , α sca , and α abs , respectively) for Pahokee peat fulvic acid (error bars representing measurement standard error are partially smaller than the symbols).The retrieved values for broadband extinction (black line), scattering (dash-dot line), absorption (dashed line), and single-scattering albedo (SSA; grey line) are also shown with shaded areas represent propagated uncertainty.(b) Retrieved broadband complex refractive index for Pahokee peat fulvic acid using (1) the retrieved RI from the data shown in (a) (inverted triangles) and (2) size selection measurements for the broadband cavity-enhanced spectrometer (BBCES-315; grey line) and the cavity ring-down spectrometer (CRD-S) at 404 nm (blue triangles).The imaginary part of the refractive index calculated from UV-vis absorption measurements is indicated by the red shaded area.
Figure 6 .
Figure 6.(a) Measured extinction (circles), scattering (triangles), and absorption (square) coefficients (α ext , α sca , and α abs , respectively) for Suwannee River fulvic acid (error bars representing measurement standard error are smaller than the symbols).The retrieved broadband extinction (black line), scattering (dash-dot line), absorption (dashed line) and single-scattering albedo (SSA; grey line) are also shown with shaded areas represent propagated uncertainty.(b) Retrieved broadband complex refractive index (RI) for Suwannee River fulvic acid using (1) the retrieved RI from the data shown in (a) (inverted triangles) and (2) size selection measurements for the broadband cavity-enhanced spectrometer (BBCES-315; orange line) and the cavity ring-down spectrometer (CRD-S) at 404 nm (blue triangles); and (3) from the published data of Washenfelder et al. (2013) (purple line) and Flores et al. (2014) (green line).The imaginary part of the refractive index calculated from UV-vis absorption measurements is indicated by the red shaded area.
Figure 7 .
Figure 7. Comparison between the retrieved and measured single-scattering albedo (SSA) values at 404 nm. The retrieved SSA is calculated from the retrieved extinction and scattering coefficients (α ext(t) and α sca(t), respectively), while the measured SSA is calculated from the values of α ext(t) and α abs(t) obtained through direct measurement by the single-wavelength photo-acoustic spectrometer coupled to a cavity ring-down aerosol spectrometer (PA-CRD-S).
Figure 8 .
Figure 8. Time series of the retrieved coefficients for extinction (a), scattering (b), absorption (c), and of the single-scattering albedo (SSA) (d) for the 300 to 650 nm wavelength range of dried ambient aerosols.
Figure 9 .
Figure 9. Time series of the retrieved absorption and extinction Angstrom exponents (AAE and EAE) for the 300 to 650 nm wavelength range.
Figure 10 .
Figure 10. Time series (nighttime hours) of the real and imaginary components of the retrieved effective complex refractive index for the 300 to 650 nm wavelength range of dried ambient aerosols. | 10,263.4 | 2016-08-01T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Thematic fit evaluation: an aspect of selectional preferences
In this paper, we discuss the human thematic fit judgement correlation task in the context of real-valued vector space word representations. Thematic fit is the extent to which an argument fulfils the selectional preference of a verb given a role: for example, how well "cake" fulfils the patient role of "cut". In recent work, systems have been evaluated on this task by finding the correlations of their output judgements with human-collected judgement data. This task is a representation-independent way of evaluating models that can be applied whenever a system score can be generated, and it is applicable wherever predicate-argument relations are significant to performance in end-user tasks. Significant progress has been made on this cognitive modeling task, leaving considerable space for future, more comprehensive types of evaluation.
Introduction
In this paper, we discuss a way of evaluating real-valued semantic representations: human thematic fit judgement correlations. This evaluation method permits us to model the relationship between the construction of these semantic representation spaces and the cognitive decision-making process that goes into predicate-argument compositionality in human language users. We focus here on verb-noun compositionality as a special case of thematic fit judgement evaluation.
A verb typically evokes expectations regarding the participants in the event that the verb describes. By generalizing over different verbs, we can create a scheme of thematic roles, which characterize different ways to be a participant. Schemes vary, but most contain agent, patient, instrument, and location (Aarts, 1997). The verb "cut" creates an expectation, among others, for a patient role that is to be fulfilled by something that is cuttable. This role-specific expectation is called the patient selectional preference of "cut". The noun "cake" fulfils the patient selectional preference of "cut", "form" less so. As such, we can see that selectional preferences are likely to be graded.
We define thematic fit to be the extent to which a noun fulfils the selectional preference of a verb given a role. This can be quantified in thematic fit ratings, human judgements that apply to combinations of verb, role, and noun¹. This type of evaluation serves two goals: cognitive modeling and future applications. From a cognitive modeling perspective, thematic fit judgements offer a window into the decision-making process of language users in assigning semantic representations to complex expressions. Psycholinguistic work has shown that these introspective judgements map well to underlying processing notions (Padó et al., 2009; Vandekerckhove et al., 2009).
One of our goals in developing this type of evaluation is to provide another method of testing systems designed for applications in which predicate-argument relations may have a significant effect on performance, especially in user interaction. This particularly applies in tasks where non-local dependencies have semantic relevance, for example in judging the plausibility of a candidate coreferent from elsewhere in the discourse. Such applications include statistical sentence generation in spoken dialog contexts, where systems must make plausible lexical choices in context. This is particularly important as dialog systems grow steadily less task-specific. Indeed, applications that depend on predicting or generating matching predicate-argument pairs in a human-plausible way, such as question answering, summarization, or machine translation, may benefit from this form of thematic fit evaluation.
Both from the cognitive modeling perspective and from the applications perspective, there is still significant work to be done in constructing models, including distributional representations. We thus need to determine whether and how we can find judgements that are a suitable gold standard for evaluating automatic systems. We seek in this paper to shed some light on the aspects of this problem relevant to vector-space word representation and to highlight the evaluation data currently available for this task.
This task differs from other ways of evaluating word representations because it focuses partly on the psychological plausibility of models of predicate-argument function application. Analogy task evaluations, for example, involve comparisons of word representations that are similar in their parts of speech (Mikolov et al., 2013b). Here we are evaluating relations between words that are "counterparts" of one another and that exist overall in complementary distribution to one another. There are other forms of evaluation that attempt to replicate role assignments or predict more plausible role-fillers given observed text data (Van de Cruys, 2014), but this does not directly capture human biases as to plausibility: infrequent predicate-argument combinations can nevertheless have high human ratings. Consequently, we view this task as a useful contribution to the family of evaluations that would test different aspects of general-purpose word representations.
Existing datasets
The first datasets of human judgements were obtained in the context of a larger scientific discussion on human sentence processing. In particular, McRae et al. (1998) proposed incremental evaluation of thematic fit for the arguments in potential parses as a method of parse comparison. Human judgements of thematic fit were needed for incorporation into this model. McRae et al. (1997) solicited thematic fit ratings on a scale from 1 (least common) to 7 (most common) using "How common is it for a {snake, nurse, monster, baby, cat} to frighten someone/something?" (for agents) and "How common is it for a {snake, nurse, monster, baby, cat} to be frightened by someone/something?" (for patients).
A small sample of scores from this dataset is given in Table 1.

Table 1: Sample thematic fit ratings from McRae et al. (1997).
verb    role-filler  agent  patient
accept  friend       6.1    5.8
accept  student      5.9    5.3
accept  teenager     5.5    4.1
accept  neighbor     5.4    4.4
accept  award        1.1    6.6
admire  groupie      6.9    1.9
admire  fan          6.8    1.7
admire  disciple     5.6    4.1
admire  athlete      4.8    6.4
admire  actress      4.6    6.4

Each (role-filler, verb, role) triple received ratings from 37 different participants. The 37 ratings for each triple were averaged to generate a final thematic fit score. The verbs were all transitive, thus allowing an agent rating and a patient rating for each verb-noun pair. As shown, many nouns were chosen such that they fit at least one role very well. This meant that some verb-roles in this dataset have no poorly fitting role-fillers, e.g., patients of "accept" and agents of "admire". This had strong ramifications for the "difficulty" of this dataset for correlation with automatic systems, because extreme differences in human judgements are much easier to model than fine-grained ones. A further dataset, from McRae et al. (1998), has two animate role-fillers for each verb: the first a good agent and a poor patient, and the other a poor agent and a good patient. The ratings were still well-distributed, but these conditions made correlation with automatic systems easier. Ferretti et al. (2001) created a dataset of 248 instrument ratings (F-Inst) and a dataset of 274 location ratings (F-Loc) using questions of the form "How common is it for someone to use each of the following to perform the action of stirring?" (instruments) and "How common is it for someone to skate in each of the following locations?" (locations). 40 participants supplied ratings on a seven-point scale.
Ken McRae, Michael Spivey-Knowlton, Maryellen MacDonald, Mike Tanenhaus, Neal Pearlmutter and Ulrike Padó compiled a master list of thematic fit judgements from Pearlmutter and MacDonald (1992), Trueswell et al. (1994), McRae et al. (1997), a replication of Binder et al. (2001) [Experiment B], and follow-up studies of Binder et al. (2001) [Experiment C]. These studies had slightly different requirements for the kinds of verbs and nouns used and significant overlap in stimuli due to collaboration. This represents the largest dataset to date of agent-patient thematic fit ratings (1,444 single-word verb/noun judgements), referenced herein as MSTNN.
Padó (2007) created a new dataset of 414 agent and patient ratings (P07) to be included in a sentence processing model. The verbs were chosen based on their frequencies in the Penn Treebank and FrameNet. Role-fillers were selected to give a wide distribution of scores within each verb. The final dataset contains fine-grained distinctions from FrameNet, which many systems map to familiar agent and patient roles. Judgements were obtained on a seven point scale using questions of the form "How common is it for an analyst to tell [something]?" (subject) and "How common is it for an analyst to be told?" (object).
Finally, Greenberg et al. (2015a) created a dataset of 720 patient ratings (GDS-all) that were designed to be different from the others in two ways. First, they changed the format of the judgement elicitation question, since they believed that asking how common/typical something is would lead the participants to consider frequency of occurrence rather than semantic plausibility. Instead, they asked participants how much they agreed on a 1-7 scale with statements such as "cream is something that is whipped". This dataset was constructed to vary word frequency and verb polysemy systematically; the experimental subset of the dataset contained frequency-matched monosemous verbs (GDS-mono) and polysemous verbs (GDS-poly). Synonymous pairs of nouns (one frequent and one infrequent) were chosen to fit a frequent sense, an infrequent sense (for polysemous verbs only), or no senses per verb.
Evaluation approaches
The dominant approach in recent work in thematic fit evaluation has been, given a verb/role/noun combination, to use the vector space to construct a prototype filler of the given role for the given verb, and then to compare the given noun to that prototype (Baroni and Lenci, 2010). The prototype fillers are constructed by averaging some number of "typical" (e.g., most common by frequency or by some information statistic) role-fillers for that verb; the verb's vector is not itself directly used in the comparison. Recent work varies instead in the construction of the vector space and in how the space is used to build the prototype.
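As a concrete illustration, the following sketch shows one way the prototype-based evaluation can be implemented; the `vectors` lookup and the `typical_fillers` selection function are placeholders for whatever space and filler-selection scheme a given system uses, and are not taken from any of the cited systems.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def prototype(verb, role, vectors, typical_fillers, n=20):
    """Average the vectors of the top-n typical fillers for (verb, role)."""
    fillers = typical_fillers(verb, role)[:n]          # e.g., top-n fillers by LMI or frequency
    return np.mean([vectors[w] for w in fillers], axis=0)

def thematic_fit_correlation(triples, vectors, typical_fillers):
    """triples: list of (noun, verb, role, human_rating)."""
    model_scores, human_scores = [], []
    for noun, verb, role, rating in triples:
        proto = prototype(verb, role, vectors, typical_fillers)
        model_scores.append(cosine(vectors[noun], proto))   # fit of candidate noun to prototype
        human_scores.append(rating)
    rho, _ = spearmanr(model_scores, human_scores)           # correlation with human judgements
    return rho
```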
The importance of the vector space
A semantic model should recognize that cutting a cake with an improbable item like a sword is still highly plausible, even if cakes and swords rarely appear in the same genres or discourses; that is, it should recognize that swords and knives (more typically used to cut cakes) are both cutting instruments, even if their typical genre contexts are different.
Because of their indirect relationship to probability, real-valued vector spaces have produced the most successful recent high-coverage models for the thematic fit judgement correlation task. Even if cakes and swords may rarely appear in the same discourses, swords and knives sometimes may. A robust vector space allows the representation of unseen indirect associations between these items. In order to understand the progress made on the thematic fit question, we therefore look at a sample of recent attempts at exploring the feature space and the handling of the vector space as a whole.
Comparing recent results
In Table 2, we sample results from recent vector-space modeling efforts in the literature in order to understand the progress made. The table contains:

BL2010: Results from the TypeDM system of Baroni and Lenci (2010). This space is constructed from counts of rule-selected dependency tree snippets taken from a large web crawl corpus, adjusted via local mutual information (LMI), but is otherwise unsupervised. The approach they take generates a vector space with more than 100 million dimensions. The top 20 typical role-fillers by LMI are chosen for prototype construction. Some of the datasets presented were only created and tested later by Sayeed et al. (2015) (*) and Greenberg et al. (2015a) (**).

BDK2014: Tests of word embedding spaces from Baroni et al. (2014), constructed via word2vec (Mikolov et al., 2013a). These are the best systems reported in their paper. The selection of typical role-fillers for constructing the prototype role-filler comes from TypeDM, which is not consulted for the vectors themselves.
GSD2015: The overall best-performing system from Greenberg et al. (2015b), which is TypeDM from BL2010 with a hierarchical clustering algorithm that automatically clusters the typical role-fillers into verb senses relative to the role. For example, "cut" has multiple senses relative to its patient role, in one of which "budget" may be typical, while in another sense "cake" may be typical.

GSD2015: The overall best-performing system from Greenberg et al. (2015a). This is the same TypeDM system with hierarchical clustering as in the previous entry, but applied to a new set of ratings intended to detect the role of verb polysemy in human decision-making about role-fillers.

SDS2015-avg: Sayeed et al. (2015) explore the contribution of semantics-specific features by using a semantic role labeling (SRL) tool to label a corpus similar to that of BL2010 and constructing a similar high-dimensional vector space. In this case, they average the results of their system, SDDM, with TypeDM and find that SRL-derived features make an additional contribution to the correlation with human ratings. Prototypes are constructed using typical role-fillers from the new corpus, weighted, like TypeDM, by LMI.

SDS2015-swap: This is similar to SDS2015-avg, but instead the typical role-fillers of SDDM are used to retrieve the vectors of TypeDM for prototype construction.

Table 2: Spearman's ρ values (×100) for different datasets with results collected from different evaluation attempts. All models evaluated have coverage higher than 95% over all datasets.
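Since several of the systems above select their typical role-fillers by LMI, a simplified sketch of that selection step is given below. It uses a generic pair-based LMI (count times PMI) over (filler, verb-role) co-occurrence counts; the exact TypeDM weighting over dependency links is not reproduced here.

```python
import math
from collections import Counter, defaultdict

def lmi_top_fillers(pair_counts, n=20):
    """pair_counts: Counter over (filler, (verb, role)) pairs observed in a parsed corpus."""
    total = sum(pair_counts.values())
    filler_tot, context_tot = Counter(), Counter()
    for (w, ctx), c in pair_counts.items():
        filler_tot[w] += c
        context_tot[ctx] += c
    scored = defaultdict(list)
    for (w, ctx), c in pair_counts.items():
        expected = filler_tot[w] * context_tot[ctx] / total   # expected count under independence
        lmi = c * math.log(c / expected)                      # LMI = observed count x PMI
        scored[ctx].append((lmi, w))
    # keep the n highest-LMI fillers per (verb, role) context for prototype construction
    return {ctx: [w for _, w in sorted(ws, reverse=True)[:n]] for ctx, ws in scored.items()}
```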
It should be emphasized that each of these papers tested a number of parameters, and some of them (Baroni and Lenci, 2010; Baroni et al., 2014) used vector-space representations over a number of tasks. Baroni et al. (2014) found that trained, general-purpose word embeddings (BDK2014) systematically outperform count-based representations on most of these tasks. However, they also found that the thematic fit correlation task was one of the few for which the same word embedding spaces underperform. We confirm this by observing that every system in Table 2 dramatically outperforms BDK2014.
One hint from this overview as to why trained word embedding spaces underperform on this task is that the best-performing systems involve very large numbers of linguistically interpretable dimensions (features). SDS2015-avg involves the combination of two different systems with high-dimensional spaces, and it demonstrates top performance on the high-frequency agent-patient dataset of Padó (2007) and competitive performance on the remainder of the evaluated datasets. SDS2015-swap, on the other hand, involves the use of one high-dimensional space with the typical role-filler selection of another one, and performs comparatively poorly on all datasets except for instrument roles. Note that the typical role-fillers are themselves chosen by the magnitudes of their (LMI-adjusted) frequency dimensions in the vector space itself, relative to their dependency relationships with the given verb, as per the evaluation procedure of Baroni and Lenci (2010). In other words, not only do many meaningful dimensions seem to matter in comparing the vectors, the selection of vectors is itself tightly dependent on the model's own magnitudes.
What these early results in thematic fit evaluation suggest is that, more so than many other kinds of lexical-semantic tasks, thematic fit modeling is particularly sensitive to linguistic detail and interpretability of the vector space.
Future directions
In the process of proposing this evaluation task, we have presented in this paper an overview of the issues involved in vector-space approaches to human thematic fit judgement correlation. Thematic fit modeling via real-valued vector-space word representations has made recent and significant progress. But in the interest of building evaluations that truly elucidate the cognitive underpinnings of human semantic "decision-making" in a potentially application-relevant way, there are a number of areas in which such evaluations could be strengthened. We present some suggestions here:

Balanced datasets
In order to investigate the apparent relationship between the linguistic interpretability of the vector space dimensions and the correlations with human judgements, we need more evaluation datasets balanced for fine-grained linguistic features. The data collected in Greenberg et al. (2015a) is a step in this direction, as it was used to investigate the relationship between polysemy, frequency, and thematic fit, and so it was balanced between polysemy and frequency. However, a thematic role like location, on which all systems reported here perform poorly, could be similarly investigated by collecting data balanced by, for example, the preposition that typically indicates the location relation ("in the kitchen" vs. "on the bus").
Compositionality
Both the currently available thematic fit judgements and the vector spaces used to evaluate them are not designed around compositionality, as they have very limited flexibility in combining the subspaces defined by typical role-filler prototypes (Lenci, 2011). Language users may have the intuition that cutting a budget and cutting a cake are both highly plausible scenarios. However, if we were to introduce an agent role-filler such as "child", the human ratings may be quite different, as children are not typical budget-cutters. The thematic fit evaluation tasks of the future will have to consider compositionality more systematically, possibly by taking domain and genre into account.
Perceptuomotor knowledge
A crucial question in the use of distributional representations for thematic fit evaluation is the extent to which the distributional hypothesis really applies to predicting predicate-argument relations. Humans presumably have access to world-knowledge that is beyond the mere texts that they have consumed in their lifetimes. While there is evidence from psycholinguistic experimentation that both forms of knowledge are involved in the neural processing of linguistic input (Amsel et al., 2015), the boundary between world-knowledge and distributional knowledge is not at all clear; thematic fit judgement data, however, represents the output of the complete system. An area for future work would be to see whether the distinction between these two types of knowledge (for example, by including image data or explicitly specified logical features) can be incorporated into the evaluation itself. However, the single-rating approach has its own advantages, in that we expect an optimal vector-space (or other) representation will also include the means by which to combine these forms of linguistic knowledge.
Rating consistency
240 items, containing the most frequent verbs from the MSTNN dataset, were deliberately included in the GDS-all dataset in order to evaluate the consistency of judgements between annotators, especially when the elicitation method varied. There was a significant positive correlation between the two sets of ratings (Pearson's r(238), 95% CI [0.68, 0.80], p < 2.2 × 10^-16). The residuals appeared normal with homogeneous variances, and the Spearman's ρ was 0.75. This high correlation provides a possible upper bound on computational estimators of thematic fit. The fact that it is well above the state of the art for any dataset and estimator configuration suggests that there is still substantial room for development for this task. | 4,194.4 | 2016-08-01T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Low-cost FDM 3D-printed modular electrospray/electrospinning setup for biomedical applications
Here, we report on the inexpensive fabrication of an electrospray/electrospinning setup by fused deposition modelling (FDM) 3D printing and provide the files and parameters needed to print this versatile device. Both electrospray and electrospinning technologies are widely used for pharmaceutical, healthcare and bioengineering applications. The setup was designed to be modular, so its parts can be exchanged easily. The design is also safe, ensuring that users are not exposed to the high-voltage parts of the setup. PLA, PVA, and a thermoplastic elastomer filament were used for the 3D printing. The filament cost was $100 USD and the rig was printed in 6 days. An Ultimaker 3 FDM 3D printer was used with dual print heads, and the PVA was used as a water-soluble support structure. The end part of the setup had several gas channels, allowing a uniform gas flow against the direction of the nanoparticles/nanofibers, which accelerates the drying process by increasing the evaporation rate. The setup was tested successfully in both electrospray and electrospinning modes. Both the .sldprt and .stl files are provided for free download.
Introduction
Nanotechnology has emerged as a state-of-the-art tool for biomedical applications and has attracted the attention of the biotechnology, pharmaceutical, and healthcare industries during recent decades [1]. Electrohydrodynamic atomization (EHDA) is a popular technique that produces nano-sized objects by applying a high voltage, with applications in the biomedical field. Both the electrospray ionization deposition and electrospinning techniques are based on EHDA.
The electrospray technique (also called electrospray ionization deposition) is one of the most efficient ways for the preparation of nanospheres and nanoparticles as it is a simple and inexpensive method [1][2][3]. Electrosprayed nanoparticles are often used for pharmaceutical, biological or medicinal applications [4][5][6]. For example, electrospray can be used to fabricate nanoparticles loaded with drugs for nanoparticle drug delivery or loaded with cell growth factors for tissue engineering [4,7,8].
Electrospinning is a widely used method for pharmaceutical, medicinal or biological applications [9][10][11][12] as it can process solutions, melts, or even suspensions into long nano/micro-fibers [13]. It is the only technique for scaling up continuous nanofiber production [14]. Electrospinning is a modern technique in medicine that can fabricate nanostructures which mimic the body's extracellular matrix through their high surface area, providing an excellent scaffold for cell attachment [15]. This makes electrospinning an attractive technique for tissue engineering applications, including vascular graft fabrication [16,17]. It is also widely used in medical diagnosis and drug delivery, as the fibers can immobilize the recognition element or active pharmaceutical ingredient owing to their large surface area and porosity [15,18,19]. Recently, electrospinning has also been used for replica molding and for producing three-dimensional scaffolds [20,21].
Both electrospray and electrospinning methods use analogous technology for the production of nanostructures. The different modes are determined by the properties of the applied solution in the process. A typical laboratory electrospinning setup can be capable of both the electrospray and electrospinning modes. In general, the setup is made of 4 main parts (Fig. 1): (i) a syringe, which is placed inside a syringe pump for continuous solution flow; (ii) a metallic nozzle; (iii) a high voltage power supply (which is connected to the nozzle); and (iv) a collector (which is conductive, to attract the charged nanoparticles/nanofibers, and is placed opposite the high-voltage electrode) [3,22].
Depending on the viscosity and electrical conductivity of the solution, the setup can be used in either electrospray or electrospinning mode. In both cases, the liquid that is ejected from the nozzle forms a specific cone geometry, called the Taylor cone [23]. In the electrospray mode, highly charged droplets are ejected as a jet from the Taylor cone, and upon solvent evaporation, solid nanoparticles can be collected [3]. In the electrospinning mode, by contrast, continuous fibers are emitted from the Taylor cone, and the nanofibers solidify after complete solvent evaporation [24]. Ideally, in both cases drying is complete before the nanoparticles/nanofibers reach the collector. Figure 1 shows the differences and similarities between the two modes.
Even though the experimental setups for the electrospray and electrospinning methods are fairly simple, the price of a commercial setup usually ranges from $17,000 to $300,000 USD [25]. Many researchers all around the world are using unsafe home-built experimental setups, where the users can be exposed to electric shock from the high voltage components. FDM 3D printing offers a low-cost solution to print a setup that offers reliability and reproducibility of results similar to commercial systems.
This paper describes the 3D printing process of a safe, modular electrospray/electrospinning setup. The method for 3D printing this device, which includes engineered air channels for enhanced solvent evaporation, is described in detail. The optimal printing parameters are given, and both the .sldprt and .stl files are provided. The chemical smoothening and assembly of the 3D-printed parts are also described.
Materials
Both the polylactic acid (PLA) and polyvinyl alcohol (PVA) filaments (diameter 2.75 mm) were manufactured by Ultimaker B.V., The Netherlands. A polyester-based thermoplastic elastomer filament (diameter 3 mm) was manufactured by Mitsubishi Kagaku Media Co., Ltd., Japan, and marketed under the name 'Verbatim Primalloy'. All three filaments were purchased from Create Education Limited, UK.
3D printer
An Ultimaker 3 FDM 3D printer was purchased from Ultimaker B.V., The Netherlands for printing the parts of the setup. The printer is equipped with a dual extrusion nozzle system. Thus, it is capable of using a second extruder for water-soluble support structure printing. Print cores with the 0.4 mm nozzles were used for both materials.
CAD design and G-code
The model of the electrospray/electrospinning chamber was designed in SolidWorks. The SolidWorks part files (.sldprt format) can be downloaded from the journal website via the Hypertext Markup Language (HTML) version of the article. The files were exported as .stl files (which can also be downloaded from the website via the HTML version of the article), and Ultimaker's slicer software, Cura (ver. 3.20), was used to generate the G-codes. The print settings can be found in Table 1.
Results and discussion
The electrospray/electrospinning setup was assembled as shown in Fig. 2. The chamber consisted of four main parts (a safety cap, a nozzle holder, a central chamber part, and an end part with gas channels), plus stands to keep the setup in place. Either a stationary rod collector or a rotating drum collector was used to collect the nanoparticles/nanofibers (Fig. 2 shows the rotating drum collector). PVA was used as support material while printing the parts with complex structures, e.g. the safety cap, the larger nozzle holding part, the chamber and the end part. After a part with PVA support was printed, 30°C water was used to dissolve the PVA in a water bath. It took approximately 24 h to fully dissolve the PVA at this temperature. All the larger parts were printed with a sheet of paper attached to the open front of the Ultimaker 3 printer, in order to reduce temperature fluctuations during the 3D printing process. The rig shown in Fig. 2 was printed in 6 days, with the filaments costing $100 USD.
3D printing the nozzle holder and the safety cap
The nozzle holder consisted of two parts. The larger part was 3D-printed using PLA, as it is a widely used, low-cost thermoplastic. However, because PLA is sensitive to solvents, heat and moisture, other thermoplastic materials, such as ABS or PEEK, would be more suitable for long-term use of the setup. While ABS is only 12% more expensive than PLA [26], the price of PEEK filament is over 17 times higher than that of PVA of the same weight [27]. Furthermore, the used PLA material was not able to hold the metal nozzle capillary securely. Therefore, the electrospray/electrospinning nozzle capillary was held in place by a thermoplastic elastomer material, which is marketed under the name 'Verbatim Primalloy'. This material has outstanding heat, oil and abrasion resistance as well as superior mechanical strength [28]. The 3D-printed white rubber disk was attached to the nozzle holder part (blue part in Fig. 3) using six M5 nylon screws. The blunt nozzle pierced the rubber disk in the middle and was held securely. The Teflon tube from the syringe pump and the high voltage cable from the power supply were attached to the nozzle (Fig. 3 Right). The safety cap is a crucial element, as it prevents users from electric shock. This part was printed with two small openings: one central hole for the Teflon tube, and one opening for the high voltage cable. The CAD drawing and a photograph of the part can be seen in Fig. 4. This part was connected to the nozzle holder part via threads.
3D printing the central chamber parts
Two different designs were made for the central chamber parts. One design was for using the setup with a flat stationary collector, while the second design was for using it with a rotating drum collector (Fig. 5). The two parts were based on the same design, but the part for the rotating collector electrode had two additional openings, where the DC motor and the bearing were able to slide, making the working distance of the collector adjustable.
The central chamber was the largest part of the setup, with 180 mm height and 180 mm diameter. It took 48 h to print this part of the setup. Figure 6 shows the 3D printing process of this part, and the part after completion. Plexiglass was cut to match the size of the opening and glued with transparent silicone in place.
Rotating drum and sliding stationary flat collector design
In order to provide good electrical conductivity and chemical resistance against solvents, both collector parts were machined from stainless steel using computer numerical control (CNC). The stationary flat collector consisted of a cylinder part with a diameter of 40 mm (Fig. 7) and a rod. The cylinder part was attached to a 300 mm long rod by threads, and the working distance (distance between the nozzle and the collector surface) was adjusted by sliding the rod part of the collector inside the chamber. The rotating collector drum offers the possibility of collecting aligned fibers, producing a thicker mat than can be obtained with the stationary collector, and increasing the production rate [24,29]. It had a total length of 160.5 mm, with shafts of different diameters on both ends (Fig. 7). The shaft with a 6 mm diameter was connected to a 100 rpm DC motor via a metallic coupler. The stainless steel collector was grounded via the DC motor. The shaft with a 9 mm diameter was inserted into a metallic ball bearing (Fig. 7 Right) to facilitate high-speed rotation without excessive friction. The working distance was adjusted by sliding the bearing along with the DC motor. The wider part of the collector drum was 25 mm in diameter, and it provided a 50 mm flat surface for collecting the nanoparticles/nanofibers.
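As a small worked example (a derived quantity, not a value reported in the text), the linear surface speed of the 25 mm collecting section at the stated 100 rpm can be computed as follows.

```python
import math

diameter_m = 0.025   # wider collecting section of the drum
rpm = 100            # DC motor speed
surface_speed = math.pi * diameter_m * rpm / 60.0
print(f"collector surface speed ~ {surface_speed:.3f} m/s")   # roughly 0.13 m/s
```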
3D printing the end part with gas channels
The electrospray/electrospinning chamber was closed with an 'end part', which allowed gas to be introduced into the chamber. The cap had a showerhead-like design with multiple holes to diffuse the gas blown into the chamber (Fig. 8). This part had several gas channels, allowing a uniform gas flow. The gas flow opposes the direction of the nanoparticles/nanofibers, enhancing the evaporation rate and thus the drying process.
Support stands
The setup was kept in place and prevented from rolling by two 3D-printed support stands (Fig. 9). The stands were printed from PLA, without using any support materials.
Chemical surface smoothening
In order to smoothen the threads of the modular parts, chloroform vapor was used to treat the 3D prints. It has previously been demonstrated that chloroform can not only reduce the roughness but also increase the tensile strength of specimens in the upright build direction, improving the overall material quality [30]. About 50 mL of chloroform was poured into a glass beaker inside a fume cupboard and heated to 250°C. The beaker was slightly larger in diameter than the printed part being treated. The parts were fixed above the beaker, and each side was left above the chloroform for 10 min. The chemical treatment slightly improved the surface roughness, which facilitated part assembly at the threads.
Assembly and testing
The setup was assembled in both electrospray and electrospinning modes. Application of a lubricant on the threads helped the assembly and disassembly process of the parts. Figure 10 shows the assembled setup in electrospray mode, with the syringe pump.
Conclusion
3D printing offers a low-cost way to easily manufacture a safe and reliable experimental setup similar to commercial ones. This paper presented a method for 3D printing a modular electrospray/electrospinning setup using an inexpensive FDM 3D printer. Both electrospray and electrospinning techniques have been widely used in recent years for drug delivery, tissue engineering, biosensing, and replica molding applications. PLA, PVA and thermoplastic elastomer filaments were used for the 3D printing process, with the filaments costing only $100 USD. An Ultimaker 3 printer (with dual print heads) was employed and the PVA was used as water-soluble support. The electrospray/electrospinning rig was printed in less than a week. Due to the modular nature of the setup, the parts can be exchanged easily, offering easy configuration for different applications. The cap part had several gas channels, allowing a uniform gas flow against the direction of the nanoparticles/nanofibers and enhancing the evaporation rate. The setup was tested successfully in both electrospray and electrospinning modes. However, ABS, PEEK, or ceramic materials would be recommended for 3D printing the central chamber part in order to increase the chemical resistance. Both the .sldprt and .stl files are provided for download.
"Engineering",
"Medicine",
"Materials Science"
] |
Application of Artificial Ground Freezing Technology in Modern Urban Underground Engineering
Based on typical water-rich sandy gravel strata in Beijing, in order to explore the application of the artificial ground freezing method (AGF) in urban large-scale underground engineering, the formation and development of freezing body were analyzed when multirow freezing pipes were working together, and the group effect exhibited during the freezing process was also revealed in this paper. On this basis, the basin-shaped freezing method (BFM) is put forward as an application of AGF used in underground engineering. BFM structure consists of two parts: the frozen curtain (basin wall) around the excavation scope and the horizontal frozen body (basin bottom) at the bottom of the station. Physical model test and numerical simulation were conducted to study temperature field expansion of BFM under two different conditions. The results show the following: (1) The group effect refers to the cooling effect of different freezing pipes influencing each other during freezing process. Under the condition of still water, the group effect expands the freezing area, and it shows the gradual development of freezing from back water surface to front water surface under seepage condition. (2) BFM can effectively play the role of water proofing, and although different parts of basin structure show different frozen order under different conditions, both basin wall and basin bottom can form an effective thickness during the freezing process.
Introduction
Artificial ground freezing technology (AGF) uses a refrigeration system to freeze natural soil, increasing its strength and isolating groundwater. First applied in mining engineering, AGF was later introduced into municipal engineering because of its flexible, controllable, and environmentally friendly characteristics [1].
In the 1970s, AGF was first applied to a subway in China with a freezing length of 90 m, and since 1993 it has been successfully used for the bypasses of many subway lines in Shanghai and Beijing [2]. In 1997, the first horizontal AGF tunnel was carried out in Beijing [3]. In 2003, Guangzhou Metro Line 2 passed through a fractured zone, and AGF was used to strengthen the unstable stratum [4]. In recent years, AGF has become one of the main methods for underground engineering in soft water-bearing strata, used to build bypasses, deal with accidents, and strengthen strata.
According to engineering requirements, experts and scholars have proposed different freezing pipe layouts and studied their temperature field expansion. Among them, Yang and Pi [5] established a mathematical model of a single freezing pipe and studied the influence of parameters on its limit value; Zhou [6,7] studied the closure time and temperature field of freezing pipes around the wellbore under seepage conditions by numerical simulation and model tests; Gao et al. [8] established a numerical model of double freezing pipes in fractured rock under seepage flow and analyzed the influence of fractures on the temperature and seepage fields; Li et al. [9] established a test system of double-row freezing pipes with a quincunx arrangement under seepage conditions, and the main factors affecting the frozen wall were studied by orthogonal tests; Liu et al. [10] studied frozen wall formation under horizontal flow with the pipes in a rectangular layout and compared the frozen thickness and closure time of the upstream and downstream frozen walls. At present, the freezing scope of these projects is quite limited, and there is no case of applying AGF to large-scale underground projects.
Nowadays, China is continuously strengthening groundwater protection. As a pilot city for water resources reform [11], Beijing has imposed heavy taxes on the pumping of groundwater for construction projects. As a result, it has become very important to carry out research on pumping-free (or reduced-pumping) construction for large-scale underground engineering, and AGF is believed to effectively solve this problem.
In this paper, based on the typical water-rich sandy gravel strata in Beijing, in order to solve the difficulty of waterproofing large-scale underground engineering in urban sensitive areas, the formation and development of the freezing body were analyzed when multirow freezing pipes work together, and the group effect exhibited was also revealed. On this basis, the basin-shaped freezing method (BFM) is put forward as an application of AGF in large-scale underground engineering; the temperature field expansion of this method is carefully studied, especially the freezing sequence at different positions of the basin structure under different seepage conditions. It provides a basis for the application of BFM in underground engineering and has certain engineering significance.
Basin-Shaped Freezing Method
In traditional mine shaft construction, freezing pipes are arranged vertically around the shaft wall to form a ring-shaped frozen wall that resists water and soil pressure, whereas for a bypass, freezing pipes with a nearly horizontal inclination are used to isolate groundwater between the two tunnels and form a frozen zone for construction. For large-scale underground engineering, however, these freezing layouts are no longer applicable.
In order to explore the application of AGF in urban large-scale underground engineering, more freezing pipes are needed. When multiple rows of freezing pipes are working as a group, a horizontal frozen body can be formed as shown in Figure 1.
For large-scale underground engineering in sensitive areas of urban cities, groundwater must be isolated not only at the bottom but also around the excavation area. On this basis, the basin-shaped freezing method (BFM) is put forward as an application of AGF in underground engineering. BFM consists of two parts, the basin wall and the basin bottom. The basin wall is located around the excavation area, forming a closed circular freezing curtain by arranging vertical freezing pipes, while the basin bottom is regularly arranged with multiple rows of freezing pipes below the excavation area; a part-freezing method is adopted to form a horizontal freezing plate connecting with the basin wall, as shown in Figure 2.
BFM is suitable for large-scale projects with a certain excavation depth and no effective natural water barrier. The basin wall and basin bottom jointly cut off the hydraulic connection to form a waterproof interval and finally ensure the smooth progress of the project.
Group Effect of Multirow Pipes Freezing
The artificial freezing process is affected by environmental factors, and the freezing process will also change those factors and affect them in turn, so the freezing development eventually shows a doubly nonlinear relationship. In addition, the flow of groundwater will drive the migration of cooled water, making the multirow pipe freezing problem more complicated. Numerical simulation is used to analyze the group effect in the case of multirow pipe freezing.
Numerical Implementation
Method. AGF is a multifield coupling problem involving the temperature field and the seepage field. This practical problem includes physical processes such as phase transition and moving boundaries. Equations (1) and (2) are followed during the process [12]. Due to the complexity of the problem, the following assumptions are made in the numerical simulation process: (1) The soil is a homogeneous, continuous, and isotropic saturated porous medium, and the pores are interconnected with each other; in the process of seepage, the microscopic flow of groundwater within the soil voids is not considered, and only the average seepage velocity over the cross-section is of concern. (2) The ice-water phase transition is assumed to occur only in the interval [−1, 0] °C; the latent heat capacity method is used to treat the heat released as water freezes into ice during the phase transformation, and the migration of unfrozen water during freezing is ignored. (3) The permeability coefficient of the porous medium is considered a function of temperature: when the temperature is lower than 0°C, the permeability coefficient decreases to approximately zero to simulate the transition of water to ice. In the governing equations, ρ_L is the density of water; u is the velocity field vector of fluid seepage; C_L is the specific heat capacity of water; Q is the system heat source; L is the latent heat released when a unit mass of water changes phase; θ_L is the liquid water content; t is time; f is the body force on the fluid element; and c is the unit weight of water.
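A minimal sketch of how assumptions (2) and (3) are typically implemented is shown below; the functional forms, smoothing choices, and parameter names are illustrative assumptions, not the exact expressions used in the paper's model.

```python
import numpy as np

T_S, T_L = -1.0, 0.0          # phase change assumed to occur over [-1, 0] deg C

def liquid_fraction(T):
    """Fraction of pore water that is still liquid at temperature T (deg C)."""
    return np.clip((T - T_S) / (T_L - T_S), 0.0, 1.0)

def apparent_heat_capacity(T, C_unfrozen, C_frozen, latent_heat_vol):
    """Volumetric heat capacity with the latent heat spread over the interval [T_S, T_L]."""
    base = np.where(T > T_L, C_unfrozen,
                    np.where(T < T_S, C_frozen, 0.5 * (C_unfrozen + C_frozen)))
    in_interval = (T >= T_S) & (T <= T_L)
    return base + np.where(in_interval, latent_heat_vol / (T_L - T_S), 0.0)

def permeability(T, k_unfrozen, reduction=1e-6):
    """Permeability collapses to ~0 below 0 deg C so that frozen soil blocks seepage."""
    return k_unfrozen * (reduction + (1.0 - reduction) * liquid_fraction(T))
```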
In the process of artificial freezing, the heat capacity of the porous medium can be expressed as the volume-weighted average of the heat capacities of the mixture, namely the equivalent heat capacity C_ef. Similarly, the thermal conductivity of the porous medium can be expressed as the equivalent thermal conductivity K_ef of the soil skeleton-water-ice mixture, obtained by weighting the corresponding phase properties. In these expressions, φ is the porosity of the porous medium; ρ_s is the density of the soil skeleton; ρ_I is the density of ice; C_S is the specific heat capacity of the soil skeleton; C_I is the specific heat capacity of ice; and K_L is the thermal conductivity of water. During the freezing process, (1) is a heat transfer equation including phase change, which is used to solve for the temperature T, while (2) is a Navier-Stokes equation for momentum conservation of an incompressible fluid, which is used to solve for the pressure p and the velocity u during freezing. The parameters involved in the test process are shown in Table 1.
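The mixing equations themselves are not reproduced in this excerpt; the sketch below therefore assumes straightforward arithmetic volume weighting for the equivalent heat capacity and thermal conductivity, consistent with the description above but not necessarily identical to the paper's formulas.

```python
def equivalent_heat_capacity(phi, theta_L, rho_s, C_s, rho_L, C_L, rho_I, C_I):
    """Volumetric heat capacity of the soil-water-ice mixture.
    phi: porosity; theta_L: liquid fraction of the pore space (0..1)."""
    solid = (1.0 - phi) * rho_s * C_s
    water = phi * theta_L * rho_L * C_L
    ice = phi * (1.0 - theta_L) * rho_I * C_I
    return solid + water + ice

def equivalent_conductivity(phi, theta_L, K_s, K_L, K_I):
    """Thermal conductivity of the mixture under the same volume-weighting assumption."""
    return (1.0 - phi) * K_s + phi * theta_L * K_L + phi * (1.0 - theta_L) * K_I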
Group Effect under Still Water.
COMSOL Multiphysics was used to establish the multirow freezing pipe model with a pipe spacing of 2 m, as shown in Figure 3. During the freezing process, the freezing column radius of one pipe in the group is recorded and compared with that of a single freezing pipe under the same conditions, as shown in Figure 4.
It can be seen from Figure 4 that, during the freezing process of a single pipe, the radius of its freezing column gradually stabilizes at a fixed value after an initial period of rapid development. The radius of the freezing column is 0.45 m in the end under this working condition. For one pipe in the group, the distance between the frozen soil columns is large in the initial stage of freezing, so the mutual influence is limited, and the freezing process at this stage is very similar to that of a single pipe. Subsequently, the expanding radii reduce the ambient temperature and promote the development of freezing. After that, the freezing column radius expands rapidly, and finally the adjacent frozen soil columns become connected.
By comparison, it is found that, under still water, the stable freezing column radius of a single pipe is much smaller than that of one pipe in a group, and the latter is equal to half the pipe spacing. The difference between them is that, for one pipe in a group, the freezing process is strengthened by the superposition of the cooling effects of the surrounding freezing pipes.
Therefore, under the condition of still water, the group effect is manifested by the expansion of each pipe's freezing column radius and the acceleration of the overall freezing process.
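The qualitative comparison above can be reproduced with a very simple conduction-only sketch such as the one below. It is not the paper's COMSOL model: the material properties, pipe and ground temperatures, and domain size are illustrative assumptions, and latent heat and seepage are ignored, so only the relative trend (a pipe in a row develops a larger frozen extent than an isolated pipe) should be expected to match.

```python
import numpy as np

dx = 0.1                        # grid spacing, m
alpha = 1.0e-6                  # assumed effective thermal diffusivity, m^2/s
dt = 0.2 * dx**2 / alpha        # explicit-scheme time step (stability factor < 0.25)
T_ground, T_pipe = 15.0, -30.0  # assumed initial ground and freeze-pipe temperatures, deg C

def simulate(pipe_xy, days=40, Lx=12.0, Ly=6.0):
    nx, ny = int(Lx / dx), int(Ly / dx)
    T = np.full((ny, nx), T_ground)
    pipes = [(int(y / dx), int(x / dx)) for x, y in pipe_xy]
    for _ in range(int(days * 86400 / dt)):
        T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_ground       # far-field boundary
        for j, i in pipes:
            T[j, i] = T_pipe                                      # pipes held at brine temperature
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T)
        T = T + alpha * dt / dx**2 * lap                          # explicit heat conduction step
    return T

def frozen_half_width(T, x0, y0):
    """March from the pipe in the +y direction until the soil is above 0 deg C."""
    j, i = int(y0 / dx), int(x0 / dx)
    r = 0
    while j + r + 1 < T.shape[0] and T[j + r + 1, i] < 0.0:
        r += 1
    return r * dx

single = simulate([(6.0, 3.0)])                                   # one isolated pipe
row = simulate([(3.0 + k * 2.0, 3.0) for k in range(4)])          # four pipes, 2 m spacing
print("isolated pipe, frozen half-width:", frozen_half_width(single, 6.0, 3.0), "m")
print("pipe in a row, frozen half-width:", frozen_half_width(row, 5.0, 3.0), "m")
```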
Group Effect under Seepage Condition.
A 10-row freezing pipe array with a spacing of 2.0 m was established, and the freezing process became stable after a 40-day simulation under seepage conditions (0.5 m/d). In the end, the pipe group froze as a whole, and the temperature field distribution at each stage is shown in Figure 5.
During the first 8 days, the freezing trend is almost parallel to the direction of the water flow, and the frozen soil columns gradually connect, with the connected region increasing along the flow, as shown in Figure 6. It can be considered that the freezing column around each pipe is the superposition of the cooling effects of all the pipes upstream of it. According to the frozen shape, under seepage conditions the cooling capacity of each freezing pipe is transmitted outward at an angle.
With the development of freezing, the cooling capacity diffusion ranges of two adjacent pipes in a row will overlap each other; as a result, the freezing columns of the rear pipes intersect, as shown in Figure 5(c). After 15 days, the overall multirow pipe freezing shows a trend of developing from the back water surface (downstream) to the front water surface (upstream), until all the freezing columns intersect. The reason is that, after the downstream freezing columns intersect, the seepage field in the freezing area is affected. As shown in Figure 7, a deceleration zone of a certain extent appears upstream of the frozen area, thereby promoting the development of freezing in the corresponding area.
After all the freezing columns are connected, the temperature of the frozen core area gradually decreases. During this process, it was found that the freezing development of the first row of freezing pipes lags behind, and the process of connecting it to the frozen area greatly extends the overall freezing time of the entire group. In an actual project, locally densifying the first row of freezing pipes can speed up the overall freezing process.
As shown in Figure 5(f), after the temperature field stabilizes, the isotherm changes sharply on the front water surface and is relatively flat on the back water surface, and the lowest temperature in the core freezing area is around −25°C.
The zero-temperature line surrounds the entire area and forms a bulge on the back water surface.
Under the seepage condition, the cooling capacity of the upstream freezing pipes is carried downstream with the flow, reducing the ambient temperature around the downstream freezing pipes and thereby promoting the freezing of the downstream part. When the number of rows is sufficient, the downstream freezing columns connect first, which reduces the seepage velocity through the upstream freezing pipes and promotes the development of their freezing; eventually, the group as a whole shows a gradual development of freezing from the back water surface to the front water surface.

In summary, the group effect refers to the cooling effects of different freezing pipes influencing each other when multirow freezing pipes work together. Under the condition of still water, the group effect expands the frozen area, while under seepage conditions it appears as the gradual development of freezing from the back water surface to the front water surface. The superposition of the temperature fields changes the freezing behaviour: the development of the overall freezing is not a simple copy of the freezing process of multiple single pipes; as a whole, the group shows a phenomenon different from single-pipe freezing.
Basin-Shaped Freezing Method
Physical model tests and numerical simulations were used to study the freezing process and temperature field expansion of BFM. Usually, a model test can only satisfy similarity for the main governing conditions, following the principle of highlighting the core elements of the simulation and eliminating boundary effects to the greatest extent possible. The geometric similarity ratio selected is C_l = l_m/l_p = 1:10. According to the π theorem, the similarity ratios of the parameters involved in the analysis are shown in Table 2.
Among them, l represents the geometric size, including the length of the freezing pipes and the spacing between them; T is temperature; t is time; and v is the seepage velocity. In addition, the subscript p represents the prototype and m represents the model.
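A small sketch of the model-to-prototype conversion is given below. The specific ratios from Table 2 are not reproduced in this excerpt; the values used here (equal temperatures, t_m/t_p = C_l^2, v_m/v_p = 1/C_l) are the usual diffusion-similarity choices and are assumptions, although they are consistent with the correspondences quoted later in the text (a 5 m/d model seepage standing for 0.5 m/d in the field, and 15 h of model freezing standing for roughly 60 days).

```python
C_l = 1 / 10          # geometric similarity l_m / l_p
C_T = 1.0             # temperatures kept equal between model and prototype (assumption)
C_t = C_l**2          # time ratio t_m / t_p (Fourier/diffusion similarity, assumption)
C_v = 1 / C_l         # seepage velocity ratio v_m / v_p (assumption)

def to_prototype(t_model_h=None, v_model_m_per_d=None):
    """Convert model-scale test quantities to prototype (field) scale."""
    out = {}
    if t_model_h is not None:
        out["t_prototype_days"] = t_model_h / C_t / 24.0
    if v_model_m_per_d is not None:
        out["v_prototype_m_per_d"] = v_model_m_per_d / C_v
    return out

print(to_prototype(t_model_h=15, v_model_m_per_d=5))
# roughly 62 days and 0.5 m/d, matching the correspondences reported in the text
```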
Physical Model.
The physical model test uses a seepage freezing model test platform. The platform is composed of a model test box, a water tank, a brine tank, a refrigeration compressor, and a cooling tower. Among them, the water tank provides a circulating water system to simulate the groundwater seepage condition; in the brine tank, the brine is cooled by the compressor and circulated through the freezing pipes. The test box is the main area for constructing the strata, arranging the freezing pipes, and conducting the physical model tests.
The model box is 10 m long, 3 m wide, and 2.5 m high, as shown in Figure 8. The layout of the strata in the model box is shown in Figure 9. From top to bottom, there are a silty sand layer, an upper water-sealing layer, a sandy gravel layer, a lower water-sealing layer, and a sandy gravel layer. Water enters from one side of the tank and flows out from the other side to simulate groundwater seepage. The effective flow section height of this model is 1.5 m. The layout of the freezing pipes is shown in Figure 10; there are 90 freezing pipes on the basin wall and 180 freezing pipes on the basin bottom, for a total of 270 freezing pipes. Among them, the spacing between the basin wall freezing pipes is 0.135 m; the freezing pipes inside the basin bottom are evenly arranged with a spacing of 0.2 m, and additional densified freezing pipes are arranged at the corner points. The basin bottom freezing pipes are arranged in a regular array on the right side of the axis of symmetry and in a quincunx arrangement on the left side; in the subsequent analysis, only the right side is analyzed. The freezing pipes at the basin bottom adopt the partial-freezing method, with an effective freezing length of 0.5 m, while that of the basin wall is 1.3 m. Figure 11 shows the physical model test box before capping, from which the arrangement of the freezing pipes can be seen.
Freezing Sequence under Still Water.
The physical model test was carried out twice, under the conditions of still water and a seepage velocity of 5 m/d. According to similarity theory, these two tests correspond to still water and a seepage velocity of 0.5 m/d in actual engineering. The physical model test can only monitor a limited number of temperature measurement points, whereas the numerical simulation can describe the whole temperature field more intuitively and completely. In the following, the physical model test mainly recorded 3 key cross-sections of the structure, which are shown in Figure 12.
The physical model test records the temperature at different positions during the test, while in the numerical simulation the freezing range is depicted on the same section with a red dashed line according to the test results, as shown in Figure 13. Under still water, all the freezing columns froze and connected within 15 hours. The analysis selected three key time points of 5 hours, 10 hours, and 15 hours; according to similarity theory, they represent about 20 days, 40 days, and 60 days in actual engineering. As can be seen from the figure, the freezing range of the cross-sections determined from the numerical simulation results is basically consistent with the temperatures of the measurement points in the physical model test, which verifies the validity of the simulation.
After freezing for 20 days, the corner freezing pipes at the lower and upper end surfaces of the basin bottom had connected; on the upper end of the basin wall, most of the freezing pipes had connected and the temperature was around −1°C. After freezing for 60 days, the measuring points in all areas on the 3 sections had fallen below 0°C, which can be considered as the formation of the basin-shaped freezing structure. The numerical simulation results are shown in Figure 14, from which we can see that a basin structure with a certain thickness is finally formed under the condition of still water.
According to the comparison results, the freezing development is closely related to the spacing of the freezing pipes. The spacing at the basin wall is small, so freezing occurs there first; the temperature gradually decreases with subsequent development, and the freezing range gradually expands. When the freezing ranges at the upper and lower end surfaces of the basin bottom are connected, it can be considered that the basin bottom has formed. During the test, the lower end of the basin bottom is in full contact with groundwater, which is conducive to the diffusion of the cooling capacity, so its freezing is slower than that of the upper end of the basin bottom. Therefore, under still water conditions, the basin wall is formed before the basin bottom.
Freezing Sequence under Seepage Condition.
The freezing process of the key sections under the seepage condition (0.5 m/d) is shown in Figure 15. Four time points of 5 hours, 10 hours, 15 hours, and 20 hours were selected in the physical model test; according to similarity theory, they respectively represent 20 days, 40 days, 60 days, and 80 days.
After freezing for 20 days, the upper and lower end surfaces of the basin bottom were connected only at the corners, similar to the still water condition; the upper end surface of the basin wall had not yet frozen. After freezing for 40 days, except for the first row on the front water surface, all the freezing pipes at the lower end of the basin bottom had connected, showing a trend of developing from the surroundings toward the central part; on the upper end of the basin bottom, connection occurred only locally around the downstream basin wall, and the freezing was slower than at the lower end of the basin bottom; on the upper end surface of the basin wall, the downstream section connected first. After freezing for 60 days, only the first-row freezing pipes on the front water surface had not connected on the lower and upper end surfaces of the basin bottom; after the downstream basin wall froze on the upper end surface of the basin wall, freezing gradually extended to the adjacent basin wall. After freezing for 80 days, all the measurement points on these 3 cross-sections fell below 0°C, and it can be considered that a basin-shaped freezing structure had been formed. The numerical simulation results are shown in Figure 16.
Therefore, under the seepage condition, the order of freezing connection at different positions is as follows: downstream basin wall, basin bottom, back water basin wall, and front water basin wall. From the comparison results, it can be seen that the freezing range of the numerical simulation is basically consistent with the temperature measurement points of the physical model test. A few unfrozen measurement points are also near 0°C, so the error is very small. The established numerical model can effectively reflect the development process of basin freezing and predict the development of the temperature field. After the model test, the final basin-shaped structure (one side) is shown in Figure 17.
Conclusion
In this paper, based on the typical water-rich sandy gravel strata in Beijing, in order to solve the difficulty of waterproofing large-scale underground engineering in urban sensitive areas, the formation and development of the freezing body were analyzed when multirow freezing pipes work together, and the group effect exhibited was also revealed. On this basis, the basin-shaped freezing method (BFM) is put forward as an application of AGF in underground engineering; the temperature field expansion of this method is studied, especially the freezing sequence at different positions of the basin structure under different seepage conditions. The main conclusions are as follows: (1) The group effect refers to the cooling effects of different freezing pipes influencing each other during the freezing process. Under the condition of still water, the group effect expands the freezing area, while under seepage conditions it shows the gradual development of freezing from the back water surface to the front water surface. (2) BFM can effectively play the role of waterproofing; although different parts of the basin structure freeze in different orders under different conditions, both the basin wall and the basin bottom can form an effective thickness during the freezing process. Under the seepage condition, the order of freezing connection at different positions is as follows: downstream basin wall, basin bottom, back water basin wall, and front water basin wall.
Data Availability
The data used to support the findings of the study are included in the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Engineering"
] |
Deformer: Denoising Transformer for Improved Audio Music Genre Classification
Audio music genre classification is performed to categorize audio music into various genres. Traditional approaches based on convolutional recurrent neural networks do not consider long temporal information, and their sequential structures result in longer training times and convergence difficulties. To overcome these problems, a traditional transformer-based approach was introduced. However, this approach employs pre-training based on momentum contrast (MoCo), a technique that increases computational costs owing to its reliance on extracting many negative samples and its use of highly sensitive hyperparameters. Consequently, this complicates the training process and increases the risk of learning imbalances between positive and negative sample sets. In this paper, a method for audio music genre classification called Deformer is proposed. The Deformer learns deep representations of audio music data through a denoising process, eliminating the need for MoCo and additional hyperparameters, thus reducing computational costs. In the denoising process, it employs a prior decoder to reconstruct the audio patches, thereby enhancing the interpretability of the representations. By calculating the mean squared error loss between the reconstructed and real patches, Deformer can learn a more refined representation of the audio data. The performance of the proposed method was experimentally compared with that of two distinct baseline models: one based on S3T and one employing a residual neural network-bidirectional gated recurrent unit (ResNet-BiGRU). The Deformer achieved an 84.5% accuracy, surpassing both the ResNet-BiGRU-based (81%) and S3T-based (81.1%) models, highlighting its superior performance in audio classification.
Introduction
Developments in multimedia technology have resulted in a sharp increase in the variety of digital music and its listening volume, necessitating urgent advancements in music information retrieval (MIR), which involves utilizing computer technology to automatically analyze, recognize, retrieve, and understand music. Audio music genre classification is a MIR task that involves assigning labels to each piece of music based on characteristics such as genre [1,2], mood [3,4], and artist type [5,6]. Audio music genre classification enables the automatic categorization of audio music based on different styles or types, facilitating a deeper understanding and organization of music libraries.
The evolution of deep learning has profoundly affected music genre classification, ushering in an era of automatic feature learning. Convolutional neural networks (CNNs) are proficient in discerning the complex spatial features inherent in audio data [7][8][9][10]. However, they are limited in their ability to account for the long-term temporal information inherent in musical compositions. To address this limitation, convolutional recurrent neural networks (CRNNs) [11][12][13], which combine the strengths of both CNNs and recurrent neural networks (RNNs), are employed in music classification. In the specific context of music genre classification, CRNNs have demonstrated a marked advantage over CNNs, proficiently discerning both localized features and short-term temporal inter-relationships. Unfortunately, CRNNs still struggle to capture the long-term temporal dependencies that are often crucial in complex musical compositions.
Transformer-based music genre classification approaches, which are fortified with attention mechanisms, have been introduced to address these issues; they have achieved success, particularly in recognizing long-term information in music. Various transformer-based models, such as MusicBERT [14] and MidiBERT [15], have been developed to focus on different aspects of music genre classification. MusicBERT is equipped with specialized encoding and masking techniques that capture complex musical structures, whereas MidiBERT focuses on single-track piano scores. These models can effectively recognize long-term dependencies in music, especially in the context of symbolic music data such as the Musical Instrument Digital Interface (MIDI). Most existing transformer-based models for music classification are primarily tailored for symbolic music data such as MIDI, and there is a notable lack of models that can handle continuous audio data.
A Swin transformer-based approach has emerged as a targeted solution to solve the issues of traditional transformer-based models in handling continuous high-dimensional audio data [16]. This advanced architecture employs a pre-training strategy known as momentum contrast (MoCo), which is a form of contrastive learning. This strategy aims to create similar representations for similar data points while pushing dissimilar data points apart in the feature space by maintaining a dynamic dictionary. Unfortunately, the MoCo pre-training strategy presents its own set of challenges. First, it incurs significantly increased computational costs, owing to the need to maintain and update this large dictionary. Second, the contrastive loss function can be sensitive to hyperparameter choices, thereby complicating the model optimization process. Third, MoCo-based approaches typically suffer from low interpretability, making it difficult to understand the model decisions or identify the learned features that contributed to the classification results.
Additionally, denoising has been extensively researched. Denoising approaches based on self-supervised learning [17,18] can effectively capture features and learn deep representations through the noise-removal process. They have many similarities with self-supervised pre-training strategies, thus making the integration of the denoising concept into pre-training feasible.
In this paper, a novel method for audio music genre classification is proposed. The proposed method is characterized by denoising, which not only reduces computational costs compared to MoCo-based strategies but also offers robust performance that is not hampered by hyperparameter dependency. Uniquely, the proposed method incorporates a prior decoder, which substantially enhances the interpretability of the decision-making process. The main contributions of this method are as follows.
• The proposed method includes a novel pre-trained model called Deformer and utilizes unsupervised learning to fully leverage unlabeled data for pre-training.
• The proposed method design includes a prior decoder that assists Deformer in completing the pre-training effectively; it harnesses the potential of transformers in processing image-like audio data. Notably, this prior decoder improves the interpretability of the results obtained by the method.
• The proposed method was experimentally proven to not only lower the computational cost but also achieve better results compared with existing approaches.
The remainder of this paper is organized as follows. Section 2 describes related work on audio-based music genre classification, and Section 3 introduces the proposed music-classification method based on audio data. Then, Section 4 details the experimental process and results. Finally, Section 5 concludes the paper.
Related Works
This section provides an overview of the evolution of classification techniques for musical audio data, tracing the transition from methods relying on manual feature extraction to end-to-end models.
Music Genre Classification Based on Audio Data
Researchers achieved music classification by converting audio data into spectrograms and Mel Frequency Cepstral Coefficients (MFCC) images and subsequently applied texture analysis approaches for feature extraction. Various classifiers, such as K-nearest neighbors (KNN), Gaussian models, and support vector machines (SVMs), were utilized for classification. Notably, the KNN algorithm was effective in classifying classical music [19]. This approach was extended by introducing local binary patterns (LBPs) to extract textural features from spectrograms [20]. The extended version explored partitioning techniques to capture local information, emphasizing the importance of local features in enhancing the classification performance. Although these approaches were effective in specific scenarios, they were generally constrained by their focus on timbral features and failed to capture aspects of music that were potentially crucial for a more comprehensive understanding and music classification.
Informative musical patterns could be automatically identified using CNNs [7]. Nonetheless, these rudimentary CNN models were restricted in generalizing to previously unencountered music datasets. To overcome the limitations of these models in generalization and handling long-term temporal information, researchers proposed a hybrid model that combined residual neural networks (ResNets) and gated recurrent units (GRUs). This model used visual spectrograms as inputs and aimed to analyze music data more comprehensively. This approach [13] could improve the performance of music-recommendation systems through more accurate genre classification, thereby addressing the shortcomings of traditional machine learning and basic CNN models in handling the complexity of music data.
By contrast, an approach has been proposed using S3T [16], which is a self-supervised pre-training approach with the Swin Transformer that effectively handles long-term information in music classification. This approach primarily aimed to learn meaningful music representations from a large corpus of unlabeled music data. It employed the momentum-based paradigm MoCo to serve as a feature extractor in the time-frequency domain of music and utilized a music data-augmentation pipeline and two specially designed preprocessors to further optimize the learning of music representations. However, this approach faced challenges such as an increased computational overhead owing to the management of large dynamic dictionaries, sensitivity to hyperparameter selection in the optimization process, and a lack of model interpretability.
Progress in music classification has shifted from a focus on timbral features to more advanced and automated feature identification and extraction. Each stage of this evolution was associated with different challenges, ranging from limited generalization and a narrow focus on certain musical aspects to increased computational demands and complex training requirements. This underscores the ongoing need for more accurate and computationally efficient solutions that overcome the persistent challenge of balancing model complexity and interpretability.
Comparison of Music Genre Classification Based on Audio Data
The comparison primarily considered two dimensions: one from the perspective of data types and another from the perspective of model structures. Regarding the input data, some researchers treated music classification as a visual task, converting audio music data into spectrograms [13,16,19,20] and MFCC images [7]. These approaches emphasized the visual characteristics of music data. Spectrograms could capture the local characteristics of audio signals in the time-frequency dimension in an intuitive and computationally efficient manner, thereby providing a robust and information-rich feature representation for music-classification tasks.
Second, there was an evolution from initially employing classifiers [19], such as KNN, the Gaussian Mixture Model (GMM), and SVM, to incorporating deep CNN structures and combining them with RNNs. RNNs, however, faced challenges such as difficulty in learning long-term dependencies. These shortcomings were addressed by introducing S3T-based models, which offered advantages in capturing long-range dependencies. Unfortunately, these models faced challenges in terms of computational costs and robustness as they evolved from simple to complex and from singular to multifaceted. Especially when considering how to effectively capture and process the long-term information of music, these technological variances and advancements became pivotal in addressing the challenge. Notably, the proposed method offers a distinct approach that specifically addresses current limitations by expanding upon existing methodologies. The specifications of the proposed and existing approaches are listed in Table 1.
Denoising Transformer-Based Audio Music Genre Classification
The architecture and training strategies for the Deformer-based method are detailed next. First, the data representation techniques are discussed; then, the pre-training and fine-tuning stages are outlined.
Overview
A method utilizing pre-training techniques based on Deformer was proposed for audio music genre classification. The proposed method consists of pre-training and fine-tuning stages, as shown in Figure 1. First, unlabeled or labeled audio data are preprocessed into a normalized Mel spectrogram, and a noise-injection operation is applied in the pre-training stage, during which Deformer learns deep representations of audio music from unlabeled audio data. For this, a prior decoder is utilized to restore the denoised Mel spectrogram from the low-dimensional hidden states obtained from Deformer. In the fine-tuning stage, the pre-trained Deformer and classifier are further trained using labeled audio data to perform music genre classification. The flowchart for the proposed method is shown in Figure 2.
Preprocessing and Noise Injection
The preprocessing is employed in both the pre-training and fine-tuning stages, as illustrated in Figure 3. Initially, audio data are converted into Mel spectrograms whose two dimensions correspond to time and frequency, respectively. These spectrograms are then resized using the librosa library to new target dimensions, which are determined based on the experimental hardware. It is worth noting that the target dimensions should be chosen carefully; excessively large dimensions increase computational resource usage. Finally, the resized Mel spectrograms are normalized by scaling their values to fit within a range from zero to one.
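To make the pipeline concrete, the following is a minimal sketch of the preprocessing and noise-injection steps described above. The function names, the scipy-based resize, the sampling rate, the number of Mel bands, and the 0.75 noise-ratio default are illustrative assumptions, not the exact implementation.

```python
import numpy as np
import librosa
from scipy.ndimage import zoom

def preprocess(path, target=224, sr=22050, n_mels=128):
    """Audio file -> resized, normalized Mel spectrogram scaled to [0, 1]."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)            # (n_mels, frames)
    mel_db = zoom(mel_db, (target / mel_db.shape[0], target / mel_db.shape[1]))
    return (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min() + 1e-8)

def inject_noise(spec, patch=16, ratio=0.75, seed=0):
    """Add N(0, 1) noise to a random fraction of patches (pre-training only)."""
    rng = np.random.default_rng(seed)
    noisy = spec.copy()
    n = spec.shape[0] // patch                                # patches per side
    chosen = rng.permutation(n * n)[: int(ratio * n * n)]
    for k in chosen:
        r, c = divmod(k, n)
        noisy[r*patch:(r+1)*patch, c*patch:(c+1)*patch] += rng.standard_normal((patch, patch))
    return noisy
```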
Pre-Training Stage
The objective of the pre-training stage is to enable Deformer to understand the deep representation of audio music through unsupervised denoising. The role of the prior decoder within this framework is to restore patches from the low-dimensional hidden states obtained from Deformer during pre-training. An autoencoder (AE), which comprises an encoder and a decoder, was designed to train the decoder, as shown in Figure 4. The encoder consists of max-pooling and convolutional layers, and the decoder consists of convolutional and up-sampling layers. The encoder compresses the input patches into low-dimensional vectors, and the decoder restores the low-dimensional hidden states to the original patches. The mean squared error (MSE) loss is calculated to update the AE parameters.
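The following is a minimal PyTorch sketch of such a patch autoencoder. The layer widths and kernel sizes are illustrative assumptions rather than the paper's exact configuration, and only the decoder half would later be reused as the prior decoder.

```python
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    """Encoder compresses a 1x16x16 patch into a hidden vector; decoder restores the patch."""
    def __init__(self, hidden=768):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 x 8 x 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 x 4 x 4
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),                  # 128 x 4 x 4
            nn.Flatten(), nn.Linear(128 * 4 * 4, hidden),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 128 * 4 * 4), nn.Unflatten(1, (128, 4, 4)),
            nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, patches):               # patches: (B, 1, 16, 16)
        return self.decoder(self.encoder(patches))

# Training: minimize nn.functional.mse_loss(model(patches), patches) over the patch batches.
```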
Fine-Tuning Stage
In the fine-tuning stage, the Deformer that has already learned the deep representation of audio music is applied to music genre classification. Figure 6 shows the process of fine-tuning the pre-trained Deformer into a classification network. Unlike the pre-training stage, the fine-tuning stage is not unsupervised, and the prior decoder is not utilized. Normalized Mel spectrogram patches without added noise are fed into the model, along with a classifier token ([CLS]) [21]. The [CLS] token serves as a condensed representation of all input patches. As opposed to pre-training, which involves decoding layers, fine-tuning employs a genre prediction layer connected to the final Deformer position. This layer is a linear component that uses a SoftMax function to predict the probability distribution over the genre classes based on the [CLS] token's hidden state. The cross-entropy loss is then computed using the predicted and target genres to fine-tune the Deformer and the genre prediction layer, enhancing Deformer's ability to classify music genres effectively.
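A minimal sketch of this classification head is shown below. It assumes the Deformer encoder returns one hidden state per position with the classifier token first; all names, shapes, and defaults are placeholders for illustration.

```python
import torch.nn as nn
import torch.nn.functional as F

class GenreClassifier(nn.Module):
    """Linear genre-prediction layer over the classifier token's hidden state."""
    def __init__(self, deformer, hidden=768, num_genres=10):
        super().__init__()
        self.deformer = deformer
        self.head = nn.Linear(hidden, num_genres)

    def forward(self, patches):
        states = self.deformer(patches)     # (B, 1 + num_patches, hidden), classifier token first
        cls = states[:, 0]                  # condensed representation of all input patches
        return self.head(cls)               # logits; the softmax is folded into the loss

# Fine-tuning step: loss = F.cross_entropy(model(patches), genre_labels)
```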
Experiments and Results
Three experiments, namely, prior decoder training, Deformer pre-training, and Deformer fine-tuning, were performed to thoroughly evaluate the effectiveness of the proposed Deformer-based method in terms of audio music genre classification. First, in the prior decoder training experiment, an autoencoder was trained to convert low-dimensional hidden states into patches of normalized Mel spectrograms. In the Deformer pre-training experiment, Deformer was pre-trained to understand musical deep representations by restoring the original Mel spectrograms from noisy Mel spectrograms. Finally, in the Deformer fine-tuning experiment, the pre-trained Deformer classified the audio music genre through supervised learning. To evaluate the performance and effectiveness of the proposed method, two baseline models are introduced for comparison. The first model [13] utilizes a residual neural network-bidirectional gated recurrent unit (ResNet-BiGRU), while the second relies on S3T [16]. These models serve as benchmarks, helping to underscore the advantages of the proposed technique for music genre classification.
Experimental Environment
Table 2 summarizes all the hyperparameters used in the three experiments. The autoencoder comprises an encoder and a decoder; the encoder consists of three convolutional layers utilizing the same kernels but with different channel counts. The Deformer hyperparameters include 196 patches, a patch size of 16 × 16, a hidden size of 768, an intermediate size of four times the hidden size, 12 hidden layers, and 12 attention heads.
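These values correspond to a ViT-Base-style configuration (a 224 × 224 input with 16 × 16 patches yields 196 patches). The following sketch only illustrates how the listed hyperparameters fit together, using the Hugging Face ViTConfig class as a hypothetical stand-in; Deformer itself is not claimed to be this class.

```python
from transformers import ViTConfig

# Hypothetical stand-in configuration mirroring the reported Deformer hyperparameters.
deformer_like_config = ViTConfig(
    image_size=224,            # resized Mel spectrogram
    patch_size=16,             # 224 / 16 = 14, so 14 * 14 = 196 patches
    num_channels=1,            # single-channel spectrogram
    hidden_size=768,
    intermediate_size=4 * 768,
    num_hidden_layers=12,
    num_attention_heads=12,
)
```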
During the training of the three models, the resized Mel spectrogram size was set to 224 × 224, which can be adjusted according to the hardware of the experimental environment. As mentioned before, noise injection was applied during pre-training. The noise injection ratio was determined experimentally, given that the highest classification performance was obtained when the ratio was set to 0.75. Similarly, 0.75 was also used as the mask parameter in [22], which likewise achieved good results. As the input to the prior decoder is a patch, it allows for a higher batch size compared to the others. The parameters of the AdamW optimizer were nearly identical across the experiments. When setting the learning rate, it was considered that pre-training requires warmup. Unlike the other approaches, which use a fixed learning rate, pre-training employed a dynamically changing learning rate based on the WarmupDecayLR scheduler.
The experiments were conducted on a system running Windows 10 with 2 Xeon(R) Silver 4310 CPUs, 4 NVIDIA GeForce RTX 3090 GPUs, and 128 GB of DDR4 RAM. The proposed method was developed in Python 3.10.12 and implemented using the PyTorch 2.0.0 platform, complemented by the DeepSpeed acceleration engine for enhanced performance. In addition to conducting these experiments, a comparative assessment was performed with two baseline models. The first baseline model [13] employed a hybrid approach, combining ResNet18 and Bi-GRU. ResNet18 utilizes residual connections, comprising 18 weighted layers, including an initial convolutional layer, a max-pooling layer, 4 convolutional blocks (each with 2 convolutional layers), an average pooling layer, and a fully connected layer. Bi-GRU is a recurrent neural network designed for processing sequential data, consisting of a GRU layer and a fully connected layer.
The second baseline model [16] adopted S3T, leveraging the Swin Transformer as a feature extractor in the time-frequency domain of music. It integrates a momentum-based MoCo paradigm for enhanced performance. The feature extractor follows the Swin-T configuration, using the compact version of the Swin Transformer with a hidden channel number of 96. Its stages comprise 2, 2, 6, and 2 layers, respectively, ensuring increased efficiency.
Experimental Data
Two distinct datasets, each divided into an 80% training set and a 20% test set, were employed in the genre classification experiment involving audio music data. The MAESTRO dataset [23] was used for feature extraction via an autoencoder and for pre-training Deformer. This dataset encompasses a broad spectrum of musical instruments and styles, with contributions from both professional and amateur musicians. Mel spectrograms derived from raw audio files were used as inputs. The training set was used for model optimization using techniques such as gradient descent, and the test set was designated for performance evaluation using metrics such as MSE. The GTZAN music dataset [24] was exclusively used to fine-tune Deformer. Renowned in genre classification, this dataset consists of one thousand 30-second audio segments across ten distinct genres, including blues, classical, and hip hop. The audio clips were transformed into Mel spectrograms to serve as inputs for the model. The training process involved iterative Deformer updates based on loss minimization, and the test phase assessed the genre-classification capabilities of Deformer in terms of the precision and recall metrics.
Experimental Results
The results from the autoencoder training, pre-training, and fine-tuning experiments were analyzed. The initial results indicated a rapid loss function convergence, validating the effectiveness of decoder training. Further findings from the pre-training and fine-tuning processes revealed that Deformer exhibited superior performance in music data processing, outperforming the baseline models in multiple key performance metrics.
Prior Decoder Training Results
Figure 7 shows the prior decoder training experiment results, which involved 8000 steps. The MSE loss decreased rapidly from 0.14 to 0.02. Subsequently, the loss continued to decrease at a slower pace, eventually converging to approximately 0.001. Figure 8 shows the test results of the prior decoder experiment and the reference for comparison. Figure 8a shows the Mel spectrograms assembled from the patch output reconstructed by the decoder, while Figure 8b shows the original Mel spectrograms used for comparison with the reconstructed version; subtle local differences can be observed in the areas marked with red boxes. Interestingly, the Mel spectrograms assembled from the reconstructed patches in Figure 8a were almost indistinguishable from the original Mel spectrograms in Figure 8b, demonstrating that the prior decoder could effectively reconstruct the Mel spectrograms from the low-dimensional hidden states.
Pre-Training Results
Figure 9 shows the two distinct phases in the loss curve during the model training process. Initially, the loss value rapidly decreased from a higher level to approximately 0.15, after which the rate of decline slowed significantly and eventually stabilized at approximately 0.01 after approximately 8000 steps. Figure 10 shows the pre-training stage of Deformer using Mel spectrograms constructed from the patches. Figure 10a shows the Mel spectrograms assembled from patches injected with noise, which served as the inputs to the model during the pre-training stage. Figure 10b shows the Mel spectrograms assembled from the denoised patches, which are the outputs of Deformer. Finally, Figure 10c shows the original noise-free Mel spectrograms. The principal features and trends shown in Figure 10c are successfully captured in Figure 10b, albeit with some loss of detail, demonstrating the capabilities of Deformer in terms of noise reduction and learning meaningful representations of music data. These observations further emphasize the effectiveness of the pre-training stage as well as the readiness of the pre-trained Deformer for the subsequent fine-tuning stage.

To assess the classification efficacy of Deformer across different music genres, the confusion matrix depicted in Figure 12 is provided. The confusion matrix presents true-positive, true-negative, false-positive, and false-negative results, providing a clear evaluation of the classification performance. The fact that the predicted results are clearly distributed along the diagonal of the confusion matrix indicates that most of the predictions are correct. Deformer exhibited exceptional performance in the "classical" and "pop" categories, achieving impeccable accuracy with zero misclassifications within these genres. This outcome highlights its acute understanding of the unique attributes associated with these music genres. However, it exhibited inaccuracies within the "rock" and "blues" genres. Specifically, a few samples falling under the "rock" category were incorrectly classified as "blues" and "metal". Likewise, a subset of "blues" samples was inaccurately classified as "jazz" and "metal". These misclassifications suggest potential limitations of Deformer, particularly when differentiating between genres having nuanced or overlapping traits. Analyzing the confusion matrix is vital as it paves the way for prospective refinements and emphasizes the need to improve the discriminatory capabilities of the model when classifying music belonging to closely related genres such as "rock" and "blues".

Table 3 presents the accuracy, precision, recall, and F1 scores for the proposed pre-trained Deformer, Deformer without pre-training (used for the ablation experiment), and ResNet-BiGRU and S3T as two baseline models. To complete the comparison, two additional results [25,26] are given, which demonstrated high accuracy in audio music genre classification. The pre-trained Deformer reached a classification accuracy of 84.5%, which is 3.4% higher than that of ResNet-BiGRU (81%), 3.3% higher than that of S3T (81.1%), 0.6% higher than that of M2D [25] (83.9%), and 4.8% higher than that of the Jukebox model pre-trained with CALM (79.7%) [26]. The pre-trained Deformer significantly outperformed its non-pre-trained counterpart in terms of accuracy, precision, recall, and F1 score, with the latter only achieving an accuracy and recall of 0.37, a precision of 0.3334, and an F1 score of 0.3464. This comparison highlights the importance of pre-training in enhancing the performance of Deformer for music classification. It is worth noting that all data presented in Table 3 were obtained through testing on the GTZAN dataset.
Conclusions
A pre-trained model, Deformer, was introduced to address the specific challenges associated with existing Swin Transformer-based approaches to music genre classification within the context of MIR. These challenges include the computational burden associated with managing large dynamic dictionaries, the sensitivity of the contrastive loss function to hyperparameter choices, and the low level of model interpretability commonly observed in MoCo-based approaches. Utilizing a two-stage process of pre-training and fine-tuning, the proposed model leveraged unlabeled audio data during the pre-training stage. The experimental results underscore the significance of incorporating Deformer in the realm of deep learning architectures for audio music classification. The proposed method achieved an accuracy of 84.5%, outperforming the ResNet-BiGRU-based (81%) and S3T-based (81.1%) models. This highlights the substantial contribution of Deformer to superior performance in audio classification, marking a noteworthy advancement over traditional approaches.
Regarding its limitations, the proposed model was not assessed on larger or more diverse datasets, leaving open questions about its generalizability. Future research directions could involve restructuring the architecture of the model to enable it to better handle genres that have subtle similarities, such as "rock" and "blues". The focus should be on enhancing the ability of the model to distinguish between closely aligned genres. Further improvements can be made by evaluating the performance of the model across a more diverse set of music genres and use cases. By pursuing these avenues, this research would not only add to the growing literature in the domain of music genre classification but also set a strong performance standard for subsequent investigations.
Figure 1 .
Figure 1. Overview of the method: pre-training and fine-tuning stages.
Figure 2 .
Figure 2. Flowchart of pre-training and fine-tuning under the proposed method.
Figure 3 .
Figure 3. Preprocessing and noise injection of data. Noise injection is applied only in the pre-training stage. The normalized Mel spectrograms are divided into equal-sized patches through a matrix-division operation; each patch is derived by dividing the Mel spectrogram into sections.
Figure 4 .
Figure 4. Structure of the autoencoder. To complete the denoising, patches of the noisy Mel spectrograms are passed into Deformer, as shown in Figure 5. The position embedding layer, which is trainable, utilizes absolute numerical embeddings to integrate positional information into these patches. The transformer layers use multi-head self-attention and feed-forward neural networks to process the embedded patches and produce low-dimensional hidden states. Subsequently, the prior decoder restores these low-dimensional hidden states back into restored patches. Then, the MSE loss is calculated between the restored patches and the original patches to train Deformer. This training strategy allows Deformer to gain a deep understanding of the contextual relationships and interdependencies among patches.
Figure 5. Algorithm 1.
Figure 5. Pre-training stage of Deformer. Algorithm 1 describes the pre-training stage, where Deformer(·) represents the Deformer model and θ represents the Deformer parameters; S denotes the number of training steps. x̃ represents one of the patches from the noisy Mel spectrograms, and x* is the restored patch. MSE(·) represents the loss function that calculates the loss between x* and x, where x* is generated only from the noise-injected input and x is the patch of the normalized Mel spectrogram without noise. θ is updated with the gradient step θ ← θ − η·∇θ MSE(x*, x), where η is the learning rate.
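A minimal sketch of one such pre-training update is given below, assuming the Deformer encoder returns one hidden state per patch and the previously trained prior decoder is kept frozen; all names and shapes are illustrative.

```python
import torch.nn.functional as F

def pretraining_step(deformer, prior_decoder, clean_patches, noisy_patches, optimizer):
    """One Algorithm-1-style update: denoise patches and minimize MSE against clean ones."""
    hidden = deformer(noisy_patches)                 # (B, num_patches, hidden_size)
    restored = prior_decoder(hidden.flatten(0, 1))   # restore each patch from its hidden state
    target = clean_patches.flatten(0, 1)             # (B * num_patches, 1, 16, 16)
    loss = F.mse_loss(restored, target)              # compare with the noise-free patches
    optimizer.zero_grad()
    loss.backward()                                  # gradient of the MSE loss w.r.t. theta
    optimizer.step()                                 # theta <- theta - lr * gradient
    return loss.item()
```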
Figure 6 .Algorithm 2
Figure 6. Fine-tuning stage of Deformer. Algorithm 2 details the fine-tuning process, where x represents the normalized patches, y the target genre, and ŷ the genre predicted by Deformer; the cross-entropy loss between ŷ and y is calculated for performing updates.
Figure 7 .
Figure 7. MSE loss in the prior decoder training experiment.
Figure 8 .
Figure 8. Output of the decoder. (a) Reconstructed Mel spectrograms and (b) original Mel spectrograms. The red boxes indicate subtle differences between (a) and (b).
Figure 9 .
Figure 9. MSE loss in the pre-training experiment.
Fine-Tuning Results
Figure 11 shows the loss changes of Deformer during fine-tuning. The orange line (pre-trained) demonstrates a rapid decline in loss during fine-tuning, indicating a high level of learning efficiency. The blue line (without pre-training) exhibits a slower loss decrease. At 20,000 steps, the fine-tuning process based on pre-training demonstrated a significant performance advantage compared to that without pre-training.
Figure 11 .
Figure 11. Cross-entropy loss during fine-tuning of Deformer and in the ablation experiment.
Figure 12 .
Figure 12. Confusion matrix of audio music genre classification performed by Deformer.
Table 1 .
Differences between existing approaches and the proposed method.
Each patch has dimensions of W/√N × H/√N, where W and H are the width and height of the transformed spectrogram and N is the number of patches (for a 224 × 224 spectrogram and 196 patches, each patch is 16 × 16). Patches follow the order from left to right and top to bottom. The noise ratio dictates the fraction of patches that receive noise: that fraction of the patches is injected with noise drawn from a Gaussian distribution N(0, 1), while the remaining patches are unchanged. Finally, a noisy Mel spectrogram, which is the combination of all patches, is obtained.
Table 2 .
Hyperparameters used in the autoencoder, pre-training, and fine-tuning experiments.
Table 3 .
Performance comparison: audio music classification models. | 6,188 | 2023-11-25T00:00:00.000 | [
"Computer Science"
] |
DocNLI: A Large-scale Dataset for Document-level Natural Language Inference
Natural language inference (NLI) is formulated as a unified framework for solving various NLP problems such as relation extraction, question answering, summarization, etc. It has been studied intensively in the past few years thanks to the availability of large-scale labeled datasets. However, most existing studies focus on merely sentence-level inference, which limits the scope of NLI's application in downstream NLP problems. This work presents DocNLI -- a newly-constructed large-scale dataset for document-level NLI. DocNLI is transformed from a broad range of NLP problems and covers multiple genres of text. The premises always stay in the document granularity, whereas the hypotheses vary in length from single sentences to passages with hundreds of words. Additionally, DocNLI has pretty limited artifacts which unfortunately widely exist in some popular sentence-level NLI datasets. Our experiments demonstrate that, even without fine-tuning, a model pretrained on DocNLI shows promising performance on popular sentence-level benchmarks, and generalizes well to out-of-domain NLP tasks that rely on inference at document granularity. Task-specific fine-tuning can bring further improvements. Data, code, and pretrained models can be found at https://github.com/salesforce/DocNLI.
Introduction
A fundamental challenge of natural language processing (NLP) lies in the variability of semantic expression, where the same meaning can be conveyed by, or inferred from, different pieces of text (Dagan et al., 2009). This phenomenon gives rise to the many-to-many mapping between textual expressions and meanings. Many NLP problems, such as information extraction, question answering, document summarization and machine translation, call for a system that can handle this variability phenomenon so as to figure out that a particular meaning can be inferred from distinct text strings (Dagan et al., 2009). Natural language inference (a.k.a. textual entailment (Dagan et al., 2005)) acts as a unified framework to study those NLP problems by casting the background text as a premise and the text of target meaning as a hypothesis. Then, a good NLI recognizer can be readily translated into a well-performing system for the respective NLP tasks.
NLI was first studied in (Dagan et al., 2005). Research in the early stages was mostly driven by the PASCAL Recognizing Textual Entailment (RTE) challenges which are annual competitions with benchmark datasets released. In the past few years, the study of NLI has moved forward rapidly along with the construction of large-scale datasets, such as SNLI (Bowman et al., 2015), the science domain SciTail (Khot et al., 2018) and multi-genre MNLI (Williams et al., 2018), etc.
However, some NLI datasets may not be suitable any more for solving downstream NLP problems since they were commonly crowdsourced in isolation from any end task 2 (Khot et al., 2018). In addition, most NLI datasets and studies paid attention merely to sentence-level inference -both the premises and hypotheses are single (and usually short) sentences. This makes them unsuitable for other open-ended NLP problems. For example, to verify the factual correctness of a document summary, sentence-level NLI systems cannot be of much help (Kryściński et al., 2019). Considering the fact-checking task FEVER (Thorne et al., 2018) as another example, in order to figure out the truth value of a claim against a Wikipedia article, NLI has to be done on individual sentences instead of using the whole article as the premise. In short, some NLP tasks require the reasoning of NLI to go beyond the sentence granularity, regarding both the premise and the hypothesis.
In this work, we introduce DOCNLI, a large-scale dataset for document-level NLI. It is constructed by reformatting some mainstream NLP tasks, including question answering and document summarization, and integrating existing NLI data in which the premises may be longer than single sentences. DOCNLI has the following characteristics:
• DOCNLI is highly related to end NLP tasks.
A well-performing system on DOCNLI is expected to shed light on addressing other NLP challenges.
• Premises always have more than one sentence; the majority are natural documents such as news articles. Hypotheses cover a variety of lengths, ranging from a single sentence to a document with hundreds of words. By this setting, we hope the systems can learn to deal with future applications that need to infer the truth value of a piece of text regardless of its length.
• In contrast to some existing sentence-level NLI datasets, DOCNLI has pretty limited artifacts. We present a novel approach to disconnect the potential artifacts from the NLI task itself; a "hypothesis-only" baseline has difficulties in discovering some spurious correlations.
In experiments, we will show that a RoBERTa (Liu et al., 2019) system pre-trained on DOCNLI demonstrates promising performance on conventional sentence-level NLI benchmarks such as MNLI and SciTail, and generalizes well to out-of-domain NLP tasks (e.g., fact-checking and multi-choice question answering) that necessitate document-level inference. Task-specific fine-tuning can further improve the performance and achieve new state of the art for some end tasks.
Related Work
To our knowledge, document-level NLI has attracted very little ink in the community, possibly because of the lack of labeled datasets. In this section, we mainly describe some prior NLI datasets that share some spirit with our DOCNLI.
End-task driven. As mentioned in Section 1, the RTE series were driven by downstream NLP tasks such as information retrieval, information extraction, question answering, and summarization. MCTest (Richardson et al., 2013) is a question answering task in which a paragraph is given as background knowledge, then each question is paired with a positive answer and some negative answers. The MCTest benchmark released an NLI version of this corpus by treating the whole paragraph as a premise and combining the question and answer candidates as hypotheses. SciTail (Khot et al., 2018) is also derived from the end QA task of answering multiple-choice school-level science questions. Unlike MCTest, the premises in SciTail are single sentences selected by an information retrieval approach. By casting an end NLP task as NLI, a good NLI recognizer therefore can be directly turned into a well-performing system for that NLP task. This can be even more attractive if we can learn a generalizable NLI system to solve some NLP problems that have limited annotations.
Going beyond the sentence granularity. The premises in MCTest are paragraphs, but MCTest has pretty limited size. Demszky et al. (2018) tried to convert the question answering benchmark SQuAD (Rajpurkar et al., 2016) into an NLI format by treating the paragraph as a premise and using a neural network to generate a hypothesis sentence given the question and answer span as inputs. Kryściński et al. (2019) created a (document, sentence) pair dataset "FactCC" to train a classifier for checking the factual correctness of single sentences in automatically generated summaries. FactCC is specific to the target summarization benchmark dataset, so it is unclear how well FactCC can generalize to other summarization benchmarks and other NLP problems. In addition, only single sentences act as hypotheses. Nevertheless, that literature exactly showed that document-level NLI, especially the inference of document-level hypotheses, is highly desirable. ANLI (Nie et al., 2020) also gathers multi-sentence premises. However, the sentence sizes in ANLI premises are pretty limited and the hypotheses in ANLI are consistently single sentences.
To our knowledge, our DOCNLI is the first dataset that uses hypotheses longer than single sentences and stays closely tied to end NLP tasks. DOCNLI has the following features: (i) both the premises and the hypotheses can be much longer than a single sentence, even a document, and the hypotheses cover a large range of granularity: from a single sentence to a longer paragraph (e.g., 250 words); (ii) diverse domains; (iii) no severe artifacts; for example, we do not include the hypotheses that can easily be found "grammatically incorrect" by well-trained language models such as BERT (Devlin et al., 2019). DOCNLI is constructed from ANLI (Nie et al., 2020), the question answering benchmark SQuAD (Rajpurkar et al., 2016), and three summarization benchmarks (DUC2001, CNN/DailyMail (Nallapati et al., 2016), and Curation (Curation, 2020)). Next, we describe how each data resource is integrated into DOCNLI.
Data Preprocessing
ANLI to DOCNLI. ANLI is a large-scale NLI dataset collected via an iterative, adversarial human-and-model-in-the-loop procedure. In each round, the best-performing model from the previous round is selected and then human annotators are asked to write "hard" examples that this model misclassifies. They always choose multi-sentence paragraphs as premises and write single sentences as hypotheses. Then a part of those "hard" examples join the training set so as to learn a stronger model for the next round. The remaining "hard" examples act as dev/test sets correspondingly. Totally three rounds were accomplished for ANLI construction. In the end, ANLI has train/dev/test sizes as 162,865/3200/3200 with three classes "entail", "neutral" and "contradict".
We keep premise-hypothesis pairs in ANLI unchanged, but unify the two classes "neutral" and "contradict" into a new class "not entail".
SQuAD to DOCNLI. SQuAD is a QA dataset in which a multi-sentence paragraph is accompanied by a couple of questions; each question has a text span from the paragraph as its answer. Demszky et al. (2018) converted SQuAD into NLI format by reformatting the question-answer pairs into declarative sentences (QA2D) by neural networks. The resulting sentences containing correct (resp. incorrect) answers are entailed (resp. not entail) by the paragraph. Human evaluation was conducted to make sure those declarative sentences have high quality on three criteria: grammaticality, naturalness, and completeness. In addition, Demszky et al. (2018) replicated some statistical analyses showing that this QA2D dataset does not have clear artifacts as SNLI or MNLI. In this work, we directly use this QA2D dataset and re-split it into train/dev/test by 50k/7,236/8,275.
Summarization to DOCNLI. Here we introduce the basics of the three summarization datasets (DUC2001, CNN/DailyMail and Curation), and explain how we convert them into DOCNLI in a unified approach.
• The DUC series are some of the earliest benchmarks for studying automatic document summarization. DUC2001 is on generic, singledocument summarization in the news domain. There are totally 600 documents along with humanwritten reference summaries of approximately 100 words. We split those document-summary pairs into train/dev/test by size of 400/50/150. doc Petrofac shares surged on Wednesday following reports that the Serious Fraud Office has abandoned a criminal investigation into three businessmen who were accused of paying brides in the energy industry. The SFO had been probing claims that Unaoil -a Monaco-based consultancy that worked with Petrofac, primarily in Kazakhstan between 2002 and 2009 -had paid multimillion pound brides to land contracts in the oil and gas industry. But The Guardian cited sources earlier as saying that the SFO has dropped the investigation into the trio. Compliance industry newsletter MLex was the first to report the news, saying on Tuesday that the probe had been halted after three years. The SFO launched an investigation into Petrofac in May 2017 as part of a wider probe into Unaoil. In February 2019, David Lufkin, Petrofac's former global head of sales, pleaded guilty to 11 counts of bribery linked to contracts worth more than $730m in Iraq and $3.5bn in Saudi Arabia. SFO spokesman Adam Lilley said the Unaoil investigation "remains active and is ongoing". "We do not comment on ongoing investigations," he said. [· · ·] real summ.
The Serious Fraud Office has reportedly dropped a criminal investigation into three businessmen who had been accused of conspiring to make corrupt payments to secure contracts in Iraq. The SFO launched an investigation into Petrofac in May 2017 as part of a wider probe into Monaco-based oil consultancy Unaoil.
fake summaries word repl.
The Serious financial Office has reportedly launched a criminal investigation into three businessmen who had been accused of conspiring to make corrupt payments to oil contracts in Iraq. The SFO launched an investigation into corruption in May 2017 as part of a wider investigation into Monaco-based financial consultancy firms. entity repl.
Unaoil has reportedly dropped a criminal investigation into three businessmen who had been accused of conspiring to make corrupt payments to secure contracts in Monaco . The SFO launched an investigation into Monaco in May 2017 as part of a wider probe into Petrofac-based oil consultancy The Serious Fraud Office. sent repl.
The Serious Fraud Office has reportedly dropped a criminal investigation into three businessmen who had been accused of conspiring to make corrupt payments to secure contracts in Iraq. A spokesman for the SFO said it was "unable to confirm or deny" that an inquiry had taken place. Table 2: An example of the Curation summarization dataset shows the original document, and the real summary written by humans. We used "word replacement", "entity replacement" and "sentence replacement" to form three types of "fake" summaries against the document. Texts in red are substitutes.
• CNN/DailyMail was gathered from news articles on the CNN and Daily Mail websites; each article is paired with 3 to 4 sentences of abstractive summary bullets generated by humans. CNN/DailyMail has 286,817/13,368/11,487 article-summary pairs in train/dev/test. The source articles in the training set have 766 words spanning 29.74 sentences on average, while the summaries consist of 3.72 sentences on average.
• Curation is a recent summarization dataset with 40,000 professionally-written summaries of news articles. We split it into train/dev/test as 20K/7K/13K.
All three summarization datasets align the documents with the human-written reference summaries. This enables us to obtain "entail" pairs of (document, reference summary). The remaining challenge lies in how to generate "not entail" pairs.
We adopt three types of manipulations on the "reference" (also referred to as "real") summaries.
• Word replacement. We mask eight words whose part-of-speech tags are among {"VERB", "NOUN", "PROPN", "NUM"} using the spaCy toolkit (https://spacy.io), then use BERT to predict them. The most likely predicted word is used to replace a masked one. After word replacement, the resulting text is our "fake" summary (a minimal sketch of this manipulation appears after this list).
• Entity replacement. We use spaCy for named entity recognition (NER). For an entity which is the only one of a specific NER type in the real summary, we search for a different entity with the same type from the document to replace it; otherwise, it will be replaced by the entity of the same type in the real summary. We do this operation for five entities. We skip entity-level manipulation for the instances that have fewer than five detected entities. After entity replacement, we get a "fake" summary.
• Sentence replacement. From the real summary, we randomly select a sentence, then forward its left context to CTRL (Keskar et al., 2019), a state-of-the-art controllable text generator, to generate a new sentence which is used to replace the selected sentence. This operation generates a new "fake" summary. Table 2 illustrates a (document, real summary) pair in the Curation dataset, and the three types of "fake" summaries we generated.
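The following is a minimal sketch of the word-replacement manipulation only, assuming spaCy's en_core_web_sm model and the Hugging Face fill-mask pipeline with bert-base-uncased; the helper name, masking one word at a time, and whitespace re-joining are simplifications for illustration (very long summaries may additionally need truncation for BERT).

```python
import random
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def word_replacement(summary, n_words=8, seed=0):
    """Build a 'fake' summary by replacing content words with BERT's top predictions."""
    random.seed(seed)
    doc = nlp(summary)
    tokens = [t.text for t in doc]
    candidates = [t for t in doc if t.pos_ in {"VERB", "NOUN", "PROPN", "NUM"}]
    for tok in random.sample(candidates, min(n_words, len(candidates))):
        masked = tokens.copy()
        masked[tok.i] = fill_mask.tokenizer.mask_token
        best = fill_mask(" ".join(masked))[0]["token_str"]   # most likely predicted word
        tokens[tok.i] = best.strip()
    return " ".join(tokens)
```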
Mitigating Artifacts in DOCNLI
In Section 3.1, we transformed these NLI, QA and summarization datasets to satisfy the format of DOCNLI. We refer to this resulting dataset as raw-DOCNLI. In consideration of the common artifacts in some popular sentence-level NLI benchmarks (Gururangan et al., 2018; Poliak et al., 2018; Tsuchiya, 2018), we tried a "hypothesis-only" baseline based on RoBERTa on this raw-DOCNLI. Surprisingly, this baseline indeed obtains non-trivial performance. This means that RoBERTa can still learn some label-specific biases from the hypotheses, even though we have tried hard to make the "fake" summaries coherent and natural. Nevertheless, this does not mean we have failed to build a robust DOCNLI dataset. The surprising behavior of "hypothesis-only" in raw-DOCNLI indicates that the BERT classifier can easily recognize whether a summary is "real vs. fake", but "real vs. fake" is not the same concept as "entail vs. not entail" defined in the NLI framework. This is because a "fake" one can still be "entail"-ed if the premise has proper information; and a "real" one can also be "not entail" if the premise does not contain necessary clues for inferring it.

Table 3: D: a document in summarization benchmarks; R: a real summary; F_i: a fake summary derived from R (i = 1, ..., n); F_i^+: using CTRL to insert a generated sentence between a random pair of consecutive sentences in F_i, in a way similar to what we described as "sentence replacement" in Section 3.1. DOCNLI's training set is the combination of raw-DOCNLI and those added pairs; DOCNLI's dev and test sets do not have trivial pairs.
For convenience, we use D as a document, R as the real summary, and {F_1, F_2, ..., F_n} as the n fake summaries derived from R. To ensure the model can learn exactly what "entail vs. not entail" is, rather than be misled by the manipulations that yield those "fake" text pieces, as Table 3 demonstrates, we prepare the following pairs to extend the raw-DOCNLI and get our final DOCNLI:
• Adding pairs (F_i^+, F_i), i = 1, ..., n, for class "entail". Here F_i^+ has one more sentence than F_i, inserted by CTRL, as described in "sentence replacement" in Section 3.1 (here we do insertion rather than replacement). The goal is to let the system know that a fake summary can also be a positive hypothesis in NLI, if its premise covers necessary information.
• Adding a single pair (F_i, R) for class "not entail". This means the original real summary can also be a negative hypothesis if it includes mismatching information with its premise. F_i is randomly chosen from the set {F_1, ..., F_n}.
By adding the above two sorts of pairs, we want to disconnect the concept of "real vs. fake" from "entail vs. not entail", letting the system learn the essence of NLI. Both the "real" and "fake" summaries have the same number of instances of being "entail" and "not entail" in the extended dataset.
It is worth mentioning that since the instances "(F_i^+, F_i) → entail" are very trivial to recognize, we add them to the training set only. Table 4 lists the sizes of DOCNLI for train/dev/test in each class. The training set is roughly balanced, while approximately 12% of examples in dev and test belong to "entail". F1 is the evaluation metric. Figures 1-2 illustrate the length distributions of premises and hypotheses in DOCNLI. Because the majority of hypotheses have fewer than 150 words, and real/fake summaries also act as premises in DOCNLI (as reported in Table 3), the majority of premises stay within the length limit of 150 words, as shown in Figure 1. Still, there are a large number of premises whose lengths are within the range of [150, 900] words.
Human Verification
DOCNLI covers examples derived from ANLI, SQuAD and three summarization datasets. Here, we only conduct human verification for the pairs derived from summarization, especially for those "fake" summaries, to get some clues to answer two questions: (i) Are those "fake" summaries indeed incorrect given the original document? (ii) Do those "fake" summaries look natural? By "natural" we mean the text should have no major grammar errors, and no unrelated text spans that make the whole text piece look overly uncoordinated.
The authors of this work manually checked 200 random "fake" examples, among which none is true given the same document as the "real" summary. This is mainly because we replaced relatively large portions of the original real summaries.
However, some minor grammar issues inevitably exist. Take the following text piece as an example: "WeWork Companies LLC (replace: "We-Work") has announced plans to hold a conference call on 2025 for holders of its 7.875% Senior Notes due 26 August to discuss its Notes (replace: "Q2") results. Securities analysts and market-making financial institutions can also register for access. The call is scheduled for 12:00 P.M. (replace: "noon Eastern Time")." This example has five entities that are substitutes, all underlined. If a substitute comes from the premise document, we use "(replace: XX)" to denote the entity that was there. The two entities (NER type "date"), in red, replaced each other: "2025" and "26 August", which makes the new text "[· · ·] on 2025 [· · ·]" grammatically incorrect.
Experiments
We study three questions. (Q1) How challenging is DOCNLI (especially with regard to different lengths of hypotheses)? (Q2) Out-of-domain evaluation, in which we train a system given DOCNLI and test it on downstream NLP tasks that are not covered by the source tasks in DOCNLI construction. (Q3) Could a system trained on DOCNLI work well on sentence-level NLI?
The DOCNLI task is challenging
The state-of-the-art systems on sentence-level NLI problems are largely based on transformers (Vaswani et al., 2017), such as BERT, RoBERTa (Liu et al., 2019), etc. However, they can only handle a maximum of 512 tokens preprocessed by the WordPiece tokenizer (Wu et al., 2016). This is an issue for building an effective document-level NLI machine. Therefore, for the main experiments, we also report Longformer (Beltagy et al., 2020), a RoBERTa variant that can handle up to 4096 tokens. Longformer has two versions: one is "Longformer-base", the other is "Longformer-large". We currently only report "Longformer-base" due to memory constraints.
To answer the question (Q1), we compare the following systems (we could include more baselines, but most popular approaches either are too weak or can only handle short pieces of text):
• Hypothesis-only. We train RoBERTa on hypotheses only.
• RoBERTa-large. Although we claimed that RoBERTa may not be a good platform to learn DOCNLI, here we report it just for reference. Maximal token limit: 512 tokens.
• Longformer-base. We use the released Longformer library (https://github.com/allenai/longformer) by Beltagy et al. (2020), training it on the full training set of DOCNLI, with a length limit of 1.3K tokens, batch size 1 per GPU, and learning rate 5e-6.
All systems are trained for 5 epochs, and we report the best model tuned on the dev set. Table 5 lists the F1 results of all systems on DOCNLI. We notice that "hypothesis-only" is just slightly higher than random guess, and is much lower than the "RoBERTa-large" system which takes both premises and hypotheses as input: 22.02 vs. 61.52 on test. Surprisingly, Longformer's performance is clearly below that of RoBERTa, even if it covers more tokens, possibly because we do not have enough computing resources to fully explore the better settings of Longformer. Figure 3 illustrates the impact of taking different numbers of tokens in Longformer, evaluated on the dev set. In general, the more tokens, the better the performance.
We further look at the fine-grained F1 results for various lengths of premise-hypothesis pairs and hypotheses alone. Figure 4(a) shows that the system performance for pairs of lengths > 450 does not change clearly. This is probably due to those models' truncation when the (premise, hypothesis) pairs are overlong (note that one word may be split into multiple tokens by the WordPiece tokenizer). Figure 4(b) demonstrates that the task gets increasingly challenging when the hypotheses become longer, which matches our intuition.
Overall, DOCNLI is a very challenging task that seeks solutions equipped with a stronger capability of representation learning.
Applying DOCNLI to end NLP tasks
To answer the question (Q2), we apply DOCNLI to see if it can help downstream NLP tasks. As DOCNLI is derived from summarization and QA already, we do not consider these two types of NLP tasks any more (since improvements on them are not surprising), especially when their domains are covered in DOCNLI. In addition, we have to explore tasks that have NLI-format data available, as converting an open NLP task to NLI format is not trivial and is beyond the scope of this work. Therefore, we consider the following two NLP tasks: FEVER (Thorne et al., 2018). FEVER is a benchmark dataset for fact-checking. Given a declarative sentence (a.k.a. a "claim"), the task searches for textual evidence from Wikipedia articles and then decides the truth value of this sentence (i.e., support / refute / not-enough-info).
We use the NLI-version of FEVER, released by Nie et al. (2019): claims are hypotheses; premises corresponding to "support" or "refute" claims consist of ground truth textual evidence and some other randomly sampled evidence; premises for "not-enough-info" claims are the concatenation of all evidential sentences selected by a previous SOTA fact-checking system. We combine "refute" and "not-enough-info" into a single class "not entail", and rename this data as "FEVER-binary". We randomly split FEVER-binary by 203,152/8,209/10,000 for train/dev/test, respectively (note that this data released by Thorne et al. (2018) is different from the one used in the FEVER leaderboard). MCTest (Richardson et al., 2013). In Related Work, we introduced MCTest. Briefly, it is a multi-choice QA benchmark in the domain of fictional stories. The authors of MCTest released an NLI version of MCTest by combining the question and the positive (resp. negative) answer candidate as a positive (resp. negative) hypothesis. MCTest consists of two subsets: MCTest-160 contains 160 items (70 train, 30 dev, 60 test), each consisting of a document and four questions, each followed by one correct answer and three incorrect answers; MCTest-500 contains 500 items (300 train, 50 dev, 150 test). MCTest has pretty limited labeled data; thus, it is a good testbed to investigate DOCNLI in studying annotation-scarce tasks. MCTest has two official metrics: accuracy and NDCG (Normalized Discounted Cumulative Gain). Here we only report accuracy.
In this section, we still use RoBERTa-large and compare our DOCNLI with a recent NLI dataset, ANLI, in which the premises are longer than single sentences, and with MNLI, the most widely-used sentence-level NLI dataset. For each dataset (i.e., MNLI, ANLI or DOCNLI), we try two settings: (i) using the data for pre-training, then doing inference on FEVER-binary or MCTest directly without task-specific fine-tuning; (ii) first pre-training on the data, then fine-tuning on FEVER-binary or MCTest.
In Table 6, DOCNLI consistently generalizes better than ANLI and MNLI on the two NLP tasks FEVER-binary and MCTest. We notice that the pretrained model on DOCNLI demonstrates very strong performance on the two end tasks, even without any fine-tuning on the task-specific examples. Especially for MCTest, both "DOCNLI (pretrain)" and "DOCNLI+finetune" surpass the prior state-of-the-art by large margins.
Applying DOCNLI to sentence-level NLI
To answer the question (Q3), we use SciTail and MNLI as target sentence-level NLI tasks. SciTail is from the science domain with two classes "entail" and "not entail", split 23,596/1,304/2,126 (train/dev/test). MNLI covers a broad range of genres with three classes "entail/neutral/contradict", split 392,702/20k/20k (train/dev/test). Since the gold labels of the test set in MNLI are not publicly available and DOCNLI is a binary classification task, we first unify MNLI's "neutral" and "contradict" into "not entail", then build a new labeled test set by randomly sampling 13k examples from the original dev set (the remaining examples are the new dev set). So now we have train/dev/test of size 372,702/6,647/13k. We first try some popular Transformer-style models, such as BERT-large, RoBERTa-large and Longformer-base, to check how much we can get by training a supervised system on the full training data. Afterwards, we build a classifier by training RoBERTa-large on DOCNLI with or without SciTail/MNLI-specific fine-tuning. Table 7 shows that: (i) the pretrained model on DOCNLI indeed can generalize to some extent on both SciTail and MNLI. In particular, it gets SciTail accuracy 78.17, which is even higher than some task-specific fully-supervised models such as "ESIM", "De-Att" and "DGEM". The same pretrained system can also get comparable performance with BERT, Longformer and RoBERTa on binary-MNLI; this should be attributed to the strong generalization of ANLI towards MNLI (Nie et al., 2020); (ii) when doing task-specific fine-tuning, our model can further improve the performance and get very close to the state of the art on SciTail.
Summary
In this work, we collect and release a large-scale document-level NLI dataset DOCNLI. It covers multiple genres and multiple ranges of lengths in both premises and hypotheses. We expect this dataset can help to solve some NLP problems that require document-level reasoning such as QA, summarization, fact-checking etc. In experiments, we show that DOCNLI can yield a model generalizing well to downstream NLP tasks and some popular sentence-level NLI tasks. | 6,500.2 | 2021-06-17T00:00:00.000 | [
"Computer Science"
] |
Robust Representation Learning of Biomedical Names
Biomedical concepts are often mentioned in medical documents under different name variations (synonyms). This mismatch between surface forms is problematic, resulting in difficulties pertaining to learning effective representations. Consequently, this has tremendous implications such as rendering downstream applications inefficacious and/or potentially unreliable. This paper proposes a new framework for learning robust representations of biomedical names and terms. The idea behind our approach is to consider and encode contextual meaning, conceptual meaning, and the similarity between synonyms during the representation learning process. Via extensive experiments, we show that our proposed method outperforms other baselines on a battery of retrieval, similarity and relatedness benchmarks. Moreover, our proposed method is also able to compute meaningful representations for unseen names, resulting in high practical utility in real-world applications.
Introduction
Representation learning of words (Mikolov et al., 2013;Pennington et al., 2014), and/or sentences (Kiros et al., 2015;Hill et al., 2016;Logeswaran and Lee, 2018) forms the bedrock of many modern NLP applications. These techniques, largely relying on context information, have a huge impact on downstream applications. To this end, learning effective and useful representations has been a highly fruitful area of research.
Biomedical names (surface forms that represent biomedical concepts; they can be official names in biomedical vocabularies or unofficial names mentioned in text), however, are different from standard words and sentences. These names have both contextual and conceptual meanings. Contextual meaning reflects the contexts where the names appear, and it is specifically granted to each
name. Names of a broad and popular concept often have slightly different contextual meanings. On the other hand, conceptual meaning maps to the definitions/contexts of the names' associated concepts, i.e., CUIs as shown in Table 1. As such, names of the same concepts share the common conceptual meanings, although they can own different contextual information.
As illustrated in Table 1, biomedical concepts appear in the text under various names. Representations of the names are also expected to be well clustered in their distributional space, i.e., names of the same concepts are close to each other and distant from those of other concepts. Learning such conceptually grounded representations is highly desired for a wide range of applications, e.g., synonym retrieval/discovery, biomedical name normalization, and query expansion.
For the first time, we investigate the problem of biomedical name embedding. Our goal is to derive meaningful and robust representations for biomedical names from their surface forms. Unfortunately, this task is not trivial since two names can be strongly related but not necessarily belong to the same concept (e.g., 'complement component 5 deficiency' and 'complement component 5'). Furthermore, names of a concept can be completely different regarding their surface forms (e.g., 'leiner's disease' and 'c5d'). As such, we establish the key desiderata for learning robust representations. First, the output representations need to be both conceptually and contextually meaningful. Second, name representations that belong to the same concepts should be similar to each other, i.e., conceptual grounding.
To this end, our proposed encoding framework incorporates three new objectives, namely context-, concept-, and synonym-based objectives. We formulate the representation learning process as a synonym prediction task, with the context and concept losses acting as regularizers that prevent two synonyms from collapsing into semantically meaningless representations. As illustrated in Figure 1, the synonym-based objective enforces similar representations between synonymous names, while the concept-based objective pulls a name's representation closer to its concept's centroid. On the other hand, the context-based objective aims to minimize the difference between the derived representation and its specific contextual representation. More concretely, our approach adopts a recurrent sequence encoding model to extract the semantics of biomedical names and to learn the alternative naming of biomedical concepts. Our approach does not need any additional annotations on biomedical text. To be specific, we do not need the biomedical names to be pre-annotated in the text. Instead, we utilize available synonym sets in a metathesaurus vocabulary (e.g., UMLS) as the only additional resource for training.
Our main contributions in this work are summarized as follows. For the first time, we investigate the problem of biomedical name embedding and its applications. We pay attention to the similarity between semantically related names as well as between names of the same concept. Furthermore, we define and distinguish three aspects constituting the quality of biomedical name representations. We propose a novel encoding framework that considers all these aspects during representation learning. Finally, we evaluate the proposed encoder on biomedical synonym retrieval, name normalization, and semantic similarity and relatedness benchmarks. In most of these experiments, our model significantly outperforms other baselines.
Figure 1: Illustration of the three aspects, associated with three training objectives, for computing the representation of a biomedical name s. Intuitively, the representation is supposed to be similar to its synonym's representation as well as to its conceptual and contextual representations.
Related Work
Our problem setting of name embedding is different from recent works on biomedical word embeddings (Chiu et al., 2016; Wang et al., 2018) and concept embeddings (Beam et al., 2018). Our goal is to derive meaningful representations for a sequence of words that likely represents a concept. This setting is also orthogonal to works that only focus on estimating the matching between names (Li et al., 2017).
There are several options to encode variable-length names/phrases into fixed-sized vector representations. Existing approaches range from phrase-level extensions of word embeddings and compositions of pre-trained word representations to sequence encoding neural networks.
Contextual Word Embeddings. We revisit the skip-gram model (Mikolov et al., 2013), one of the most popular context-based embedding approaches. The model computes representations for both a target word $w_t$ and a context word $w_c$ by maximizing the following log-likelihood:
$$\sum_{t=1}^{T} \sum_{w_c \in C(w_t)} \log p(w_c \mid w_t). \qquad (1)$$
The probability of observing $w_c$ in the local context of $w_t$ is defined as
$$p(w_c \mid w_t) = \frac{\exp\big(u_{w_t}^{\top} v_{w_c}\big)}{\sum_{w' \in V} \exp\big(u_{w_t}^{\top} v_{w'}\big)},$$
where $u_w$ and $v_w$ are the 'input' and 'output' vector representations of $w$. In this work, we refer to the input representations as contextual representations of words, or, in short, word embeddings.
The skip-gram model is extensible to names (or phrases) by treating them as special tokens, which yields the analogous objective (Equation 2) in which the target token may be a special name token $s$ and the surrounding words serve as its contexts. Training of this model results in both word and name embeddings.
Average of Contextual Word Embeddings.
Another simple and effective method to compute name embeddings is taking the average of their constituent word embeddings. Since words in a biomedical name are usually descriptive about its meaning, this simple baseline is expected to produce quality representations. FastText (Bojanowski et al., 2017) leverages this idea by considering character n-grams instead of words. Therefore, the model can derive representations for names that contain unseen words. The effectiveness of simple compositions such as taking average or power mean have also been verified in phrase and sentence embeddings (Wieting et al., 2016;Arora et al., 2017;Rücklé et al., 2018).
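As a concrete illustration of this averaging baseline (the function name and the handling of unknown words are our own choices, not the paper's):

```python
import numpy as np

def average_name_embedding(name, word_vectors, dim=200):
    """SG_W-style baseline: embed a multi-word name as the average of its
    constituent word embeddings; words missing from the vocabulary are skipped."""
    vecs = [word_vectors[w] for w in name.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# e.g. average_name_embedding("complement component 5 deficiency", kv)
# where `kv` is a gensim KeyedVectors object holding pre-trained skip-gram vectors.
```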
Sequence Encoding Models. Sequence encoding models aim to capture more sophisticated semantics of character and word sequences. These models range from multilayer feed-forward networks (Iyyer et al., 2015) to convolutional (Kalchbrenner et al., 2014), recursive and recurrent neural networks (Socher et al., 2011; Tai et al., 2015). They also differ by the type of supervision used in training. Context-based sentence encoders (Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018) are based on the distributional hypothesis. Their training utilizes sentences and their contexts (surrounding sentences), which can be extracted from an unlabeled corpus. Similar to contextual word embeddings, the derived sentence embeddings are expected to carry contextual information. However, this contextual information does not fully reflect the paraphrastic characteristic, i.e., semantically similar sentences do not necessarily have identical meanings. These embeddings are therefore not favorable in applications that demand strong synonym identification. In contrast, supervised or semi-supervised representation learning requires an annotated corpus, such as paraphrastic sentences or natural language inference data (Conneau et al., 2017; Wieting and Gimpel, 2017; Subramanian et al., 2018; Cer et al., 2018). However, most of these works focus on learning representations for sentences.
The closest work to our problem setting is (Wieting et al., 2015). In their model, the authors utilize pairs of paraphrastic phrases as training data, e.g., 'does not exceed' and 'is no more than'. To prevent the trained model from overfitting, the authors introduce regularization terms applied to the encoder's parameters as well as to the difference between the initial and trainable word embeddings. Their evaluation, however, only considers the paraphrastic similarity of phrases.
Discussion. Our proposed encoder is based on a BiLSTM (Graves and Schmidhuber, 2005), although it can be replaced by another sequence encoding model as mentioned above. Our approach utilizes synonym sets in UMLS to learn name representations, while also enforcing the learned representations to be similar to their contextual and conceptual representations. The idea is related to word vector specialization (retrofitting) (Faruqui et al., 2015; Mrkšić et al., 2017; Vulić et al., 2018). The difference is that we focus on learning representations for multi-word concept names, hence the contextual and conceptual constraints are essential, in addition to the synonymous similarity. In contrast, most retrofitting approaches mainly aim to improve word representations. These models map initial word embeddings into a new vector space that satisfies the synonymous-similarity desiderata, while also constraining the new representations to be similar to the initial ones. Since the initial word representations can be assumed to encode both contextual and conceptual information of the words, these retrofitting approaches can be viewed as special cases of our proposed encoding framework.
Biomedical Name Encoder
For ease of presentation, we use three generic terms, $u_w$, $u_s$ and $u_c$, to denote pre-trained word, name and concept embeddings, respectively. These embeddings will be used as inputs in our encoding framework. Note that there are several options to calculate them, and our encoder can be adapted to different choices. Before going into details, we present an extension of skip-gram, which will serve as a baseline; furthermore, the outputs of this baseline will be used as pre-trained embeddings in one of the framework's configurations.
Figure 2: Our proposed biomedical name encoding framework. The main encoder (BNE) is based on a two-level BiLSTM to capture both character- and word-level information of an input name. BNE parameters are learned by considering three training objectives. The synonym-based objective L_syn enforces similar representations between two synonymous names (s and s'). The concept-based objective L_def and the context-based objective L_ctx apply similarity constraints between the representations of names (s or s', which are interchangeable) and their conceptual and contextual representations (g(c) and q(x), respectively). Details about the g(c) and q(x) calculations are discussed in Section 3.2.
Skip-gram with Context and Concept
The skip-gram model described by Equation 2 uses context words to calculate embeddings for names. Apart from the context words, we also consider the name's conceptual information in this new baseline. We leverage two sources of conceptual information: the words in a name, and the name's associated concept. We assume that names containing similar words tend to have similar meanings; furthermore, names of the same concept also share a common meaning. We introduce a new token type for concepts, and the concept embeddings are trained in a similar way as the name embeddings. Specifically, for this baseline, we utilize a pre-annotated corpus where names appearing in the training text are labeled with their associated concepts. We convert the annotated texts into sequences of word, name, and concept tokens to be used as inputs to the skip-gram model. For example, consider a pseudo sentence of four words that contains a bigram name, $w_l\, w_1\, w_2\, w_r$; we map the annotated name $w_1 w_2$ to a name token $s_i$ and denote its annotated concept by $c_i$. We create two sequences of tokens corresponding to this original sentence, $w_l\, s_i\, w_1\, w_2\, w_r$ and $w_l\, w_1\, w_2\, c_i\, w_r$, so that the name and concept tokens are placed on the left and right sides of the annotated name to avoid being biased toward any single side. These token sequences are subsequently fed as inputs to the skip-gram baseline (the training details are presented in Section 4). The outputs of this baseline are word, name and concept embeddings.
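A minimal sketch of this token-sequence construction is given below; the exact placement of the name and concept tokens follows our reading of the description above (name token on the left, concept token on the right), and all identifiers are illustrative.

```python
def build_sgsc_sequences(tokens, span, name_token, concept_token):
    """Build the two input sequences for the SG_S.C baseline from one annotated
    sentence. `tokens` is the tokenized sentence, `span` = (start, end) marks the
    annotated name, and `name_token`/`concept_token` are its s_i / c_i identifiers."""
    start, end = span
    seq_with_name = tokens[:start] + [name_token] + tokens[start:end] + tokens[end:]
    seq_with_concept = tokens[:start] + tokens[start:end] + [concept_token] + tokens[end:]
    return seq_with_name, seq_with_concept

# build_sgsc_sequences(["w_l", "w_1", "w_2", "w_r"], (1, 3), "s_i", "c_i")
# -> (["w_l", "s_i", "w_1", "w_2", "w_r"], ["w_l", "w_1", "w_2", "c_i", "w_r"])
```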
Biomedical Name Encoder with Context, Concept, and Synonym
Our proposed framework is illustrated in Figure 2.
The encoder unit is based on BiLSTM to aggregate information from both character and word levels.
The encoded representations are constrained by three objectives, namely synonym-, context-, and concept-based objectives. The model utilizes synonym sets in UMLS as training data. We denote the collection of synonym sets as $\mathcal{U} = \{S_c\}$, where $S_c$ includes all names of concept $c$, i.e., $S_c = \{s_i\}$.
Biomedical Name Encoder (BNE). The encoder extracts a fixed-sized representation for a given name (or surface form) $s$. We use one BiLSTM unit with last-pooling to encode the character-level information of each word. This representation is then concatenated with the pre-trained word embedding to form a word-level representation. Another BiLSTM unit with max-pooling is used to aggregate the semantics of the sequence of word representations. Finally, the aggregated representation is passed through a linear transformation. Mathematically, the encoding function can be written as
$$h_i = \mathrm{BiLSTM}^{\text{last}}_{\text{char}}\big(t_{i,1},\dots,t_{i,m}\big), \qquad \mathrm{BNE}(s) = W\,\mathrm{BiLSTM}^{\text{max}}_{\text{word}}\big(u_{w_1}\oplus h_1,\dots,u_{w_k}\oplus h_k\big) + b,$$
where $u_{w_i}$ represents the pre-trained word embedding of word $w_i$ in name $s$, $t_{i,j}$ is a trainable character embedding in $w_i$, $\oplus$ denotes vector concatenation, and $W$ and $b$ are the parameters of the last transformation. Next, we detail the three objectives used to train the encoder.
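A PyTorch sketch of this two-level encoder is shown below, using the dimensions reported in Section 4 (character embeddings of size 50, 200-dimensional hidden states and outputs); the exact wiring and pooling details are our reading of the description, not the authors' code.

```python
import torch
import torch.nn as nn

class BNE(nn.Module):
    """Two-level BiLSTM name encoder: character BiLSTM with last-pooling,
    word BiLSTM with max-pooling, followed by a linear projection."""
    def __init__(self, n_chars, char_dim=50, word_dim=200, hidden=200, out_dim=200):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(word_dim + 2 * hidden, hidden,
                                 bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, out_dim)

    def forward(self, char_ids, word_vecs):
        # char_ids: (n_words, max_chars) character indices of each word
        # word_vecs: (n_words, word_dim) pre-trained word embeddings u_w
        char_out, _ = self.char_lstm(self.char_emb(char_ids))
        char_repr = char_out[:, -1, :]                 # last-pooling per word
        word_in = torch.cat([word_vecs, char_repr], dim=-1).unsqueeze(0)
        word_out, _ = self.word_lstm(word_in)
        name_repr, _ = word_out.max(dim=1)             # max-pooling over words
        return self.proj(name_repr)                    # (1, out_dim)
```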
Synonym-based Similarity. Representations of names that belong to the same concept should be similar to each other. We formulate this objective using the following loss function:
$$L_{syn} = \sum_{S_c \in \mathcal{U}}\; \sum_{s, s' \in S_c,\, s \neq s'} d\big(\mathrm{BNE}(s), \mathrm{BNE}(s')\big),$$
where $d(\cdot,\cdot)$ is a function that measures the difference between two representations. As mentioned in the introduction, training the encoder using only this synonym-based objective leads to biased representations. Specifically, the encoder would be trained to act like a hash function, which performs well at determining whether two names are synonyms of each other but likely loses the semantics of the names. As a remedy, we further introduce concept- and context-based objectives to regularize the representations.
Conceptual Meaningfulness. Representations of biomedical names should be similar to those of their associated concepts. This objective complements the synonym-based objective introduced earlier: it not only shifts the synonymous embeddings close to each other, but also pulls them toward their concept's centroid. It is expressed as
$$L_{def} = \sum_{S_c \in \mathcal{U}}\; \sum_{s \in S_c} d\big(\mathrm{BNE}(s), g(c)\big),$$
where $g(c)$ returns a vector that encodes the conceptual information of the corresponding concept $c$.
There are several options for this representation. It can be a mapping to pre-trained concept embeddings learned from a large corpus, i.e., $g(c) = u_c$. Another option is taking a composition (e.g., the average) of all its name embeddings (see Table 1), i.e., $g(c) = \frac{1}{|S_c|}\sum_{s \in S_c} u_s$. Furthermore, when a definition of the concept is available, $g(c)$ can be modeled as another encoding function that extracts the conceptual meaning from the definition.
Contextual Meaningfulness.
Each name representation should accommodate the specific contextual information owned by the name, formulated as
$$L_{ctx} = \sum_{S_c \in \mathcal{U}}\; \sum_{s \in S_c}\; \sum_{x \in X_s} d\big(\mathrm{BNE}(s), q(x)\big),$$
where $X_s$ represents all local contexts of name $s$, and $q(x)$ returns the contextual representation of local context $x$. A straightforward way to model $X_s$ is to use the local context words of $s$. However, this is computationally expensive since training would need to iterate through all the context words of the name. Alternatively, the contextual information can be modeled using a 1-hop approximation of the name's local contexts, which is mapped to the name's contextual representation, i.e., $X_s = \{s\}$ and $q(x) = q(s) = u_s$. We also consider another approximation where the contextual representation is further approximated by pre-trained word embeddings, i.e., $q(s) = \frac{1}{|T(s)|}\sum_{w \in T(s)} u_w$, where $T(s)$ represents the words in name $s$. Intuitively, in these two approximations, we assume that the pre-trained name or word embeddings carry local contextual information since they are trained by context-based approaches (see Section 2).
Combined Loss Function. The final loss function combines all the introduced losses:
$$L = L_{syn} + L_{def} + L_{ctx}.$$
For simplicity, we ignore weighting factors that control the contribution of each loss. Applying and fine-tuning such factors would shift the encoding results more toward either semantic similarity or synonym-based similarity.
Choices of g(c) and q(x). Several options to calculate the conceptual and contextual representations were discussed earlier. Note that the two representations should lie in the same distributional space; as such, the implicit relations between them are encoded in, and can be decoded from, these representations. For efficiency, we model the local contexts $X_s$ using the contextual information encoded in the name itself, i.e., $X_s = \{s\}$ and $q(x) = q(s)$. We focus on studying two combinations of $g(c)$ and $q(s)$:
• Option 1: Both $g(c)$ and $q(s)$ directly map to the pre-trained concept and name embeddings, respectively, i.e., $g(c) = u_c$ and $q(s) = u_s$. These embeddings are the outputs of our proposed extension of the skip-gram model (see Section 3.1). This option requires an annotated biomedical corpus.
• Option 2: The contextual representation $q(s)$ is approximated by the average of pre-trained word embeddings, i.e., $q(s) = \frac{1}{|T(s)|}\sum_{w \in T(s)} u_w$; and $g(c)$ is the average of all contextual representations associated with the concept, i.e., $g(c) = \frac{1}{|S_c|}\sum_{s \in S_c} q(s)$. These computations only require pre-trained word embeddings and a dictionary of names and concepts, e.g., UMLS.
Distance Function and Optimization. The distance function $d$ can be the Euclidean distance or the Kullback-Leibler divergence. Alternatively, the optimization can be modeled as binary classification, motivated by its efficiency and effectiveness (Conneau et al., 2017; Wieting and Gimpel, 2017; Logeswaran and Lee, 2018). Another benefit of using classification is to align the encoded BNE vectors with the pre-trained word, name, and concept embeddings, which are derived by skip-gram with negative sampling (Mikolov et al., 2013), itself formulated as classification. Accordingly, we adopt the logistic loss with a dot-product classifier for all the objectives. For example, the loss function for $L_{syn}$ is rewritten as
$$L_{syn} = \sum_{S_c \in \mathcal{U}}\; \sum_{s, s' \in S_c} \Big[\ell\big(\mathrm{BNE}(s)^{\top}\mathrm{BNE}(s')\big) + \ell\big(-\mathrm{BNE}(s)^{\top}\mathrm{BNE}(s^{-})\big)\Big],$$
where $\ell$ is the logistic loss function $\ell: x \mapsto \log(1 + e^{-x})$ and the negative name $s^{-}$ is sampled from the mini-batch during optimization, similar to (Wieting et al., 2015). The loss functions $L_{def}$ and $L_{ctx}$ are updated accordingly.
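A sketch of the combined objective with the logistic loss and dot-product classifier is given below; the use of one positive and one negative per objective mirrors the training setup described in Section 4, and all function names are ours.

```python
import torch.nn.functional as F

def logistic_loss(score):
    # l(x) = log(1 + exp(-x)), computed in a numerically stable way.
    return F.softplus(-score)

def combined_loss(enc_s, enc_syn, enc_syn_neg, cpt_pos, cpt_neg, ctx_pos, ctx_neg):
    """L = L_syn + L_def + L_ctx with one positive and one negative per objective.
    All arguments are (batch, dim) tensors of encoded or pre-trained representations."""
    def pair(a, b_pos, b_neg):
        return logistic_loss((a * b_pos).sum(-1)) + logistic_loss(-(a * b_neg).sum(-1))
    l_syn = pair(enc_s, enc_syn, enc_syn_neg)   # synonym-based objective
    l_def = pair(enc_s, cpt_pos, cpt_neg)       # concept-based objective
    l_ctx = pair(enc_s, ctx_pos, ctx_neg)       # context-based objective
    return (l_syn + l_def + l_ctx).mean()
```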
Experiments
We first detail the implementations of the baselines and the proposed BNE model. We then evaluate all the models on four different tasks spanning retrieval, embedding similarity and relatedness benchmarks.
Skip-gram Baselines. We consider three variants of skip-gram (with negative sampling). SG_W obtains word embeddings by training the basic skip-gram model (see Equation 1); to get the representation of a name, we simply take the average of its associated word embeddings. SG_S is another variant that treats names as special tokens; the model obtains embeddings for words and names concurrently (see Equation 2), and its training requires the input text to be segmented into names and regular words. SG_S.C is our proposed extension of the skip-gram model; as introduced in Section 3.1, this baseline requires an annotated corpus where names are labeled with their associated concepts.
Training of Skip-gram Baselines. We use the PubMed corpus, which consists of 29 million biomedical abstracts, to train SG_W. For SG_S and SG_S.C, we further utilize the annotations provided in Pubtator (Wei et al., 2013). The annotations (names and their associated concepts) come in five categories: disease, chemical, gene, species, and mutation; we use annotations of the two most popular classes, disease and chemical. In preprocessing, text is tokenized and lowercased with the spaCy library, and words that appear fewer than 3 times are discarded. In total, our vocabulary contains approximately 3 million words, 700 thousand names, and 85 thousand CUIs. We use the Gensim library to train all the skip-gram baselines. The embedding dimension is 200 and the context window size is 6. Negative sampling is used with the number of negatives set to 5.
Biomedical Name Encoder (BNE). We set the character embedding dimension to 50 and initialize the values randomly. We use 200 dimensions for the output name embeddings. The hidden-state dimensions of both the character- and word-level BiLSTMs are 200. We use the Adam optimizer with a learning rate of 0.001 and a gradient clipping threshold of 5.0. The training batch size is 64. Dropout with a rate of 0.5 is used to regularize the model. The average performance on the validation sets of the biomedical name normalization experiment (see Section 4.3) is used as the criterion to stop training.
Training of BNE. Our proposed model is trained using only the synonym sets in UMLS, i.e., $\mathcal{U} = \{S_c\}$. We limit the synonyms to those of disease concepts and intentionally leave the chemical concepts out for out-of-domain evaluation. As a result, approximately 16 thousand synonym sets (associated with that number of disease concepts) are collected for training; these synonym sets include 156 thousand disease names in total. In each training batch, one positive and one negative pair are sampled separately for each loss. The pre-trained word (or name/concept) embeddings are taken from the skip-gram baselines described above. We denote the two configurations, associated with Options 1 and 2 (see Section 3.2), as BNE + SG_S.C and BNE + SG_W, respectively. Next, we present the evaluations of these models.
Figure 3: t-SNE visualization of 254 name embeddings. These names belong to 10 disease concepts, of which 5 appear in the training data while the other 5 (marked with (*)) do not. It can be observed that BNE projects names of the same concept close to each other. The model also retains closeness between names of related concepts, such as 'parkinson disease' and 'paranoid disorders' (see the blue and olive plus signs).
Figure 4: Mean coverage at k: the average ratio of correct synonyms found among the k nearest neighbors, estimated by cosine similarity of name embeddings. Note that names in these disease and chemical test sets are not seen in the training data.
Closeness Analysis of Synonymous Embeddings
We propose a measure to estimate the closeness between name embeddings of the same concept. For each name, we consider its k most similar names estimated by cosine similarity of their embeddings. We define coverage at k as the ratio of correct synonyms found among the k nearest neighbors, and report the average score over all query names as mean coverage at k. We create two test sets for this experiment, one for disease names and one for chemical names. Given the CTD MEDIC disease vocabulary, we randomly select 1000 concepts and all their corresponding names in UMLS. In this experiment, we exclude these 1000 concepts from the synonym sets used to train the BNE encoder. Furthermore, to ensure the quality of the selected names, we only consider those that appear in the high-quality biomedical phrases collected by Kim et al. (2018). Similarly, we create another test set for chemical names; this chemical set is used to evaluate out-of-domain performance since our model is trained using only disease synonyms.
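A simple illustration of how this measure can be computed from an embedding matrix is sketched below; the handling of names without synonyms and the tie-breaking are our own choices.

```python
import numpy as np

def mean_coverage_at_k(embeddings, concept_ids, k=10):
    """Mean coverage@k: for each query name, the ratio of its true synonyms found
    among its k nearest neighbours by cosine similarity, averaged over queries."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)          # exclude the query itself
    scores = []
    for i in range(len(X)):
        synonyms = {j for j, c in enumerate(concept_ids) if c == concept_ids[i] and j != i}
        if not synonyms:
            continue
        top_k = set(np.argsort(-sims[i])[:k])
        scores.append(len(top_k & synonyms) / len(synonyms))
    return float(np.mean(scores))
```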
As shown in Figure 4, BNE outperforms the other embedding baselines, which do not consider the synonym-based objective. More importantly, the model also generalizes well to out-of-domain data (chemical names). Furthermore, among the skip-gram baselines, the context-based name embedding model (SG_S) is worse than the average word embedding baseline (SG_W). This again indicates that the words in biomedical names are more indicative of their conceptual identities.
The embedding plots in Figure 3 further illustrate the effectiveness of our encoder in enhancing the similarity between synonymous representations. Investigating the name embeddings of the unseen concept 'pseudotumor cerebri', we observe that BNE is robust to the morphology of biomedical names, such as 'benign hypertension intracranial' and 'benign intracran hypt'. The model is also aware of word importance in long names such as 'intracranial pressure increased (benign)'. Moreover, since BNE is trained using synonym sets, the encoder is equipped with knowledge about alternative expressions of biomedical terms, e.g., 'intracranial hypertension' and 'intracranial increased pressure'; this knowledge can be used to infer quality representations for new synonyms. However, similar to the skip-gram baselines, BNE faces serious challenges if a name is unpopular and contains words that do not reflect its conceptual meaning. For example, for this 'pseudotumor cerebri' concept, the name "Nonne's syndrome" is distant from its concept cluster (see the red square located near the blue plus signs in Figure 3).
Table 2: Mean average precision (MAP) performance on the synonym retrieval task. The best and second best results are in boldface and underlined, respectively.
Synonym Retrieval
We evaluate the embeddings in a synonym retrieval application: given a biomedical mention (or name), retrieve all its synonyms from a controlled vocabulary by ranking. We use the NCBI-Disease (Dogan et al., 2014) and BC5CDR (Li et al., 2016) datasets in this evaluation. NCBI-Disease contains disease mentions extracted from PubMed abstracts, while BC5CDR contains both disease and chemical mentions; these mentions are used as queries in the synonym retrieval task. Note that, different from the closeness evaluation, a disease name may or may not appear in the synonym sets used to train the BNE encoder, whereas chemical queries are completely unseen during model training. For each query, we retrieve a list of potentially associated concepts: a concept is retrieved if one of its names is similar to the query (estimated by BM25 score). We collect all names of the top-20 retrieved concepts as a synonym candidate set, and cosine similarity is then used to rank the candidates. We also evaluate the results with the Jaccard and Word Mover's Distance (WMD) (Kusner et al., 2015) measures. As shown in Table 2, SG_W+WMD outperforms the Jaccard baseline (in MAP score), mainly because of its ability to capture semantic matching; however, both baselines are non-parametric. In contrast, BNE+SG_W learns additional knowledge about synonym matching by using the synonym sets in UMLS as training data. Although the model is trained on only disease names, it also generalizes well to chemical names. Furthermore, comparing the two configurations of BNE, BNE+SG_W and BNE+SG_S.C yield comparable performance; BNE+SG_W is, however, simpler since it does not require pre-trained name and concept embeddings.
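The reranking step of this retrieval pipeline can be sketched as follows; the BM25 candidate generation is omitted and the function names are illustrative.

```python
import numpy as np

def rerank_candidates(query_vec, candidate_names, encode):
    """Rerank a synonym-candidate set (e.g. all names of the top-20 concepts
    returned by a BM25 lookup) by cosine similarity to the query embedding."""
    cand = np.stack([encode(n) for n in candidate_names])
    cand = cand / np.linalg.norm(cand, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    order = np.argsort(-(cand @ q))
    return [candidate_names[i] for i in order]
```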
Biomedical Name Normalization
Biomedical name normalization (a.k.a. biomedical concept linking) aims to map each biomedical mention appearing in text to its associated concept in a dictionary. We use the NCBI-Disease and BC5CDR datasets in this evaluation. Similar to previous works, we use Ab3P (Sohn et al., 2008) to resolve local abbreviations. Composite mentions (such as 'pineal and retinal tumors') are split into separate mentions ('pineal tumors' and 'retinal tumors') using simple patterns as described in (D'Souza and Ng, 2015). For each mention, we find the concept CUI (in UMLS) that has the most similar name. The selected CUI is then mapped to its associated MeSH or OMIM ID in the CTD dictionary for evaluation. We only consider mentions whose associated concepts exist in the CTD dictionary and report the accuracy aggregated over all mentions in the test set. Apart from existing baselines, we also re-implement the compositional paraphrase model proposed by Wieting et al. (2015); the difference is that we use a word-level BiLSTM instead of a recursive neural network. Furthermore, $L_2$ regularization with weights of $10^{-3}$ and $10^{-4}$ is applied to the BiLSTM's parameters and to the difference between the trainable and initial word embeddings, respectively. Different from the lexical (Jaccard) and semantic matching (WMD and SG_W) baselines, BNE obtains high scores in both accuracy and ranking-based (MAP) metrics (see Tables 2 and 3), indicating that BNE has encoded both lexical and semantic information of names into their embeddings. Table 3 also includes the performance of other state-of-the-art baselines in biomedical name normalization, such as sieve-based (D'Souza and Ng, 2015), supervised semantic indexing, and coherence-based neural network (Wright et al., 2019) approaches. Note that all these baselines require human-annotated labels and the models are specifically tuned for each dataset, whereas BNE utilizes only the existing synonym sets in UMLS for training. When the dataset-specific annotations are utilized, even a simple exact matching rule boosts the performance of our model to surpass the other baselines (see the last two rows in Table 3).
Semantic Similarity and Relatedness
We evaluate the correlation between embedding cosine similarity and human judgments regarding semantic similarity and relatedness. Different from the previous evaluations, this experiment aims to evaluate conceptual similarity and relatedness. We use two biomedical datasets: MayoSRS (Pakhomov et al., 2011) and UMNSRS (Pakhomov et al., 2016). The former contains multi-word name pairs of related concepts, e.g., 'morning stiffness' (C0457086) and 'rheumatoid arthritis' (C0003873). The latter contains only single-word name pairs and is split into similarity and relatedness partitions; for example, a pair with a high similarity score is 'weakness' (C1883552) and 'paresis' (C0030552). For these two datasets, the names in each pair come from different concepts, hence they do not appear in the synonym pairs used to train our encoder. Furthermore, the coverage of pre-trained word embeddings in baselines such as SG_W is 100% and 97% for UMNSRS and MayoSRS, respectively. Table 4 shows that the BNE models perform especially well on the multi-word relatedness test set (MayoSRS); conceptual information has been utilized by these models to enrich the name representations. On the other hand, when the training is performed solely on the synonym pairs (using only L_syn), the trained model overfits to the training task and does not generalize to other test cases. SG_W is still a strong baseline in these benchmarks. Other skip-gram and fastText embeddings (Pakhomov et al., 2016; Chen et al., 2018), which are trained on a similar corpus, do not achieve better results.
Table 4: Spearman's rank correlation coefficient between cosine similarity scores of name embeddings and human judgments, reported on semantic similarity (sim) and relatedness (rel) benchmarks.
Beam et al. (2018) use an SVD-based word2vec model (Levy et al., 2015) to compute embeddings for biomedical concepts.
Although these embeddings are trained on much larger multimodal medical data, their results are lower than those of the other baselines. Further investigation reveals that many concepts in the test sets do not exist in their pre-trained concept embeddings.
Conclusion
By learning to encode names of the same concept into similar representations, while preserving their conceptual and contextual meanings, our encoder is able to extract meaningful representations for unseen names. The core unit of our encoder in this work is a BiLSTM. Alternatively, sequence encoding models such as GRUs, CNNs, transformers, or even encoders built on contextualized word embeddings like BERT (Devlin et al., 2018) or ELMo (Peters et al., 2018) could replace this BiLSTM, albeit with additional computation cost. We also discuss different ways of representing the contextual and conceptual information in our framework; in our implementation, we use a simple aggregation of pre-trained embeddings. The experimental results show that this approach is both efficient and effective.
"Computer Science"
] |
Convexity adjustment for constant maturity swaps in a multi-curve framework
In this paper we propose a double curving setup with distinct forward and discount curves to price constant maturity swaps (CMS). Using separate curves for discounting and forwarding, we develop a new convexity adjustment, by departing from the restrictive assumption of a flat term structure, and expand our setting to incorporate the more realistic and even challenging case of term structure tilts. We calibrate CMS spreads to market data and numerically compare our adjustments against the Black and SABR (stochastic alpha beta rho) CMS adjustments widely used in the market. Our analysis suggests that the proposed convexity adjustment is significantly larger compared to the Black and SABR adjustments and offers a consistent and robust valuation of CMS spreads across different market conditions.
Introduction
The recent financial crisis has led, among other things, to unprecedented behavior in the money markets, which has created important discrepancies in the valuation of interest rate financial instruments. Important reference rates that used to be highly correlated and move together for a long period of time started to diverge from one another. A characteristic example, which has been widely studied recently, is the widening of the spread between deposit rates (Libor/Euribor) and overnight index swap (OIS) rates of the same maturity. At the same time, the market started observing non-zero spreads between swap rates of the same maturity but based on different frequencies of the underlying Libor rate, or between forward rate agreement (FRA) rates and forward rates implied by consecutive deposits. These examples indicate that financial players consider each tenor as a separate market, incorporating different credit and liquidity premia, and as such, each one of them is driven by its own dynamics.
Such discrepancies have, above all, questioned the methodology used to bootstrap the yield curve, which has created a layer of uncertainty on the methods used to price and hedge interest rate financial instruments. There are three main issues associated with the pre-crisis approach, which make it inconsistent. First, the information incorporated into the basis spreads is not taken into account. Second, using a single yield curve does not allow us to consider the different dynamics introduced by each underlying rate tenor, making hedging and pricing of interest rate derivatives less stable. Finally, the no-arbitrage assumption indicates that a unique discounting curve needs to be used, regardless of the number of the underlying tenors.
In order to comply with these market features, market participants started building a separate forward curve for each given tenor, so that future cash flows are generated using the appropriate curve associated with the underlying rate tenor. At the same time, a single and unique discounting curve had to be used in order to calculate the present value of a contract's future payments. This led financial players to start using the OIS swap curve, rather than the Libor curve, for the construction of a riskless term structure. The reason behind this choice was mainly twofold. First, OIS is believed to contain very little credit and liquidity risk premia compared to Libor rates. Second, the fact that most trades in the interest rate market are (mainly cash) collateralized makes the funding cost for a financial institution no longer equal to the Libor rate, but to the collateral rate instead. For that reason, the Libor rate that was widely used as a proxy for the risk-free (discounting) rate is now replaced by the collateral rate, which is assumed to coincide with the overnight rate (i.e. the fed funds rate for USD, Eonia for EUR, etc.).
The literature on the valuation of interest rate derivatives based on separate curves, for generating future rates and for discounting, is growing rapidly. Previous contributions focus on the valuation of cross currency (basis) swaps (see, Boenkost and Schmidt 2005; Kijima et al. 2009; Fujii et al. 2010; Henrard 2010). Henrard (2007b) is the first to apply this methodology to the single currency case, whereas Bianchetti (2010) is the first to deal with the post-crisis situation. Furthermore, Ametrano and Bianchetti (2009), Chibane and Sheldon (2009) and Morini (2009) develop new methodologies for bootstrapping multiple interest rate yield curves. On the other hand, many contributions focus on extending pricing models to the multi-curve framework. Kijima et al. (2009) apply the methodology to study two short rate models, the Vasicek model and the quadratic Gaussian model, and use them for the valuation of bond options and swaptions. Mercurio (2009, 2010) and Grbac et al. (2015) extend the Libor market model (LMM) to be compatible with the multi-curve practice and price caplets and swaptions, while more recently, Pallavicini and Tarenghi (2010), Crépey et al. (2012), Moreni and Pallavicini (2014) and Cuchiero et al. (2016) extend the classical Heath-Jarrow-Morton (HJM) framework to incorporate multiple curves in order to price interest rate products such as forward starting interest rate swaps (IRS), plain vanilla European swaptions and CMS spread options. Finally, important contributions include Crépey et al. (2015), who develop a Levy-based HJM model for credit value adjustment (CVA), and Fanelli (2016), who develops a defaultable HJM model for pricing basis swaps in a multi-curve setup.
In this paper, we follow the approach described in Mercurio (2010) and Pallavicini and Tarenghi (2010) to price a CMS. A CMS exchanges a swap rate with a fixed time to maturity against fixed or floating. In a common CMS, one would swap a quarterly (e.g. 3-month Libor) or semi-annual rate against a 5 or 10-year swap rate. Whereas a regular floating rate (e.g. 6-month Libor) contains information about short-term interest rates, a CMS rate (e.g. 10-year swap rate) contains information about the overall level of the yield curve. This makes CMS a popular instrument among investors and portfolio managers. It gives investors the ability to place bets on the shape of the yield curve over time. Generally, a constant maturity payer will benefit from a flattening or inversion of the yield curve and is exposed to the risk of the yield curve steepening. It also helps portfolio managers to hedge a floating rate debt without introducing duration risk from the hedging instrument.
The mix of short- and long-term rates in the structure of the CMS makes its value depend on the shape of the yield curve. Standard approaches to its valuation involve the calculation of a convexity adjustment. Such a convexity adjustment cannot be computed exactly, so the previous literature uses either ad hoc approximations or unrealistic assumptions. A common assumption in the relevant literature is that the term structure of interest rates is flat and only parallel shifts are allowed.
There are two main avenues towards pricing a CMS. In the first, one sets up a term structure model and uses some approximation method to compute the expected swap rate, under the forward measure. More specifically, Lu and Neftci (2003) follow this direction and work with two or more forward rates jointly. Using the forward libor model, they price a CMS swap and compare its empirical performance with the standard convexity adjustment proposed by Hagan (2005). They find that the convexity adjustment overestimates CMS swap rates. Similarly, Henrard (2007a) uses one-factor LMM and HJM models to approximate CMS swaps, while Brigo and Mercurio (2006) use a two-factor Gaussian short rate model (G2++ model) to model bond prices associated with CMS products. Finally, in a recent work, Wu and Chen (2010) price different CMS-type interest rate derivatives within the LMM framework. They present a new approach for finding the approximate distribution of a CMS under the forward martingale measure.
In the second direction, one uses replication arguments and the problem is formulated under the swap measure. The price is based on the implied swaption volatilities, which play the role of the distribution of swap rates. For the replication procedure, the change from the forward to the swap measure is needed and the Radon-Nikodym derivatives need to be approximated. Pelsser (2003) is the first to show that the convexity adjustment can be interpreted as the side effect of a change of numeraire. He approximates the measure change by proposing a linearization of the swap rate and obtains analytical solutions for the CMS price. Hagan (2005) obtains closed-form formulae for the pricing of CMS swaps and options by relating them to the swaption market via a static replication approach. Mercurio and Pallavicini (2006) use a strike extrapolation to statically replicate CMS swaps/options by modelling the implied volatilities of European swaptions using the SABR model of Hagan et al. (2002). Finally, in a more recent work, Zheng and Kwok (2011) propose a generalised static replication approach to hedge exotic swap contracts and annuity options using different swaptions.
The main problem with previous contributions is that the yield curve is assumed to be flat and only parallel shifts are allowed. However, in a swap where one pays Libor plus a spread and receives a 10-year CMS rate, the structure is mainly sensitive to the slope of the interest rate yield curve and is almost immunised against parallel shifts. In this paper, following Hagan (2005), we apply the commonly used convexity adjustment in a new double-curving framework. We then develop a new convexity adjustment by departing from the restrictive assumption that the term structure is flat, allowing for a tilt. Using market data for Euro money market instruments (Eonia, Euribor), CMS spreads and swaption volatilities, we find that the new convexity adjustment is significantly larger than the one commonly made in the literature. We finally compare our approach with the SABR CMS adjustment, introduced by Mercurio and Pallavicini (2005, 2006), which is widely used in the market, and we find that our approach provides a better fit to the market's CMS spread prices.
The remainder of this paper is organised as follows. Section 2 presents the valuation framework for the main instruments (FRA, IRS, CMS) considered. Section 3 presents the main result of our work, a new convexity adjustment that takes into account a tilt in the term structure under a double-curving framework. Section 4 briefly describes the smile-consistent convexity adjustment using the SABR model. Sections 5 and 6 present the market data used and describe the numerical calculations. Finally, Sect. 7 concludes.
The valuation framework
This section introduces the definitions of the basic instruments under the multi-curve environment. It mostly follows the works of Brigo and Mercurio (2006) and Mercurio (2009).
We introduce two curves, one for the discounting process, say curve 'd', and one for forwarding, say curve 'f'. Forward rates can be defined on both curves. Let today be time zero and consider a tenor structure $\{T_i\}_{i=0,\dots,n}$, with $T_i < T_{i+1}$. Let $\delta_i = T_{i+1} - T_i$ be the accrual factor for the time interval $[T_i, T_{i+1}]$. Within this structure, for each curve $x \in \{d, f\}$, the time-$t$ (with $t \le T_0$) value of the forward rate is defined by
$$F_x(t; T_i, T_{i+1}) = \frac{1}{\delta_i}\left(\frac{P_x(t, T_i)}{P_x(t, T_{i+1})} - 1\right),$$
where $P_d(t, T_i)$, with $i = 1, \dots, n$, denotes the time-$t$ price of the $T_i$-maturity discount bond. Furthermore, we denote by $Q^{T_i}_d$ the $T_i$-forward probability measure associated with the numeraire $P_d(t, T_i)$, and by $E^{Q^{T_i}_d}$ the related expectation. We assume a given single discount curve for use in the calculation of all net present values (NPVs), i.e., for discounting all future cash flows. This curve is assumed to be the OIS zero-coupon curve, stripped from market OIS swap rates and defined for every possible maturity $T_i$. All pricing measures we consider are those associated with the OIS discount curve 'd'. Following Mercurio (2010), we adopt the standard definition of the FRA rate.
Definition 1 Consider times $t$, $T_1$, $T_2$, with $t \le T_1 < T_2$. The time-$t$ FRA rate $FRA(t; T_1, T_2)$ is defined as the fixed rate to be exchanged at time $T_2$ for the Libor rate $L(T_1, T_2)$, so that the swap has zero value at time $t$.
By no-arbitrage pricing we get
$$FRA(t; T_1, T_2) = E^{Q^{T_2}_d}\big[L(T_1, T_2) \mid \mathcal{F}_t\big],$$
where $Q^{T_2}_d$ denotes the $T_2$-forward measure associated with the numeraire $P_d(t, T_2)$, $E^{Q^{T_2}_d}$ the related expectation, and $\mathcal{F}_t$ the information available in the market at time $t$.
Proposition 1 Any simply-compounded forward rate spanning a time interval ending in $T_i$ is a martingale under the $T_i$-forward measure.
Following Bianchetti (2010) and Mercurio (2010), working under the single-curve framework, where the forward and discount curves coincide ($f \equiv d$), Proposition 1 gives
$$FRA(t; T_1, T_2) = E^{Q^{T_2}_d}\big[L(T_1, T_2) \mid \mathcal{F}_t\big] = F(t; T_1, T_2),$$
where $L(T_1, T_2)$ is the spot Libor rate defined by the usual no-arbitrage relationship between Libor rates and zero-coupon bond prices,
$$L(T_1, T_2) = \frac{1}{\delta}\left(\frac{1}{P(T_1, T_2)} - 1\right),$$
which holds for non-defaultable counterparties and instruments with no liquidity risk. Based on this, we can conclude that the FRA rate $FRA(t; T_1, T_2)$ coincides with the forward Libor rate. In the multi-curve framework, however, this equality does not hold: the forward rate $F_f(t; T_1, T_2)$ is not a martingale under the forward measure $Q^{T_2}_d$, and the FRA rate is different from the forward rate, $FRA(t; T_1, T_2) \ne F_f(t; T_1, T_2)$. Therefore, the present value of a future Libor rate is no longer obtained by discounting the corresponding forward rate, but by discounting the corresponding FRA rate. According to Mercurio (2010), the FRA rate is the natural generalization of a forward rate to the multi-curve case. This has a straightforward implication for the valuation of interest rate swaps.
Interest rate swap
We show how to evaluate an IRS under the multi-curve framework. For simplicity, we assume that the IRS tenors for the fixed and floating legs are the same. The time-$t$ value (with $t \le T_0$) of the floating leg payoff is calculated by taking the discounted expectation under the forward measure $Q^{T_i}_d$. Using Eq. (2), the present value of the swap's floating leg is
$$\sum_{i=1}^{n} \delta_i\, P_d(t, T_i)\, FRA(t; T_{i-1}, T_i).$$
Similarly, the value of the swap's fixed leg is given by the present value of the fixed coupon payments, $K$, paid on the fixed leg's dates,
$$K \sum_{i=1}^{n} \delta_i\, P_d(t, T_i).$$
Thus, the time-$t$ value of the IRS to the fixed-rate payer is
$$IRS(t) = \sum_{i=1}^{n} \delta_i\, P_d(t, T_i)\,\big[FRA(t; T_{i-1}, T_i) - K\big].$$
It follows that the 'fair' forward swap rate that equates the two legs at time $t \le T_0$ is
$$S_t = \frac{\sum_{i=1}^{n} \delta_i\, P_d(t, T_i)\, FRA(t; T_{i-1}, T_i)}{\sum_{i=1}^{n} \delta_i\, P_d(t, T_i)}.$$
This is the forward swap rate of an IRS whose cash flows are generated through curve 'f' and discounted with curve 'd'.
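The multi-curve forward swap rate above (as reconstructed here) can be computed directly from OIS discount factors and forwarding-curve FRA rates; the following sketch assumes common payment dates for both legs, as in the text, and the numbers in the usage example are purely illustrative.

```python
def forward_swap_rate(discount_factors, fra_rates, accruals):
    """Fair forward swap rate in the multi-curve setup: FRA rates from the
    forwarding curve, discount factors P_d(t, T_i) from the OIS curve."""
    annuity = sum(d * p for d, p in zip(accruals, discount_factors))
    floating_leg = sum(d * p * f for d, p, f in zip(accruals, discount_factors, fra_rates))
    return floating_leg / annuity

# Example: a 2-year swap with semi-annual payments.
# forward_swap_rate([0.995, 0.990, 0.985, 0.980],
#                   [0.012, 0.013, 0.014, 0.015],
#                   [0.5, 0.5, 0.5, 0.5])
```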
Constant maturity swap
A constant maturity swap contract is a swap where one of the legs pays (receives) periodically a swap rate with a fixed time to maturity, $c$, while the other leg receives (pays) either fixed or floating. Most commonly, one leg is set to a short-term floating index such as the 3-month Libor rate, while the other is set to a long-term rate such as the 10-year swap rate. Let $\{t_{i,j}\}_{j=0,\dots,c}$ be the set of reset dates associated with the payment times $\{T_i\}$. We suppose that $\tau = 6$ months, assume for simplicity that $t_{i,0} = T_i$, and set $\Delta = \delta/\tau$, where $\Delta$ need not be an integer. The forward swap rate of the $i$th underlying IRS, observed at time $T_{i-1}$, is denoted $S^{t_{i-1,j}}_{T_{i-1}}$, and $X$ is the CMS premium (spread), a constant chosen so that the cost of the instrument at time $t$, when the contract is initiated, is zero. For simplicity, we write $S^{t_{i-1,j}}_t$ for this swap rate and suppose that the counterparty pays floating (i.e. Libor plus the spread) and receives fixed (i.e. the swap rate). The time-$t$ value, with $t \le T_{i-1}$, of the CMS can be obtained by taking the discounted expectation of its payoff under the corresponding forward measures, where, for the FRA, we follow Eq. (2). At this point it is important to emphasize that, naturally, the expectation used to calculate the above payoff is associated with the payment dates $T_i$. However, under the forward measure $Q^{T_i}_d$, the swap rate $S^{t_{i-1,j}}$ is not a martingale. The convexity adjustment arises because the expected payoff is calculated in a world which is forward risk-neutral with respect to a zero-coupon bond; in that world, the expected underlying swap rate (upon which the payoff is based) does not equal the forward swap rate. The convexity adjustment is just the difference between the expected swap rate and the forward swap rate.
When pricing CMS-type derivatives, it is convenient to compute the expectation of the future CMS rates under the forward measure associated with the payment dates. However, the natural martingale measure of the CMS rate is the underlying forward swap measure. The convexity correction arises when one computes the expected value of the CMS rate under the forward measure, which differs from this natural swap measure.
Convexity adjustment
Following Pelsser (2003), we define the convexity adjustment as the difference in the expectation of some quantity (i.e., the swap rate) when the expectations are computed under two different measures (i.e., the forward and swap measures). The expectation can therefore be written as an expectation of a quantity that is a martingale under its own measure, plus an adjustment. This means that the convexity adjustment is the difference in expectation (under the forward measure and the forward swap measure) of the forward swap rate,
$$CA^{t_{i-1,j}}_t = E^{Q^{T_i}_d}\big[S^{t_{i-1,j}}_{T_{i-1}} \mid \mathcal{F}_t\big] - E^{Q^{t_{i,j}}_d}\big[S^{t_{i-1,j}}_{T_{i-1}} \mid \mathcal{F}_t\big] = E^{Q^{T_i}_d}\big[S^{t_{i-1,j}}_{T_{i-1}} \mid \mathcal{F}_t\big] - S^{t_{i-1,j}}_t,$$
where we have used that $S^{t_{i-1,j}}_{T_{i-1}}$ is a martingale under the forward swap measure $Q^{t_{i,j}}_d$. The present value of the CMS follows by discounting these expected cash flows, and, given that the cost of the CMS at time $t$ is zero, the CMS spread is pinned down by the forward swap rate defined in Eq. (9).
The convexity adjustment $CA^{t_{i-1,j}}_t$ is determined by changing the numeraire in the first term of Eq. (13), which, for $i \in \{1, \dots, n\}$, involves a function $G^i_t$ arising from the change of numeraire. The convexity adjustment is then approximated by approximating this $G^i_t$ term.
Flat term structure with parallel shifts
Following Hagan (2005) and Brigo and Mercurio (2006), we initially derive an expression for the convexity adjustment when the term structure is flat and can only evolve through parallel shifts. We denote by $r_t$ the time-$t$ (tenor $\tau$) spot rate. For $t \le T_{i-1}$, the two numeraires are written as functions of $r_t$ (Eqs. (18) and (19)), and, using Eqs. (18) and (19), the function $G^i_t$ is obtained as a function of the flat rate (Eq. (20)). An important assumption we make when working under multiple curves is that the swap rate is no longer a risk-free rate. More specifically, following Liu et al. (2006) and Filipović and Trolle (2013), among others, we assume it to be equal to the risk-free rate $r_t$ (taken to be the OIS rate) plus a spread $X_t$,
$$R_t = r_t + X_t,$$
where $X_t$ is the spread incorporating the credit and liquidity risk premia of the counterparty and $R_t$ is the risky forward swap rate defined in Eq. (10). It is worth mentioning that, under the assumption of a flat term structure, the forward rate $FRA(t; t_{i,j}, t_{i,j+1})$ does not depend on $j$ (i.e. it is assumed constant). We approximate $G$ using a first-order Taylor expansion (Eq. (22)), and, using Eqs. (17) and (22), the convexity adjustment can be approximated accordingly. In the resulting expression there are two expectations we need to calculate. We assume that, under $Q^{t_{i,j}}_d$, the swap rate and the spread follow the lognormal dynamics
$$dS^{t_{i-1,j}}_t = \sigma_{t,S}\, S^{t_{i-1,j}}_t\, dW_{t,S}, \qquad dX_t = \sigma_{t,X}\, X_t\, dW_{t,X},$$
where $W_{t,S}$ and $W_{t,X}$ are two correlated Wiener processes with correlation $\rho_{s,x}$, and $\sigma_{t,S}$ and $\sigma_{t,X}$ are deterministic volatilities. Applying Ito's lemma, the two expectations are obtained in closed form (see "Appendix 1"), and using them the convexity adjustment follows (Eq. (26)). Assuming that there is no spread in the market (i.e. $X_t = 0$), we recover the well-known Black-like adjustment formula proposed by Hagan (2005).
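The closed-form expectations used above follow from the lognormal dynamics of the swap rate and the spread. As a hedged illustration (not the paper's Appendix 1 derivation), the cross moment of two correlated driftless lognormals, which is the kind of term entering the adjustment when the spread is present, can be checked by Monte Carlo:

```python
import numpy as np

def lognormal_cross_moment_mc(s0, x0, sig_s, sig_x, rho, T, n_paths=200_000, seed=0):
    """Monte Carlo estimate of E[S_T X_T] for two correlated driftless lognormals;
    the closed form is s0 * x0 * exp(rho * sig_s * sig_x * T)."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
    s_T = s0 * np.exp(-0.5 * sig_s**2 * T + sig_s * np.sqrt(T) * z1)
    x_T = x0 * np.exp(-0.5 * sig_x**2 * T + sig_x * np.sqrt(T) * z2)
    return float((s_T * x_T).mean())

# lognormal_cross_moment_mc(0.04, 0.01, 0.15, 0.10, 0.9, 5.0)
# ~ 0.04 * 0.01 * exp(0.9 * 0.15 * 0.10 * 5.0)
```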
A term structure with tilts
In this section, we depart from the restrictive and unrealistic assumption of a flat term structure and extend our analysis by allowing for a tilt. Since we no longer assume a flat term structure, the spot rate $r^T_t$ is now given by some deterministic function $f$ of the short rate and maturity,
$$r^T_t = f(r_t, t, T; a),$$
where $r_t$ is the short rate and $a = (a_1, \dots, a_k)$ is a vector of parameters. The $G$ function we need to approximate now depends on both $r_t$ and $t$ through $f(r_t, t, t_{i,j})$. As in the flat term structure case, we approximate $G$ using a first-order Taylor expansion at $(r_t, t)$,
$$G(r, T) \approx G(r_t, t) + G_r(r_t, t)\,(r - r_t) + G_t(r_t, t)\,(T - t),$$
where $G_r$ and $G_t$ denote the partial derivatives of $G$ with respect to $r$ and $t$. Using Eqs. (17) and (30), the convexity adjustment can be approximated accordingly. As before, we assume that under $Q^{t_{i,j}}_d$ the two (lognormal, constant-volatility) processes (i.e. the swap rate and the spread) are martingales and their expectations are given by Eqs. (24) and (25), which yields the convexity adjustment of Eq. (31). We also need to calculate the two terms $G_r(r,t)/G(r,t)$ and $G_t(r,t)/G(r,t)$ that incorporate the partial derivatives; analytical expressions are given in "Appendix 2".
Finally, for the function $f$ we use a parametric functional form based on the well-known Nelson and Siegel model.
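The paper's exact functional form in the parameters $(a, b, k)$ is not reproduced here; as a generic illustration of the kind of parametric curve involved, the classic Nelson-Siegel zero rate can be sketched as follows.

```python
import numpy as np

def nelson_siegel_yield(T, beta0, beta1, beta2, lam):
    """Classic Nelson-Siegel zero rate for maturity T (in years); shown only as a
    generic illustration, not the authors' parametrization in (a, b, k)."""
    x = T / lam
    loading = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * loading + beta2 * (loading - np.exp(-x))

# e.g. nelson_siegel_yield(np.array([1.0, 2.0, 5.0, 10.0, 30.0]), 0.03, -0.02, 0.01, 2.0)
```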
Smile-consistent convexity adjustment
In order to test the proposed CMS convexity adjustments, we compare them with the smile-consistent convexity adjustment, which is widely used in the market. In the presence of a market smile, when the term structure is not flat but may tilt, the adjustment is necessarily more involved if we aim to consistently incorporate the information coming from the quoted implied volatilities. The procedure to derive a smile-consistent convexity adjustment is described in Mercurio and Pallavicini (2006) and Pallavicini and Tarenghi (2010), and is the one we use here.
For the consistent derivation of the CMS convexity adjustment, volatility modelling is required. We use the SABR model (a popular market choice for swaption smile analysis) for the swap rate in order to infer the volatility smile surface. The SABR model assumes that the swap rate $S$ follows the dynamics
$$dS_t = \alpha_t\, S_t^{\beta}\, dZ_t, \qquad d\alpha_t = \nu\, \alpha_t\, dW_t, \qquad d\langle Z, W\rangle_t = \rho\, dt,$$
where $\beta \in (0, 1]$, $\nu$ and $\alpha$ are positive constants, and $\rho \in [-1, 1]$. The CMS convexity adjustment is given in Mercurio and Pallavicini (2006) in terms of the swaption implied volatilities. An approximation for the implied volatility of the swaption with maturity $T_{i-1}$ is derived in Hagan et al. (2002); this formula provides an efficient approximation for the SABR implied volatility at each strike $K$. We consider a different SABR model for each swap rate contained in the CMS payoff and calibrate all the SABR parameters (four parameters $(\alpha, \beta, \rho, \nu)$ for each swap rate) to the swaption volatility smile and the CMS spreads quoted in the market. See Mercurio and Pallavicini (2006) and Pallavicini and Tarenghi (2010) for a detailed description of the calibration procedure.
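For reference, the Hagan et al. (2002) lognormal implied-volatility approximation used in this calibration can be sketched as below; this is the standard textbook formula, not the authors' implementation, and the parameter values in the usage example are illustrative.

```python
import numpy as np

def sabr_implied_vol(F, K, T, alpha, beta, rho, nu):
    """Hagan et al. (2002) lognormal SABR implied-volatility approximation."""
    if abs(F - K) < 1e-12:  # at-the-money expansion
        correction = (((1.0 - beta) ** 2 / 24.0) * alpha ** 2 / F ** (2.0 - 2.0 * beta)
                      + 0.25 * rho * beta * nu * alpha / F ** (1.0 - beta)
                      + (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2)
        return alpha / F ** (1.0 - beta) * (1.0 + correction * T)
    log_fk = np.log(F / K)
    fk_beta = (F * K) ** ((1.0 - beta) / 2.0)
    z = (nu / alpha) * fk_beta * log_fk
    x_z = np.log((np.sqrt(1.0 - 2.0 * rho * z + z ** 2) + z - rho) / (1.0 - rho))
    denom = fk_beta * (1.0 + ((1.0 - beta) ** 2 / 24.0) * log_fk ** 2
                       + ((1.0 - beta) ** 4 / 1920.0) * log_fk ** 4)
    correction = (((1.0 - beta) ** 2 / 24.0) * alpha ** 2 / fk_beta ** 2
                  + 0.25 * rho * beta * nu * alpha / fk_beta
                  + (2.0 - 3.0 * rho ** 2) / 24.0 * nu ** 2)
    return (alpha / denom) * (z / x_z) * (1.0 + correction * T)

# e.g. sabr_implied_vol(F=0.04, K=0.045, T=5.0, alpha=0.02, beta=0.5, rho=-0.2, nu=0.4)
```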
Market data
We use three data sets in this study: one containing Euro money market instruments for the construction of the yield curves; a second containing CMS swap spreads with a maturity of 5 years, where the associated underlying swaps have a 10-year maturity (i.e. $X_{5,10}$); and a third containing swaption volatilities for different strikes, as well as implied Black at-the-money (ATM) swaption volatilities. All market data were collected from Bloomberg. The data sets are presented in detail below:
- For the discounting curve, we use the Eonia fixing and OIS rates from 3 months to 30 years.
- For the 3-month curve, we use the Euribor 3-month fixing, FRA rates up to 15 months, and swaps from 2 to 30 years paying an annual fixed rate in exchange for the Euribor 3-month rate.
- For the 6-month curve, we use the Euribor 6-month fixing, FRA rates up to 2 years, and swaps from 2 to 30 years paying an annual fixed rate in exchange for the Euribor 6-month rate.
The market quotes a value for the CMS spread that makes the CMS swap fair. However, it quotes the spread only for a few CMS swap maturities and tenors (usually 5, 10, 15, 20 and 30 years). In the Euro market, the CMS tenor is 3 months, while the c-year IRS used as indexation in the CMS has Libor payments of 6-month or 1-year frequency. Thus, CMS spreads depend on three different curves in our framework: first, the funding curve used to discount the cash flows of the CMS swap, which we take to be the risk-free curve (i.e. the OIS curve); second, the 3-month forwarding curve for the Euribor rates paid on the second leg of the CMS; and third, the 6-month (or 1-year) forwarding curve for the Euribor rates paid by the indexation IRS.
Empirical results
In this section, we compare numerically the accuracy of the approximations for the CMS convexity adjustments against the Black and SABR model convexity adjustments presented in Sect. 4.
An empirical illustration
Our first numerical example is based on Euro data as of 3 February 2006. We test a CMS with a maturity of 5 years (i.e. $n\delta = 5$), where the associated underlying swaps have a maturity of 10 years (i.e. $c\tau = 10$). The closing price for the CMS spread is $X_{5,10} = 64.9$ basis points (bps). The ATM swaption volatility is $\sigma^{ATM}_{5,10} = 0.15$, and swaption volatilities for different strikes are given in Table 1. For the parameters of the term structure in case 2, we choose the values $(a, b, k) = (0.01, 0.002, 0.1)$. Finally, when we apply the case with the spread, we assume that the spread is constant at $X_t = 100$ bps, its volatility is $\sigma_{t,x} = 0.1$, and the correlation is $\rho_{s,x} = 0.9$. The calibration procedure is performed by minimising the squared difference between the model CMS spreads (and swaption volatilities) and the market data. Our results are summarized in Tables 2, 3 and 4. We denote by case 1 the Black-like (flat term structure) convexity adjustment of Eq. (26) and by case 2 the (tilt term structure) convexity adjustment of Eq. (31). For the Black-like convexity adjustment we set $X_t = 0$ in Eq. (26), while for the SABR model we use Eq. (36).
Table 2: The differences between the market price and the three different models, in basis points (bps). In this case the spread $X_t$ of the swap rate is not taken into account.
Table 3: The differences between the market price and the three different models, in basis points (bps). Here we incorporate the swap spread $X_t$ by assuming that the swap rate is equal to the risk-free rate plus the spread.
Our numerical results suggest that in all cases the market data are well reproduced. As Table 2 reports, when the spread is not taken into account, the SABR model performs slightly better than our new convexity adjustment (case 2), with an absolute difference of 0.83 bps against 0.89 bps, and much better than the Black-like formula (case 1), 0.83 bps against 2.53 bps. However, this is no longer the case when we take the swap spread into account: the absolute difference (in bps) between our new convexity adjustment and the market is significantly smaller than for the SABR case, i.e. 0.1 bps compared to 0.83 bps. Furthermore, as expected, Black's model calibration results, although better than in the non-spread case (1.38 bps against 2.53 bps), still fail to fit the data compared to the other two cases. In addition, in Table 4 we report the convexity adjustments for all four cases. We observe that the convexity adjustment in the 'tilt' case is significantly larger than in the 'flat' case, especially when the swap spread is incorporated. Furthermore, in the non-flat case the convexity adjustment presents a curved shape compared to the earlier Black-like case, where the shape behaves in a more static way.
Numerical examples
In order to further test the accuracy of the approximations for the CMS convexity adjustments against the Black and SABR model convexity adjustments, we calibrate the models to different dates spanning the period from 2007 to 2012. This period covers the most interesting phases of the unfolding of the global financial crisis and, as such, we can draw safer conclusions about how the proposed convexity adjustments perform under different market conditions (i.e. periods of stability and of market turmoil). In Table 5, we present market data for CMS swap spreads (the whole structure from year 1 to year 20 is given), where the maturity of the CMS is 5 years and the associated underlying swaps have a 10-year maturity. Furthermore, the market at-the-money swaption volatilities for all the different dates are reported, while Table 6 reports market volatility smiles across different strikes and for different dates. We can observe that the market data clearly show the levels of turmoil in the market during the period of the financial crisis. In Table 7, we report all calibrated parameters. Our results for the different dates and market data are summarized in Table 8, where all prices are given in basis points and the absolute differences (in bps) between market CMS swap spreads and the models are reported. Our numerical results suggest that in each case and in each period, the convexity adjustment in the case of the 'tilt' term structure (with the swap spread incorporated) gives better results in terms of fitting market CMS spreads. Our new convexity adjustment (tilt) gives sufficiently accurate and robust results across all market scenarios (stability and turmoil) and spread levels. Even in the period of 2008-2011, when the market was experiencing unprecedented turbulence, our convexity adjustment performs well: the difference between the market data and the SABR model is considerably higher than in the case of the 'tilt' term structure, around 7-9 basis points (for SABR) compared to 3-5 basis points (for the 'tilt'). Furthermore, in every case the results lie within the bid-ask spread of around 10 basis points, indicating that the market data are well recovered across all periods. Finally, the convexity adjustments for all the different cases, the (Black-like) flat term structure in red, the 'tilt' term structure in green and the SABR model in blue, are presented in Figs. 1, 2, 3, 4, 5 and 6. All cases take into account the spread on the swap rate. Furthermore, the outcome of the calibration procedure under the SABR model (i.e. the whole volatility smile against different strikes) is presented in the lower panel of the figures, where we observe that the SABR model is perfectly calibrated across the different dates. The only exception is October 2008, i.e. the peak of the financial crisis, when markets were under severe pressure and the SABR model struggles to fit the volatility smile. Regarding the convexity adjustments, in all cases and across the different periods we observe similar characteristics: the convexity adjustment with the 'tilt' term structure is significantly larger than in the other two cases.
Furthermore, the shape of the non-flat case presents a slope compared to the Black-like case where the convexity adjustments are flat and static. This helps the model perform well, especially in periods of market turmoil.
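A Table 8-style comparison reduces to tabulating absolute model-market differences in basis points and checking them against the bid-ask band of roughly 10 bps mentioned above. The sketch below uses invented spread values purely to illustrate the bookkeeping.

```python
# Hypothetical model-implied and market CMS spreads (in bps) for a few dates,
# used only to illustrate how Table 8-style differences are tabulated.
market = {"2007-06": 58.0, "2008-10": 95.0, "2010-05": 83.0}
models = {
    "flat (Black-like)": {"2007-06": 60.5, "2008-10": 104.0, "2010-05": 87.5},
    "tilt":              {"2007-06": 58.9, "2008-10": 99.2,  "2010-05": 84.1},
    "SABR":              {"2007-06": 59.1, "2008-10": 103.5, "2010-05": 85.0},
}
BID_ASK = 10.0  # assumed bid-ask width in bps

for name, quotes in models.items():
    for date, value in quotes.items():
        diff = abs(value - market[date])
        flag = "within bid-ask" if diff <= BID_ASK else "outside bid-ask"
        print(f"{date}  {name:18s} |diff| = {diff:4.1f} bps  ({flag})")
```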
Fig. 4 This figure shows the convexity adjustments for the 'flat' term structure (red), the 'tilt' term structure (green) and the SABR model (blue). In all cases the swap spread is taken into account. In the lower panel, the calibrated volatility smile is displayed. Results are from a specific date, 28/05/2010. (Colour figure online)
Fig. 5 This figure shows the convexity adjustments for the 'flat' term structure (red), the 'tilt' term structure (green) and the SABR model (blue). In all cases the swap spread is taken into account. In the lower panel, the calibrated volatility smile is displayed. Results are from a specific date, 03/06/2011. (Colour figure online)
Fig. 6 This figure shows the convexity adjustments for the 'flat' term structure (red), the 'tilt' term structure (green) and the SABR model (blue). In all cases the swap spread is taken into account. In the lower panel, the calibrated volatility smile is displayed. Results are from a specific date, 09/03/2012. (Colour figure online)
Conclusion
In this paper we have developed a new CMS convexity adjustment in a double-curve framework that separates the discounting and forwarding term structures. The motivation for our study comes from the unprecedented increase in the Libor-OIS spread experienced during the financial crisis, which has called into question the practice of treating both (Libor and OIS) quotes as risk-free, and has raised valid concerns about the construction of zero-coupon curves, which clearly can no longer be based on traditional bootstrapping procedures. In that vein, our work addresses the shortcomings of the single-yield-curve convexity adjustments widely used in the literature when one deals with the issue of convexity in money market instruments.
In the double-curving environment that we describe, we have first derived the required convexity adjustment in the conventional case where the term structure of interest rates is flat and its dynamic evolution allows only for parallel shifts, and we have then extended our setting to incorporate the more realistic and challenging case of term structure tilts. The new adjustment term appears to be approximately linear in the tilt parameter. In all computations, our results show that the convexity adjustment in the 'tilt' term structure case is significantly larger than the convexity adjustments implied by the Black and SABR models.
As an empirical illustration, we have calibrated both convexity adjustments to real market data using swaption volatilities, and calculated the differences between market quotes and our model-implied CMS spreads. We further compared our results with the smile-consistent CMS adjustment widely used by market practitioners, based on the SABR model. We considered a different SABR model for each swap rate contained in the CMS payoff, and we calibrated all the SABR parameters to the swaption volatility smile and the CMS spreads quoted by the market. In all cases the swaption volatility smiles are very well recovered by the calibrated SABR models. Furthermore, our results demonstrate that the proposed convexity adjustments offer a market-consistent and robust valuation of CMS spreads, and suggest that CMS-type products should be priced under a multi-curve framework.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 8,187.2 | 2017-03-06T00:00:00.000 | [
"Economics"
] |
Cross-linking of actin filaments by myosin II is a major contributor to cortical integrity and cell motility in restrictive environments
Cells are frequently required to move in a local environment that physically restricts locomotion, such as during extravasation or metastatic invasion. In order to model these events, we have developed an assay in which vegetative Dictyostelium amoebae undergo chemotaxis under a layer of agarose toward a source of folic acid [Laevsky, G. and Knecht, D. A. (2001). Biotechniques 31, 1140-1149]. As the concentration of agarose is increased from 0.5% to 3% the cells are increasingly inhibited in their ability to move under the agarose. The contribution of myosin II and actin cross-linking proteins to the movement of cells in this restrictive environment has now been examined. Cells lacking myosin II heavy chain (mhcA-) are unable to migrate under agarose overlays of greater than 0.5%, and even at this concentration they move only a short distance from the trough. While attempting to move, the cells become stretched and fragmented due to their inability to retract their uropods. At higher agarose concentrations, the mhcA- cells protrude pseudopods under the agarose, but are unable to pull the cell body underneath. Consistent with a role for myosin II in general cortical stability, GFP-myosin dynamically localizes to the lateral and posterior cortex of cells moving under agarose. Cells lacking the essential light chain of myosin II (mlcE-), have no measurable myosin II motor activity, yet were able to move normally under all agarose concentrations. Mutants lacking either ABP-120 or α-actinin were also able to move under agarose at rates similar to wild-type cells. We hypothesize that myosin stabilizes the actin cortex through its cross-linking activity rather than its motor function and this activity is necessary and sufficient for the maintenance of cortical integrity of cells undergoing movement in a restrictive environment. The actin cross-linkers α-actinin and ABP-120 do not appear to play as major a role as myosin II in providing this cortical integrity.
Introduction
Dictyostelium discoideum is a social, eukaryotic amoeba that normally moves through the soil using chemotaxis to folic acid to search for its bacterial food source (Konijn et al., 1967). When deprived of a food source, cells in a local territory are attracted together to form a multicellular aggregate by chemotaxis to cAMP (Barkley, 1969; Bonner et al., 1969; Konijn et al., 1967). The mechanisms of chemotactic motility have been extensively investigated, and it is clear that the actin cytoskeleton provides the structural framework against which force is applied, allowing cells to change their shape. In addition, new actin filament polymerization provides at least part of the force that drives protrusions during cell motility. How this activity is polarized to allow chemotaxis is not understood; however, it is clear that cell surface receptors for chemotactic factors lead to signals in the 'front' of the cell that are translated into localized activation of the cytoskeleton (Parent et al., 1998).
The organization of these actin filaments into functional arrays and the dynamics of these arrays is also not well understood. To date, more than twenty actin-binding proteins have been discovered in D. discoideum. Among these are a number of actin filament cross-linking proteins, including ABP120 (Condeelis et al., 1981), α-actinin (Fechheimer et al., 1982), fimbrin (Prassler et al., 1997), cortexillins I and II (Faix et al., 1996), and a 34 kDa protein (Fechheimer and Taylor, 1984) that presumably provide rigidity to the cortex. Why the cell needs so many different actin cross-linking proteins, and what specific roles each plays in processes that involve rearrangements of the actin cytoskeleton such as chemotaxis, cytokinesis, endocytosis and phagocytosis, is unclear.
The myosin motors also play a major role in cytoskeletal function. Non-muscle myosin II assembles into minifilaments, and these filaments are able to apply force to move actin filaments relative to each other (Clarke and Spudich, 1974; Hynes et al., 1987; Sheetz et al., 1986). Myosin II minifilament assembly is regulated by the phosphorylation state of the heavy chain tail (Egelhoff et al., 1993) and the motor activity is stimulated by phosphorylation of the regulatory light chains (Griffith et al., 1987). The essential light chain appears necessary for myosin motor function, since myosin lacking this protein assembles minifilaments and binds actin, but has no measurable actin-activated ATPase activity (Chen et al., 1995; Xu et al., 1996). While not generally thought of as an actin cross-linking protein, myosin II minifilaments presumably also have this capability (Wachsstock et al., 1994; Humphrey et al., 2002). Mutants lacking myosin II (mhcA-) are able to accomplish both random and chemotactic motility; however, they move slowly and have defects in pseudopod extension (Peters et al., 1988; Wessels et al., 1988). Although mhcA- cells are able to aggregate, they are unable to complete the developmental program.
The developmental defect of mhcA- cells appears to be due to their inability to move in a restrictive environment. During early development, cells acquire surface adhesion proteins, and so movement occurs while cells are continually making and breaking adhesive contacts with their neighbors as well as the substratum. Unlike movement on a planar substratum, this form of motility is analogous to the movement of metastatic cancer cells away from a primary tumor, or the extravasation of immune cells through capillaries and into a wound site. It requires cells to overcome a barrier of resistance to their movement. Movement in restrictive conditions is also important for Dictyostelium development. Ponte et al. showed that while development of actin-binding protein mutants on agar plates is normal, development on soil plates is defective (Ponte et al., 2000). Soil is presumably a more restrictive environment for cell motility than a planar agar surface. Myosin II seems to be essential for this multidimensional process of migration, apparently by providing cortical integrity, since mhcA- cells became stretched and distorted when attempting to move in aggregation streams.
Surprisingly, cells lacking the essential light chain (mlcE -) behave normally in this environment indicating that the motor activity of myosin is not required for motility in restrictive conditions (Xu et al., 2001). Since the environment of aggregation streams is so complex, we sought to develop a simpler and more versatile means by which cell motility in a restrictive environment could be investigated. An underagarose folate chemotaxis assay has been developed in which cells are induced to move between a planar substratum (glass or plastic) and a layer of deformable agarose of varying stiffness (Laevsky and Knecht, 2001). Using this system, we have investigated the movement of cells lacking specific cytoskeletal proteins. Consistent with our previous results, it appears that the actin binding activity of myosin II, and not the motor activity, is required for movement and cortical stability in this restrictive environment. None of the other actin cross-linkers tested have as major a role in this process as myosin II.
Materials and Methods
Cell culture and conditions
All cell cultures were grown in 100 mm plastic Petri dishes containing 10 ml of HL-5 medium [5 g Bacto® protease peptone #2 (Difco, Detroit, MI, USA), 5 g BBL thiotone E, 10 g glucose, 5 g yeast extract, 0.35 g Na2HPO4, 0.35 g KH2PO4, 0.1 mg/ml ampicillin, 0.1 mg/ml dihydrostreptomycin, to 1 l, pH 6.7]. NC4A2 is an axenic cell line derived from the wild-type NC4 without mutagenesis (Morrison and Harwood, 1992). HK321 is a myosin II heavy chain null mutant (mhcA-) derived from NC4A2. mlcE- is an essential light chain mutant in which the light chain is replaced by the thy1 selectable marker (Chen et al., 1995; Pollenz et al., 1992). ELC+ is a cell line in which the essential light chain gene is integrated back into the genome of mlcE- cells in order to restore mlcE function (Chen et al., 1995; Pollenz et al., 1992). This cell line was used as a control for the mlcE- cells, since the thy- parental of both cell lines (JH10) moves poorly in the conditions of the under-agarose assay. α-Actinin and ABP-120 mutants were generated via homologous targeting in an AX2 parental line (Rivero et al., 1999). Cell lines containing the actin binding domain (ABD) of actin binding protein 120 (ABP120) fused to green fluorescent protein (ABD-GFP) (Pang et al., 1998) were used to localize F-actin. A GFP-myosin expression plasmid (Moores et al., 1996) was used to determine myosin localization.
Under-agarose assay
The under-agarose assay was performed as described previously with minor modifications (Laevsky and Knecht, 2001). 14 ml of SeaKem ® GTG agarose (BMA, Rockland, ME, USA), made with SM medium (Sussman and Sussman, 1967), was poured into 100 mm plastic Petri dishes. The agarose was allowed to solidify for 1 hour at 22°C. Three 2 mm wide troughs were cut 5 mm apart with a standard razor blade (4 cm length) using a template (Fig. 1). 100 µl of 0.1 mM folic acid (Research Organics, Cleveland, OH, USA) was added to the center trough and allowed to form a gradient for 1 hour at room temperature. Cells were harvested, adjusted to 1×10 6 cells/ml for individual analysis and 1×10 7 cells/ml for population analysis. 100 µl of cell suspension was then added to the peripheral troughs.
Analysis of cell movement
Images were taken of the cell populations using a Zeiss® IM inverted microscope (Carl Zeiss, Oberkochen, Germany), a Paultek Imaging Inc. CCD camera (Advanced Imaging Concepts, Princeton, NJ, USA), a Scion Inc. LG3 frame grabber (MVI, Avon, MA, USA) and NIH Image software (developed at the US National Institutes of Health and available on the Internet at http://rsb.info.nih.gov/nih-image/). The overall distance traveled by cells under agarose was determined by measuring the average distance of the ten front-most cells in a field of view from the trough edge, approximately 3 hours after the cells were applied to the trough. The analysis of the movement of mhcA- cells was done near the trough edge as soon as they could be seen to have moved underneath the agarose, in order to examine cells prior to stretching and fragmentation. Individual cell speed and direction change were determined using DIAS® software (Solltech, Oakdale, IA, USA). Speed was calculated using the displacement of the centroid from frame to frame during 1-minute intervals. Direction change was measured as the absolute value of the difference in the direction of movement of the centroid from frame to frame, measured in degrees. Cross-sectional area measurements were made using NIH Image software. The cross-sectional area is measured as the area of the image of a cell seen using phase contrast microscopy.
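As a minimal illustration of the centroid-based measures described above, the sketch below computes frame-to-frame speed and direction change from a hypothetical centroid track, assuming the 1-minute frame interval used in the analysis; it is not the DIAS implementation.

```python
import math

# Hypothetical centroid track (x, y in µm), one position per 1-minute frame.
track = [(0.0, 0.0), (6.5, 1.0), (12.0, 3.5), (18.2, 4.0), (23.0, 7.5)]

def frame_speeds(points, dt_min=1.0):
    """Speed (µm/min) from centroid displacement between consecutive frames."""
    return [math.dist(points[i], points[i + 1]) / dt_min
            for i in range(len(points) - 1)]

def direction_changes(points):
    """Absolute change (degrees) in the direction of centroid movement per frame."""
    headings = [math.degrees(math.atan2(y2 - y1, x2 - x1))
                for (x1, y1), (x2, y2) in zip(points, points[1:])]
    return [abs((h2 - h1 + 180) % 360 - 180) for h1, h2 in zip(headings, headings[1:])]

speeds = frame_speeds(track)
print("mean speed (µm/min):", round(sum(speeds) / len(speeds), 2))
print("direction changes (deg):", [round(d, 1) for d in direction_changes(track)])
```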
Fluorescence imaging
For fluorescence imaging experiments, 0.75 ml of agarose was added to a Rose chamber (Rose et al., 1958), or 4 ml to a 60 mm glass bottom Petri dish (Willco Wells, Amsterdam, Netherlands) so that cells could be imaged through a 0.17 mm thick glass coverslip. Two troughs were cut in the Rose chamber with a 10 mm long razor blade, and the amount of cells and folate was decreased proportionally. Confocal imaging of GFP-labeled cells was performed using a Leica TCS SP2 confocal microscope system (Leica Microsystems, Heidelberg, Germany) and an MRC 600 (Bio-Rad Laboratories, Hercules, CA, USA) equipped with a 25 mW krypton-argon laser and COMOS software.
Motility of cytoskeletal mutants under agarose of varying concentrations
A cell is generally able to generate a particular shape and to change shape as desired using internally generated forces. In order to do this, the cell must be able to overcome the environmental forces that resist these processes, such as membrane tension, hydrostatic pressure, fluid shear etc. The shape changes are generally accepted to be driven by the actin cytoskeleton, which together with accessory proteins make up the cell cortex. Therefore, we use the term 'cortical integrity' to refer to the structural properties of the cell cortex, i.e. its ability to deal with external or internal mechanical forces. When cells are attached to a planar substratum and moving in fluid, there is little in the way of external forces to resist cell shape changes. However, in the under-agarose chemotaxis assay, cells move between the plastic surface and a sheet of agarose ( Fig. 1). As the cells move out of the trough to move up the folate gradient, they must deform the agarose upward and at the same time become flattened (Laevsky and Knecht, 2001). As the stiffness of the agarose increases, cells have more and more difficulty deforming it, until at 3% agarose, the wildtype cells can no longer move at all. This is likely to occur because at this concentration, the cells no longer have sufficient cortical integrity to deform the agarose sheet upwards. If so, then cells with reduced cortical stiffness should show defects in moving under lower concentrations of agarose compared with wild-type cells. In order to examine this possibility, several mutants that might be expected to have reduced cortical integrity were examined. ABP-120 and αactinin are the two major actin cross-linking proteins found in the cortex of Dictyostelium cells (Condeelis et al., 1984). Gene disruption mutants have been isolated that lack either of these proteins and these cells have measurable but not dramatic alterations in cytoskeletal function and motility (Cox et al., 1992;Cox et al., 1996;Noegel et al., 1989). However, mutants lacking either protein showed normal movement in the underagarose chemotaxis assay (Table 1). This result indicates that neither protein is required for under-agarose motility.
Cells lacking myosin II (mhcA-) are able to move on a liquid-covered planar surface at rates about one third of those of wild-type cells (Wessels et al., 1988). However, these mutants are unable to penetrate aggregation streams, which are presumed to be a viscous, restrictive environment (Clow and McNally, 1999; Shelden and Knecht, 1995). In order to examine more directly whether this defect is the result of their inability to move in a restrictive environment, the under-agarose chemotaxis assay was used to examine the motility of the mhcA- cells. In 0.5% agarose, wild-type cells moved relatively freely out of the troughs within 1 hour and continued to do so over the next 9 hours, reaching a distance of about 3000 µm from the trough (Laevsky and Knecht, 2001) (Fig. 2). In contrast, few mhcA- cells moved out of the trough, and those that did never migrated more than 500 µm from the trough (Fig. 2). Because of this, the movement of individual mhcA- cells was measured near the trough edge soon after exit. The speed of these cells was about two thirds of the wild-type cell speed and their movement was directed toward the folate trough (Table 1). At agarose concentrations of 1% or above, the mhcA- cells did not move out of the troughs at all (Figs 2 and 3). In order to confirm that the inhibition of mhcA- movement was due to the stiffness of the agarose and not the adherence of the agarose to the plastic dish, the same experiment was performed except that the agarose layer was either rotated 180° or lifted out and placed in a fresh dish prior to cutting the troughs. The same results were obtained when the agarose was freed from the surface in this way (data not shown), indicating that it is the local deformation of the agarose and not the adhesion of the agarose to the surface that inhibits the movement of mhcA- cells.
Table 1 notes: Individual cell speed, the number of direction changes and surface area were determined as described in the Materials and Methods section. Transmitted light images were acquired under the indicated agarose concentration and quantified using DIAS® imaging software. Surface area measurements were determined using NIH Image software. *Significant deviation from parental control values. The data are means ± s.d. (n ≥ 10). Differences between means were checked for significance (P < 0.05) with a two-way analysis of variance and the Student's t-test (a minimal numerical sketch of this comparison follows the table notes below).
AX2 is the parent of the 120- and 95- cell lines. NC4A2 is the parent of the mhcA- cells. The ELC+ serves as a control for the ELC- cell line (see Materials and Methods).
na, not applicable.
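A minimal numerical sketch of the statistical comparison described in the table notes (means ± s.d. and a Student's t-test at P < 0.05; the two-way ANOVA step is omitted) is given below. The per-cell speeds are invented for illustration only.

```python
from statistics import mean, stdev
from scipy import stats

# Hypothetical per-cell speeds (µm/min) for a parental line and a mutant.
parental = [9.8, 10.5, 11.2, 9.4, 10.9, 10.1, 11.0, 9.9, 10.4, 10.7]
mutant   = [6.9, 7.4, 6.2, 7.8, 6.5, 7.1, 6.8, 7.3, 6.6, 7.0]

for name, values in [("parental", parental), ("mutant", mutant)]:
    print(f"{name}: {mean(values):.1f} ± {stdev(values):.1f} µm/min (n={len(values)})")

t, p = stats.ttest_ind(parental, mutant)
print(f"Student's t-test: t = {t:.2f}, P = {p:.2e}",
      "(significant at P < 0.05)" if p < 0.05 else "(not significant)")
```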
Effects of agarose overlay on mhcA- morphology
When moving under 0.5% agarose, the mhcA- cells did not move as far or as fast as the wild-type cells; however, the most unusual aspect of their behavior was the dramatic elongation of the cells as they attempted to move (Figs 3, 4, 5). This behavior is reminiscent of mhcA- cells moving in wild-type aggregation streams in the chimeric aggregation assay (Xu et al., 1996) (see Discussion).
However, it was difficult in that assay to pinpoint the precise cause of the stretching. In the under-agarose assay, the stretched appearance of cells was found to be due to a failure of retraction of the rear of the cell body. Dictyostelium cells do not normally have a well-defined uropod, but this process generated a structure resembling the uropod of mammalian cells. Time-lapse analysis of cell movement indicated that the rear of the cell would often become stuck to the surface while the cell body continued to move. This uropod would eventually only be connected by a thin bridge of cytoplasm and this bridge sometimes broke as the cell body moved away (Fig. 4). Even cells that did not fragment ceased moving about 500 µm from the trough edge (Fig. 2). The posterior of wild-type cells is enriched in F-actin as shown by the bright fluorescence of the GFP-ABD120 probe in this region (Laevsky and Knecht, 2001). The posterior of mhcAcells is also enriched in F-actin and the probe is concentrated in the cytoplasts released from mhcAcells (Fig. 5). This loss of cellular actin may account for the eventual cessation of movement by these cells (Fig. 5). However, it is interesting to note that cells that do not fragment also eventually stop moving about 500 µm from the trough edge. Previously, we showed that increasing the concentration of agarose results in an increased surface area of wild-type cells, indicating that the cell is less able to deform the agarose upward and becomes more compressed and flattened as a result (Laevsky and Knecht, 2001). If the cortex of the mhcAcells were flaccid, one would expect that they would have a greater surface area than wild-type cells at the same agarose concentration. The elongated appearance of the mhcAcells makes this comparison more complicated, however, the mhcAcells that were able to move under 0.5% agarose did have a significantly greater surface area than wild-type cells (Table 1).
There are two possible reasons why the mhcA- cells might not be able to exit the troughs at high agarose concentrations. The first possibility relates to their behavior when moving under 0.5% agarose. The stretching and fragmentation indicates that the cells have trouble releasing and retracting their uropods. While this is seen to some extent when mhcA- cells are moving in liquid media without an agarose overlay (D.A.K., unpublished observations), it is far more dramatic under agarose. Thus it is possible that at 1% and higher agarose concentrations, this problem is magnified. In this scenario, the cells would not move beyond the trough edge because once they move the cell body underneath the agarose at the edge, they become trapped because they cannot retract their uropods and move any farther. If this were the case, we would see stretched cells trapped under the agarose at the trough edge. The second possibility is that the cells cannot generate sufficient force to deform the stiffer agarose and move the cell body underneath it in the first place. In order to distinguish between these possibilities, mhcA- cells were examined at the edge of the trough as they tried to move out under the agarose (Fig. 6). The cells moved to the edge of the trough, and then frequently extended pseudopods under the agarose sheet, but the cell body was never able to move underneath. However, the cells were able to withdraw the pseudopod and continue moving along the agarose interface. Stretched cells under the agarose at the edge were not observed, indicating that uropod retraction was not the cause of the defect. These data indicate that the defect of mhcA- cells lies in creating the force necessary to push the stiffer agarose out of the way and move the cell body underneath.
Essential light chain mutants move normally under agarose
Myosin II from cells lacking the essential light chain of myosin (mlcE-) has actin-binding activity, but lacks ATPase motor function (Chen et al., 1995; Xu et al., 2001). In the chimeric aggregation assay, mlcE- cells moved normally and did not become elongated like mhcA- cells (Xu et al., 2001). mlcE- cells were tested in the under-agarose chemotaxis assay to see if the motor function of myosin was necessary for movement in this restrictive environment. Under all agarose concentrations tested, mlcE- cells moved the same distance and at the same speed as the control rescued cells in which the essential light chain was reintroduced (Table 1). DIAS analysis of individual cell behavior indicated that the rate of direction change was consistent with cells undergoing positive chemotaxis, as opposed to cells moving randomly (Table 1) (Laevsky and Knecht, 2001). The mlcE- mutants maintained a surface area of about 198 µm², similar to that of the control cells (Fig. 2, Table 1), and became comparably flatter under 2.5% agarose. Morphologically, no obvious difference was seen between the two cell lines when viewed under agarose (Fig. 3).
Localization of F-actin and myosin II during under-agarose chemotaxis
In order to determine if the localization of F-actin in mlcEcells was altered, the GFP-ABD120 probe was introduced into the cells (Pang et al., 1998). This probe dynamically associates with F-actin filaments in live cells allowing visualization of the actin cortex. In both wild-type and mlcEcells moving under agarose, the probe localized to an arc around the posterior and rear edge of the cell and transiently to new protrusions at the leading edge (Fig. 7A,B). No significant difference in the localization of this probe was observed in mlcEcells. Previous work has shown that myosin is distributed throughout the cortex in cells in buffer or media, but when placed under agarose, it rapidly relocalizes to the rear of the cell (Neujahr et al., 1997;Yumura et al., 1984). In order to examine the localization of myosin II during under-agarose chemotaxis, wild-type and mlcEcells, expressing myosin II-GFP were examined. Confocal optical slices about 0.5 µm thick were acquired every 5 seconds at a point just above the surface of the coverslip. In both cell types, myosin II is concentrated in an arc at the rear of cells undergoing underagarose chemotaxis (Fig. 7C,D). In addition to its prevalent localization in the rear, myosin II is also found to transiently localize to small patches of the cortex at the front of the cell. No significant differences in myosin-GFP localization were observed between the wild-type and the mlcEcells during under-agarose motility.
In order to visualize the three-dimensional localization of myosin through the volume of the cell, 0.2 µm thick z-sections were acquired with the confocal microscope. The cells are about 4-5 µm thick under this condition, and actin (not shown) and myosin II are present in an arc or ring at the edge of the cell throughout much of this volume (Fig. 8). There is much less myosin near the dorsal surface of the cell. This conical wall of myosin (and actin) sometimes extends all the way around the cell (Fig. 8), but is frequently present just in the rear half of the cortex, as in the cells shown in Fig. 7. The only significant difference between wild-type and mlcE- cells was that the latter frequently had small round dots of fluorescence in the rear of the cell, which probably results from a disassembly defect of myosin lacking the essential light chain. We hypothesize that the cortical rim of acto-myosin is the structural element that allows the cell to resist the downward pressure of the agarose and move in this environment.
Discussion
Movement of cells on a planar surface requires protrusion of the membrane at the leading edge, adhesion of this new protrusion to the surface, and retraction of the cell body. Depending upon where adhesion to the surface is concentrated, adhesions must either be released from the surface or the pull of the cell body must overcome the force of adhesion. It has been proposed that myosin II is the contractile motor that causes tail retraction. However, for many cell types, there is not a distinct 'retraction' event. Instead, the cell moves smoothly along, with protrusion and retraction apparently occurring simultaneously. Also, it is clear that cells lacking myosin II can move on surfaces, albeit at a slower rate than wild-type cells (Wessels et al., 1988). If the cell is moving in a restrictive environment or a three-dimensional matrix, the situation becomes even more complex, as there is no longer a 'dorsal' or 'ventral' side of the cell and adhesion can take place anywhere on the surface. We have begun to investigate the issue of how a cell 'squeezes' itself through a restrictive environment that provides adhesive surfaces on more than one side. In this situation, the cell is subjected to additional stresses as it must push against resisting structures or resist the pushing of other cells. We use the term cortical integrity to refer to the ability of the actin cortex to apply and resist these external forces. An example of this type of movement would be a neutrophil or macrophage extravasating through a capillary wall, or a metastatic cancer cell invading a tissue layer.
Our results indicate that myosin II is a surprisingly important player in the maintenance of cortical integrity, especially when a cell is challenged to move in a restrictive environment. Even more surprising is the finding that this action of myosin II does not appear to require the normal contractile activity. The most likely interpretation of this result is that ELC-myosin II retains the ability to bind and cross-link actin filaments and thereby the cortex, in addition to its ability to rearrange those filaments when called upon to contract. This result is consistent with rheological measurements that show that mixing myosin II with actin filaments in the presence of ADP can dramatically stiffen the matrix (Humphrey et al., 2002). How the cell might regulate this aspect of myosin function is unknown, but a precedent exists in the latch state of smooth muscle myosin where force production is not always directly linked to actin binding (Sweeney, 1998).
We and others have previously shown that cells lacking myosin II are unable to accomplish morphogenetic movements (Clow and McNally, 1999; Shelden and Knecht, 1995). In aggregation streams the mhcA- cells were unable to move amidst the mass of adhered cells and became dramatically stretched as they tried to make and break contacts with neighboring cells in this environment. The defect was interpreted as a failure in cortical integrity, allowing cells to be stretched abnormally by externally applied forces. Surprisingly, cells lacking the essential light chains of myosin II behaved normally in this chimeric aggregation assay (Xu et al., 1996). In the absence of the essential light chain, myosin is found associated with the actin cortex, so presumably it can bind actin, but there is minimal actin-activated ATPase motor activity (Chen et al., 1995; Xu et al., 1996). This result indicated that the motor activity of myosin II is not required for the maintenance of cortical integrity.
We envision at least three distinct force-generating steps in movement under agarose. First is the protrusive force at the leading lamella or pseudopod causing forward movement of the leading edge. Because this part of the cell is relatively thin, and there is a small space between the agarose and the planar surface, the agarose concentrations we are using are probably not especially inhibitory to this protrusion process. The second step is the upward deformation of the agarose necessary to allow the thickest part of the cell (the nuclear region) to squeeze underneath. This localized upward deformation as the cell crawls was shown to occur by tracking the movement of fluorescent beads embedded in the agarose (Laevsky and Knecht, 2001). As the agarose concentration is increased, it becomes more and more difficult for the cells to deform the agarose, and so movement slows down and eventually ceases around 3% agarose. The third step is the retraction of the rear of the cell. Dictyostelium cells do not have well defined uropods, and therefore it was not obvious that this would be separate from the translocation of the cell body. However, the finding that cells lacking myosin II have long trailing extensions of cytoplasm when moving under 0.5% agarose indicates that the detachment of the rear of the cell is indeed a separate and important issue in translocation. However, the stretching is not simply a matter of increased surface adhesion. Jay et al. examined the movement of mhcA- cells on surfaces of varying adhesiveness and did not observe the uropods being left behind (Jay et al., 1995). Instead, the mhcA- cells were unable to move at all on sticky surfaces that the wild-type could still crawl on.
The inability of the mhcA- mutant cells to move under concentrations of agarose above 0.5% is likely to be a different problem. In this situation, the cells still make protrusions under the agarose at the trough edge, but the cell body never flattens and continues up the gradient. Thus this is not a problem of rear retraction, since the cells never get the uropod underneath the agarose. A clue to understanding this phenotype comes from visualization of the dynamic localization of GFP-myosin in these cells. Actin and myosin are not prominent in the ventral or dorsal cortex of cells under agarose, but are enriched in the peripheral cortex, either in the rear portion as an arc, or surrounding the cell (Figs 7 and 8). This vertical ridge of actomyosin is likely to be responsible for deforming the agarose upward and allowing the nucleus to fit underneath. In the absence of myosin II, we presume that the cortex does not have the stiffness to deform higher agarose concentrations, and so the nucleus cannot fit underneath and the cells are trapped at the trough edge.
The model in Fig. 9 illustrates the events proposed to occur during protrusion and retraction. The crosshatching indicates the orthogonal network of actin filaments that lies beneath the membrane. This network would be held together by actin binding proteins, such as ABP-120, α-actinin, cortexillin, talin and myosin II. The wild-type, mlcE- and mhcA- cells are able to extend protrusions under the agarose (Fig. 9A,C). The linkage between the pseudopod and the cell body in wild-type and mlcE- cells is retained and the cell moves as an integral unit under the agarose (Fig. 9B). The mhcA- cell (Fig. 9D) is able to retract the nuclear region under 0.5% agarose, but not under 2.5% agarose. At the higher agarose concentrations, the cell apparently cannot produce sufficient force to make further progress.
The implication of these results is that the cell cortex acts to integrate the cell as a whole, and myosin II is crucial to integrating the actin cortex. The surprising result is that normal contractile activity is not needed for myosin II to carry out this function. It is possible that some contractile activity below the limit of our assays is present in ELC-myosin, and this is sufficient to allow myosin II to integrate the cortex. It has been determined that Aspergillus nidulans myosin I mutants with less than 1% of wild-type actin-activated MgATPase activity retain essential in vivo functions (Liu et al., 2001). However, we have shown that mlcEcells cannot undergo contraction of detergent extracted cortices, which would be a direct test of contractile activity of myosin in situ (Xu et al., 2001). Another possibility is that because the actin-activated ATPase activity is lost in the mlcEmutant, this mutant myosin has become a permanent actin cross-linker and this cross-linking activity replaces the normal contractile activity of myosin. This is possible, but it seems unlikely that such a dramatic change in function could allow cells to behave so normally or would allow normal organization of the actin filament network. We favor a third hypothesis, that as in smooth muscle, non-muscle myosin II is not constitutively applying force to the actin cytoskeleton, but can enter a state in which it is bound to actin like a cross-linker, while not actively engaged in the ATPase cycle. Myosin II may only be called upon to contract when the cell changes shape, as happens in cytokinesis. The mlcEmutation would allow the myosin II to function in its crosslinking state, but not enter a contractile mode.
Our data indirectly indicates that the cortex of mhcAis less stiff than wild-type cells. Attempts have also been made to directly measure the cortical integrity of cells using biophysical techniques. The results are contradictory and confusing. Pasternak (Pasternak et al., 1989) showed only a slight decrease (32%) in the cortical stiffness of mhcAcells using a 'cell poker' that measured the resistance of the cell to inward deformation. Egelhoff (Egelhoff et al., 1996), using a vibrating glass rod, measured a 50% decrease in cortical stiffness in myosin II mutant cells. However, Merkel et al. (Merkel et al., 2000) used a pipette aspiration system and found a dramatic increase in the resistance of mhcAcells to outward deformation from a suction pipette, indicating a stiffer cortex in the myosin II mutants. Feneberg et al. (Feneberg et al., 2001) used a microrheology technique based on colloidal magnetic tweezers to measure the viscoelastic forces within the cytoplasm. They found the apparent viscosity of myosin II null mutants was higher, also implying a stiffer cortex. Some of the discrepancies may be the result of the methodologies used. Live cell imaging of the actin cortex in cells containing the GFP-ABD120 probe shows that any time a cell makes contact with an object (another cell, a bead or an obstacle), there is a rapid accumulation of F-actin in the contact region (D.A.K., unpublished observations). Thus, application of a pipette or poker may lead to an actin polymerization response that will interfere with the measurements. By using a biological assay, we have directly evaluated the functionality of the cortex in what is to the cells, a relatively normal environment.
ABP-120 and α-actinin are, by mass, the two major actin cross-linking proteins in the cell (Condeelis et al., 1981;Condeelis and Vahey, 1982). Thus it is surprising that mutants lacking these proteins had no altered phenotype in this assay. This result indicates that not only is myosin II important for cortical integrity, but that so far, it is the single most important protein providing this function. Clearly, cells lacking myosin II have some cortical integrity or they would not be able to move at all. This residual cortical integrity is presumably supplied by the myriad of other actin cross-linkers or the rheological properties of actin filaments themselves.
Our model is not intended to suggest that cells do not require the motor activity of myosin II. Mutants lacking the essential light chain (and thus motor activity) are unable to divide in suspension and have defects in multicellular development (Chen et al., 1995). In addition, we have previously shown that myosin II contractile activity is needed for cells to generate shape in suspension or to elongate vertically off a surface (Shelden and Knecht, 1996) and the essential light chain mutants are defective in both functions (Xu et al., 2001). Our model, therefore, proposes that myosin plays a major role in maintaining the physical integrity of the actin cortex, and that its function can be separated into contractile and actin-binding activities. | 8,205.4 | 2003-09-15T00:00:00.000 | [
"Biology"
] |
Electrochemical Oxidation of Fragrances 4-Allyl and 4-Propenylbenzenes on Platinum and Carbon Paste Electrodes
The electrochemical oxidation behaviors of 4-allylbenzenes (estragole, safrole and eugenol) and 4-propenylbenzenes (anethole, asarone and isoeugenol) on platinum and carbon paste electrodes were investigated in Britton-Robinson buffer (pH = 2.93 and 10.93), acetate buffer and phosphate buffer solutions (pH = 2.19 and 6.67), and in acetonitrile containing various supporting electrolytes, including lithium perchlorate. Their oxidation potentials are discussed in terms of Hammett (linear free-energy) relationships, and possible reaction mechanisms are proposed.
INTRODUCTION
Fragrances that are air-sensitive may form peroxides, respiratory irritants, and aerosol particles that cause inflammatory responses in the lungs. [10-14] The structurally related substituted 4-allylbenzene derivatives (eugenol, estragole and safrole) and 4-propenylbenzene derivatives (isoeugenol, anethole and asarone) occur naturally in various traditional foods, particularly in spices such as cloves, cinnamon and basil. [1] Some of them have been demonstrated to be effective, inexpensive anesthetic agents, antioxidants and blood circulation enhancers. The major analytical methods for analyzing alkenylbenzene fragrances are gas chromatography and gas chromatography-mass spectrometry. [15] There are few electrochemical studies reported for pharmaceutical formulations. [16-23] The present work is concerned with the measurement of aromatic substituent effects and the structural elucidation of 4-allyl- and 4-propenylbenzenes.
Voltammetric Measurements
The three voltammetric techniques, sampled DC, linear sweep and cyclic voltammetry, were all performed on platinum and carbon paste electrodes. Cyclic voltammograms (CVs) of the fragrances were taken on a platinum electrode in acetonitrile containing various supporting electrolytes, and in Britton-Robinson buffer solutions (pH = 2.93-10.93), to monitor the current as a function of potential.
Supporting Electrolytes and Solvent Effects
There are several ways in which the supporting electrolyte/solvent system can influence mass transfer, the electrode reaction (electron transfer), and the chemical reactions which are coupled to the electron transfer. [24] The effects of supporting electrolytes and solvent composition on the peak potential (Ep) and peak current (ip) of the 4-allyl- and 4-propenylbenzene fragrances are listed in Table 1. Table 1 shows that the peak currents of the fragrances in the non-aqueous solvent (100% acetonitrile) were higher than in the aqueous-organic solvent (30% acetonitrile), owing to the higher background current in the non-aqueous solvent. However, the peak potentials of the fragrances were less positive in the aqueous-organic solvents, which are therefore more suitable for oxidation. The electrooxidation process occurs in the heterogeneous phase. In the non-aqueous solvent, the solvent molecules completely cover the electrode surface and prevent the adsorption of the fragrances. Furthermore, in organic work, strongly basic anions or radical anions are often produced, and these are rapidly protonated by the solvent.
Table 1. Effect of supporting electrolytes on the cyclic voltammetric peak potential (Ep) and peak current (ip) of α-asarone, trans-anethole, isoeugenol, safrole, estragole and eugenol at a platinum electrode. The concentration of fragrances was 1 mmol dm-3; scan rate v = 50 mV/s.
These results can be accounted for by the film formed on the platinum surface by the larger tetrabutylammonium ion compared with the tetraethylammonium ion. However, the Ep values obtained with the quaternary ammonium ion films on the Pt surface are very similar. Comparing the quaternary ammonium ions (R4N+) with the lithium ion (Li+), the Ep of safrole in Bu4NClO4, Et4NClO4 and LiClO4 shows that the small lithium cation gives a less positive value (1.39 V) than the quaternary ammonium ions (1.57 V) (Figure 1). Indeed, as reported in the literature, [25] the bulky hydrophobic alkyl groups give stronger van der Waals forces of cohesion between the ammonium groups, leading to a more compact hydrophobic adsorbed layer. The first peaks of estragole are expected to correspond to two one-electron oxidations (Ep at 1.67 V and 2.13 V) in the non-aqueous solvent (acetonitrile) and to one two-electron oxidation (Ep at 1.37 V) in the aqueous-organic solvent (30% acetonitrile) (Figure 2). These data show that the peak potentials shift more positively in 100% acetonitrile, because its molecules completely cover the electrode surface and prevent the adsorption of estragole, whereas the results in 30% acetonitrile point to an appreciable amount of estragole adsorbed on the electrode surface. [26] The total number of electrons was determined by controlled-potential coulometry at a platinum electrode. The accumulated charge (Q) is read from the digital coulometer at the potential corresponding to the peak current of the oxidation wave. Applying the equation Q = nFw/M, where w is the weight of the sample in grams and M its molecular weight, the value of n for estragole is found to be two electrons. [19,27] Because the C=C double bond of estragole is not conjugated with the benzene ring, its oxidation occurs at a markedly higher potential (1.61 V) than that of trans-anethole (1.25 V), in which the double bond is conjugated with the ring, in acetonitrile containing LiClO4. A possible mechanism is given below (Scheme).
Substituted Group Effects
Insofar as electrons are transferred in the potential-determining step, the transition state is more electron-rich than the reactant, and electron-donating substituents will facilitate the oxidation process. [28] The voltammetrically active groups (i.e. hydroxyl and methoxyl) are electron-donating substituents; attached to the 4-allylbenzene or 4-propenylbenzene nucleus (structures shown in Scheme 1), they affect the electronic distribution within that nucleus. The substituent constant values can be quantitatively divided into the sum of independent inductive and resonance contributions. [29] The following peak potentials are reported for substituted 4-allylbenzenes in LiClO4/CH3CN (w(CH3CN) = 100%): hydroxyl +1.16 V, methoxyl +1.64 V and methylenedioxy +1.49 V (Figure 3). Eugenol is thus oxidized more easily than estragole and safrole. The same substituents on the 4-propenylbenzenes show a resonance effect, because of the interaction between the double bond and the benzene ring, and give three peaks. Figure 4 demonstrates the effect of the hydroxyl and methoxyl substituents on the oxidation of isoeugenol and trans-anethole, both of which show three peaks.
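The coulometric determination of n quoted above follows directly from rearranging Q = nFw/M. In the sketch below, only the Faraday constant and the molecular weight of estragole are fixed quantities; the accumulated charge and sample mass are hypothetical values chosen to illustrate a two-electron result.

```python
F = 96485.0          # Faraday constant, C/mol
M_ESTRAGOLE = 148.2  # molecular weight of estragole, g/mol

def electrons_transferred(Q_coulombs, sample_mass_g, molar_mass):
    """Solve Q = n*F*w/M for n, the number of electrons per molecule."""
    return Q_coulombs * molar_mass / (F * sample_mass_g)

# Hypothetical coulometry run: 2.0 mg of estragole, 2.6 C of accumulated charge.
n = electrons_transferred(Q_coulombs=2.6, sample_mass_g=0.0020, molar_mass=M_ESTRAGOLE)
print(f"n ≈ {n:.2f} electrons")  # ≈ 2, consistent with a two-electron oxidation
```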
The vibrational spectroscopic features of the fragrances demonstrate the relationship between the substituents and the vinyl double bond. Both ATR-IR and Raman spectra show aromatic and conjugated C=C bands at about 1600 cm-1 (Figure 5(a) and (b)). The ATR-IR spectra of the 4-allylbenzenes (safrole, estragole and eugenol) are highly similar, showing the same weak bands at 1611 cm-1 for ν(C=C)ar and 1637 cm-1 for ν(C=C)vin, a strong band at 1508 cm-1 for ν(-OCH3) in estragole and eugenol, and a strong band at 1242 cm-1 for ν(C-O). On the other hand, the Raman spectra of these compounds show quite similar strong bands at 1584 cm-1 for ν(C=C)ar and 1618 cm-1 for ν(C=C)vin. However, Figure 6 shows that the 1637 cm-1 ν(C=C)vin band of trans-anethole is not apparent, because of the resonance effect between the double bond and the benzene ring.
pH Effects of Fragrances on the Platinum and Carbon Paste Electrodes
The Ep and il values obtained by sampled DC voltammetric oxidation are found to vary substantially over a wide pH range. The Ep and il values obtained in the present work on the platinum and carbon paste electrodes are listed in Table 2. Note that the decrease of Ep with increasing pH is not obvious in weakly acidic media (pH = 2.93-5.39), but becomes significant above pH = 6.14. Unlike the other fragrances, safrole and estragole show two peaks above pH = 6.14 in alkaline media. However, there are two discrepancies between the voltammetric behaviors on the platinum (Pt) and carbon paste electrodes (CPE): (1) the limiting current (il) is higher in strongly acidic media (pH = 2.93-3.89) at the CPE, but higher in weakly acidic media (pH = 6.18-6.83) at Pt; (2) the decrease of Ep with increasing pH is clearly more regular at Pt than at the CPE. Our analyses of the effect of pH and supporting electrolytes on the oxidation peak current and peak potential of the fragrances in acidic solutions (pH = 2.19-6.83) and in lithium perchlorate electrolyte (pH = 6.04) showed that the peaks shifted to less positive potentials in acetate buffer and that the peak current in phosphate buffer (pH = 2.19) was higher than in the others (Figure 7). This indicates that the oxidation of the fragrances is strongly pH-dependent. The Ep - Ep/2 values (Ep/2 = half-peak potential) at pH = 2.93-8.81 at the Pt and CPE electrodes are shown in Table 3. Ep - Ep/2 lay in the ranges 90-190 mV and 80-170 mV in acidic media at the Pt and CPE electrodes, respectively. For a reversible charge transfer, Ep - Ep/2 should be around 60 mV (Ep - Ep/2 = 47.7 mV/αna at 298 K for an irreversible transfer). Hence, it may be concluded that the oxidation of the fragrances is an irreversible charge transfer at both the Pt and CPE electrodes in acidic media. At pH below 6.14, only one one-electron peak was observed, since the second one-electron step is obscured by hydrogen evolution. α-Asarone and eugenol undergo oxidation in two one-electron steps, which are observed at pH above 6.83 and are shown in Table 3.
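The Ep - Ep/2 criterion used above can be turned into a quick classification step. In the sketch below, the separations are hypothetical values within the 80-190 mV range reported; the 47.7/(αna) mV expression applies to a totally irreversible wave at 298 K, which is why the observed separations, well above the ~60 mV expected for a reversible transfer, point to irreversible charge transfer.

```python
# Reversibility check from the peak/half-peak separation at 298 K.
# For a reversible wave Ep - Ep/2 is about 60 mV (56.5/n mV); the 47.7/(alpha*n_a) mV
# relation quoted in the text applies to a totally irreversible wave.
def alpha_n_from_separation(delta_mV):
    """Estimate alpha*n_a for an irreversible wave from Ep - Ep/2 (in mV)."""
    return 47.7 / delta_mV

# Hypothetical separations (mV) within the 80-190 mV range reported above.
for delta in (90, 130, 170):
    print(f"Ep - Ep/2 = {delta} mV  ->  alpha*n_a ≈ {alpha_n_from_separation(delta):.2f}"
          "  (far above the ~60 mV of a reversible transfer: irreversible)")
```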
The effect of the scan rate on the electrooxidation of isoeugenol was examined at pH = 6.14 over the range 10 mV/s to 800 mV/s. In this case the oxidative peak current was proportional to the square root of the scan rate on both the Pt and CPE electrodes (Figure 8).
Structure and Reactivity
The two linear portions [plotted for pH < (pK - 1) and pH > (pK + 1)] intersect at a pH value corresponding to pK. The E1/2 and pH values were input into the computer and, using simple regression analysis, the best two equations were found; these were solved to find the pK values. The E1/2 vs. pH plots of α-asarone on the Pt (pK = 4.60) and CPE (pK = 4.88) electrodes are given in Figure 9. By taking the substituent constant (δH) of trans-anethole as 0, the other substituent constants (δx) of the fragrances, calculated from pKH - pKx, are -0.30, -0.40, -0.57, -2.0 and -1.0 for eugenol, α-asarone, safrole, estragole and isoeugenol, respectively. Most of the data may be correlated by a modified Hammett equation, E1/2 = ρδx, where ρ (-1.15) is a voltammetric reaction constant. The Hammett δ-ρ linear free-energy relationship is useful for evaluating substituent effects in a system. The rate of oxidation is greatly increased by the electron-donating substituents (-OH and -OMe).
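The pK extraction described above amounts to fitting the two linear portions of the E1/2 vs. pH plot and solving for their intersection. The data points below are invented, and numpy.polyfit stands in for "the simple regression method"; only the procedure mirrors the text.

```python
import numpy as np

# Hypothetical E1/2 (V) vs. pH data with a change of slope near the pK.
acid_branch  = [(2.9, 1.20), (3.4, 1.19), (3.9, 1.18), (4.4, 1.17)]   # pH < pK - 1
basic_branch = [(5.4, 1.10), (6.1, 1.05), (6.8, 1.00), (7.5, 0.95)]   # pH > pK + 1

def fit_line(points):
    x, y = zip(*points)
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

(m1, b1), (m2, b2) = fit_line(acid_branch), fit_line(basic_branch)
pK = (b2 - b1) / (m1 - m2)   # pH at which the two fitted lines intersect
print(f"estimated pK ≈ {pK:.2f}")
```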
The positive potential order is estragole > safrole > eugenol, because -OH is a more strongly activating group than -OMe in the para position. However, the overall rate enhancement arises from the sum of inductive and resonance effects. Therefore, the 4-propenylbenzenes (isoeugenol, α-asarone and trans-anethole) show both inductive and resonance effects. For the same substituents in the 4-allyl (eugenol) and 4-propenyl (isoeugenol) compounds, through-resonance in isoeugenol makes the reaction site electron-rich; thus, the potential of isoeugenol shifts less positively than that of eugenol. Likewise, for the same substituents in trans-anethole and estragole, the potential of trans-anethole shifts less positively than that of estragole.
CONCLUSION
The allyl and propenyl derivatives of phenol and phenol ethers studied here show similar irreversible oxidation potentials, and these potentials depend closely on structural factors, reflecting the combined influence of the various electron-donating groups and of conjugation at both platinum and carbon paste electrodes.
From Figure 8(a), good linearity was observed, the regression equations being y = 11.4x + 70.3 (correlation coefficient r = 0.9900) for the Pt electrode and y = 2.45x - 5.27 (r = 0.9961) for the CPE. Under these conditions the currents were diffusion controlled. The relationship between peak potential and the logarithm of the scan rate (Figure 8(b); y = 0.14x + 0.39, r = 0.9906 for the Pt electrode; y = 0.13x + 0.55, r = 0.9950 for the CPE) can be used to roughly estimate the number of electrons involved in the catalytic oxidation.
Figure 8. Magnitude of the peak current for isoeugenol oxidation as a function of the square root of the scan rate on Pt (▼) and CPE (•) electrodes (a); and peak potentials of isoeugenol oxidation as a function of the logarithm of the scan rate on Pt (▼) and CPE (•) electrodes (b).
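The Figure 8(a)-type analysis is a linear regression of peak current on the square root of the scan rate. In the sketch below the peak currents are hypothetical values generated to be consistent with the Pt regression quoted above (y = 11.4x + 70.3); a correlation coefficient close to 1 supports the diffusion-controlled conclusion.

```python
import numpy as np

# Hypothetical peak currents (µA) at the scan rates used above (mV/s).
scan_rates = np.array([10, 25, 50, 100, 200, 400, 800], dtype=float)
peak_current = np.array([106, 127, 151, 184, 232, 298, 392], dtype=float)

x = np.sqrt(scan_rates)
slope, intercept = np.polyfit(x, peak_current, 1)
r = np.corrcoef(x, peak_current)[0, 1]
print(f"i_p = {slope:.1f} * sqrt(v) + {intercept:.1f},  r = {r:.4f}")
# r close to 1 indicates a diffusion-controlled process, as concluded for isoeugenol.
```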
Figure 9. The relation between E1/2 and pH for α-asarone on Pt (♦) and CPE (•) electrodes.
These reasons explain why aqueous-organic solvents are more suitable for the oxidation of the allyl- and propenylbenzenes. The solubilities and specific resistances of Bu4NBF4/CH3CN (s = 70 g/100 ml, ρ = 37 Ω m) and Bu4NClO4/CH3CN (s = 71 g/100 ml, ρ = 31 Ω m) are very close; therefore, the Ep and ip of the fragrances are very similar in these two electrolytes. That the cation of the supporting electrolyte significantly influences the Ep and ip of safrole, estragole and eugenol is confirmed by the results listed in Table 1: the Ep is 1.51 V in Bu4NClO4 and 1.49 V in Et4NClO4, but the ip in CH3CN containing Bu4NClO4 (57.2 μA) is 1.4 times that in Et4NClO4 (41.7 μA).
Table 3. Comparative linear sweep voltammetric behavior of fragrances in Britton-Robinson buffer on platinum (Pt) and carbon paste electrodes (CPE), where E p is the peak potential and E p/2 is the half-peak potential | 3,078.6 | 2015-03-27T00:00:00.000 | [
"Chemistry"
] |
Failure Modes and Resistance of Perforated Steel Rib Shear Connectors under Uplift Forces
In recent years, there has been a rapid increase in the application of perforated steel rib shear connectors in steel and concrete composite structures. The connectors must not only ensure shear transfer but also provide sufficient uplift resistance. The shear behavior of the connectors has been extensively investigated. However, studies on uplift resistance are lacking so far. Therefore, three push-out test specimens were tested to investigate the shear and tension behavior of perforated L-shaped and plain steel rib shear connectors. The failure modes of the connectors were analyzed, and analytical models for the determination of uplift resistance were derived based on the test results. The results showed that the ductility of perforated steel rib shear connectors under uplift force was smaller than that under shear force, and more severe concrete damage surrounding the rib and larger bending deformation of the transverse steel bar were observed. The rib flange of the L-shaped perforated rib has a significant contribution to the uplift resistance. It is suggested to increase the rib height of the L-shaped rib to avoid the horizontal crack at the height of the rib flange. The validity of the proposed analytical models was confirmed by comparing the failure modes and capacities of the specimens.
Introduction
In steel and concrete composite structures, various types of shear connectors, such as headed stud shear connectors [1] and perforated steel rib shear connectors [2], are arranged to ensure composite action between the steel and concrete components. A perforated steel rib shear connector is a thin steel plate with a number of uniformly spaced holes. After the holes in the perforated rib are filled with concrete, the concrete dowel can resist longitudinal shear and prevent uplift separation between the steel and concrete components [2][3][4][5]. Though the headed stud shear connector is still the most widely used shear connector, there is an increase in applications of perforated steel rib shear connectors in composite structures [5][6][7][8][9], owing to their ease of manufacture, excellent load bearing and deformation properties, superior antifatigue performance, and usefulness in slender concrete slabs [10].
In the literature, the shear behavior of perforated steel rib shear connectors has been studied extensively [4,11,12] since the earlier research work of Leonhardt et al. [2], and several formulas for estimating the shear resistance of perforated steel rib shear connectors have been proposed for the shear design of the connectors in composite structures. The research studies on perforated steel rib shear connectors have shown that the shear behavior is significantly influenced by a number of parameters, including the hole diameter, the number of holes, the compressive strength of concrete, the thickness, length and height of the steel plate, the configuration of the transverse steel bar in the hole, and the dimensions of the concrete slab [4]. Besides the traditional perforated steel rib with closed recesses, rib connectors with open recesses have been developed in recent times. For rib connectors with open puzzle-shaped geometry, intense research for the assessment of their shear behavior was performed in [13,14] for steel failure, in [15][16][17] for concrete failure, and in [18] for fatigue failure. These experimental and theoretical studies led to the development of design principles for puzzle-shaped rib shear connectors under shear forces.
With the increase of composite structures and the widespread use of perforated steel rib shear connectors, it is becoming more common for connectors to be subjected to uplift forces. The behavior of composite structures would be significantly affected by the performance of perforated steel rib shear connectors under uplift forces. As shown in Figure 1(a), the connectors in steel-concrete composite beams exposed to bending should provide sufficient resistance against the uplift of the concrete slab. In composite slabs, the perforated steel rib shear connector is subjected to both shear and uplift forces, especially when a shear crack occurs in the slab and results in vertical relative slip between steel ribs and concrete, as shown in Figure 1(b). In integral steel bridges, perforated steel rib shear connectors were reported to be installed at the end diaphragm of the steel girder to improve the load- and crack-resisting capacity of the girder-abutment joints (Figure 1(c)) [19]. In Figure 1(d), the out-of-plane deformation of the steel tube in concrete-filled steel tubular columns could be restrained by perforated steel rib shear connectors, which thus were in tension [20]. In such composite structures, the perforated steel rib shear connectors must not only ensure shear transfer but also provide sufficient uplift resistance. According to the design provisions for composite beams in Eurocode 4 [21], shear connectors should be designed to resist a nominal ultimate tensile force, perpendicular to the plane of the steel flange, of at least 0.1 times the design ultimate shear resistance of the connectors. While headed stud shear connectors are generally assumed to provide sufficient resistance to uplift [21], there are no official guidelines for the design of the uplift resistance of rib shear connectors in the codes. Only a few tests on puzzle-shaped rib shear connectors with open recesses were carried out to investigate the resistance of rib shear connectors under tension [22][23][24][25] or combined shear and tension [26,27]. However, studies on rib shear connectors with closed recesses (traditional perforated steel ribs) are lacking so far. Therefore, the focus of this study is on the behavior of perforated steel rib shear connectors subjected to uplift forces. The perforated steel ribs can act not only as shear connectors but also as stiffeners for the steel plate in composite slabs. The perforated L-shaped ribs proposed by Xu et al. [5] and used in composite slabs, as shown in Figure 2, have been proved to reduce the shear crack risk of composite bridge deck slabs and to have a larger contribution to the load-carrying capacities of composite slabs than plain ribs. However, it is still not clear how perforated ribs resist the vertical relative slip between steel ribs and concrete after a shear crack has occurred. In this study, perforated L-shaped and plain steel rib shear connectors were tested under uplift forces. The failure modes of the two types of rib shear connectors were analyzed, and analytical models for the determination of the uplift resistance were derived based on the test results.
Push-Out Tests
To investigate the structural behavior of perforated steel rib shear connectors, three push-out test specimens were fabricated and tested with no redundancy, among which two specimens reflecting the real state of perforated rib in composite slabs were designed and tested for the uplift resistance.
The failure modes and load-slip curves of the specimens are reported. Table 1 summarizes the geometric properties of the specimens together with the properties of the materials used in this study. d and d s are, respectively, the diameter of the hole and of the steel bar in the hole, and the yield strength of the bar (f y,bar ) is 450 MPa. The yield strength of the steel rib (f y,rib ) is 365 MPa. The concrete cube and cylinder compressive strengths (f cu and f c ′ ), tensile strength (f ct ), and modulus of elasticity (E c ) on the day of the test are presented.
Test Programs.
The specimens were designed based on the design of the composite slabs reported in [5]. Their details are shown in Figure 3. Photos of the steel members of the specimens are shown in Figure 4. The S-1 specimen is a traditional push-out test specimen. Two steel plates of 6 mm thickness, 350 mm height, and 100 mm width were welded in the vertical direction to a steel plate of 20 mm thickness. The thicker steel plate was subjected to a vertical load which produces a shear load along the interface between the concrete slab and the steel member. At the end of the steel plate, plastic foam blocks were set to eliminate the end-bearing effect of concrete. The T-1 and T-2 specimens were designed to study the uplift behavior of the perforated rib in steel and concrete composite slabs. One steel plate of 6 mm thickness, 100 mm height, and 180 mm width was welded in the horizontal direction to a steel plate of 20 mm thickness. Through two vertical steel plates, vertical loads were applied symmetrically to the two ends of the horizontal steel plate. The S-1 and T-1 specimens were fabricated with plain steel ribs, while an L-shaped steel rib was used in the T-2 specimen. The width of the L-shaped steel rib flange was 30 mm, as shown in Figure 3(c). All steel members were cast into concrete blocks with a thickness of 150 mm, which is the same as that of the bridge deck slab reported by Xu et al. [5]. Referring to Figure 5, in all the specimens, transverse steel bars of 14 mm diameter were placed through the center of the holes. Reinforcing bars of 10 mm diameter were arranged above the steel ribs with a spacing of 100 mm. Additional rebars were placed under the transverse bars in the T-type specimens to avoid premature bending failure of the specimens.
These rebars were placed more than 100 mm away from the ribs in order to minimize their effects on the failure modes of the connectors.
As shown in Figure 6, the test setup is similar to that used in earlier push-out tests [4]. Load was applied using a hydraulic testing machine of 2000 kN capacity.
The loading procedure is shown in Figure 7. P u is the estimated ultimate load, which was obtained through preliminary calculation before the tests. It was initially taken as 380 kN, and then adjusted to 360 kN after the test of the S-1 specimen. After preloading, five low-level and five high-level load cycles were applied before the specimens were loaded to failure or until the slip reached 20 cm. The loading speed was controlled under 30 kN/min or 1 mm/min. Figure 8 presents the arrangement of the measuring system. D1 to D4 are displacement transducers (LVDTs) used to measure the vertical relative slip between the concrete slab and the steel member. Strain gauges (G1 to G8) were attached to the steel members and bars. In this figure, the long side of the black rectangle representing the strain gauge indicates the direction of the strain gauge. Data were acquired using a personal computer-based data acquisition system and were continuously sampled at a rate of 1 Hz during the test.
Failure Modes.
The failure of the S-1 specimen was triggered by longitudinal splitting of the concrete slab, followed by crushing of concrete at the bottom of the specimen, as shown in Figure 9(a). The concrete cracks were first observed at a load of about 570 kN, and then they propagated and the crack width increased with the increase of the load and the number of load cycles. The two T-type specimens exhibited similar failure modes. Approaching the ultimate load, transverse concrete cracks were observed on both sides of the bottom surface of the concrete block, as shown in Figure 9. At the end of the test, concrete cones with a height of about 5 cm between the two transverse cracks were punched out along with the steel member. In Figure 10, the sectional views through both test specimens show the punching cracks and concrete cones clearly. In the T-2 specimen with the L-shaped rib, horizontal cracks were initiated near the short leg of the L-shaped rib, while during the test of the T-1 specimen no such crack was observed.
The concrete slabs were opened after the experiment. Considerable bending and shear deformation of the transverse steel bars was observed, as shown in Figure 11. At the locations of the perforations, the deformation of the steel bars was the largest, which resulted in V-shaped bars. The bar in the T-1 specimen was deformed almost symmetrically, while in the T-2 specimen, the right part of the bar, which was on the same side as the short leg of the L-shaped rib, had smaller bending deformation.
As shown in Figure 12, local plastic deformation occurred on the top edge of the holes in the T-1 and T-2 specimens. Meanwhile, in the T-2 specimen, the rib flange bent up visibly under the pressure from the concrete below. Figure 13 shows the load-relative slip curves. It is noted that for the S-1 specimen, the load for one rib connector is half of the applied load in the test. At the early stage of loading, there was almost no relative slip between the steel plate and the concrete block. The shear force was mainly carried by the initial interface bond. Then, with the initiation and development of the relative slip, the interface bond failed and the bond stress gradually reduced. At the same time, the dowels through the holes in the ribs started to resist the increasing external force. The S-1 specimen presents the largest ductility. After reaching the peak point, the load in the T-type specimens decreased rapidly, while in the S-1 specimen the load gradually dropped by only 10%. Table 2 summarizes the residual load capacity F r , the ultimate load F u , and the relative ultimate slip S u . F r is defined as the load value at a relative slip of 15 mm. F r of the S-1 specimen is more than 2 times as large as those of the T-type specimens. The T-1 specimen has an ultimate strength 12.0% larger than that of the T-2 specimen. S u of the S-1 specimen is also the largest. This indicates that when the perforated rib is subjected to uplift force, the capacity and ductility are smaller than those under shear force.
Strain in Steel Members and Transverse Steel Bars.
In Figure 14, the results of the vertical strain gauges located 100 mm above or below the rib hole in the S-1 specimen are plotted against the relative slip. The value of the strain gauge above the hole was negative and increased with the relative slip. It indicates that the upper part of the rib was in compression. The compressive stress was applied to the rib through the upper part of the rib hole by the concrete. The value of the strain gauge below the hole was also negative when the slip was smaller than 0.1 mm. It then became a positive value at a slip of about 0.6 mm. The reversal in the sign of strain was thought to occur when the natural bond between concrete and steel rib failed. The large difference between the two strain gauges implied that the shear force had been transferred to the concrete through the rib hole. The results of the vertical strain gauges near the rib holes in the T-1 and T-2 specimens are shown in Figure 15. It was found that the tensile strain in the rib increased only slightly at low load levels, when the natural bond between concrete and steel rib mainly resisted the uplift forces, while after the natural bond failed, the strain gradually increased with the load. According to these results, the initial slip loads were evaluated to be 178 kN and 130 kN for the T-1 and T-2 specimens, respectively. Figure 16 shows the horizontal strain in the flange of the steel rib in the T-2 specimen. The upside face of the flange was in tension when the load was larger than the initial bond load. When the rib flange was pulled down by the rib web, its deformation was resisted by the surrounding concrete and it would be in bending and tension. The flexural deformation of the rib flange, which was also observed in Figure 12, implied that the rib flange has a significant contribution to the uplift resistance of the L-shaped shear connector.
When perforated ribs transfer the longitudinal and vertical shear forces between steel and concrete members, the transverse bars deform with the concrete around the holes, according to Zheng et al. [4]. The deformation of the transverse bars can therefore be taken as an indication of the shear forces transferred by the perforated ribs. Figures 17 and 18 show the results of the strain gauges in the transverse bars in the S-1 specimen. It can be seen that the transverse bar was in tension. The no. 3 strain gauge is closer to the rib hole, and a larger tensile strain was observed there. It can be concluded that the tensile force and bending moment decreased along the bar from the rib hole to the bar end. The force exerted on the bar by the concrete in the hole was balanced by the bearing force and bond stress from the concrete around the bar.
The results of the no. 3 strain gauges in the T-1 and T-2 specimens are shown in Figure 18. The values of strain in the transverse bars in these two specimens were more than 4 times larger than that in the S-1 specimen. The transverse bars had almost or already yielded at a slip smaller than 3 mm, which indicates that the transverse bars in the T-1 and T-2 specimens had larger bending deformation than those in the S-1 specimen. The possible reason is that the concrete layer below the transverse bar is thicker in the S-1 specimen and thus its constraint on the overall bending deformation of the bar is stronger.
In Figure 19, results of no. 5 and no. 6 strain gauges in transverse bars in T-type specimens are presented. It was found that the bars were in tension and the tensile forces increased with the relative slip, which was also reported in [26,27]. Compared with the results of strain gauges at the same position in S-1 specimen, it is clear that the bars in T-type specimens have larger deformation.
Analytical Models for Predicting Uplift Resistance
The uplift resistance of rib connectors is affected by the performance of the concrete, the steel rib, and the steel bar. In Figure 20, the steel rib, the concrete, and the bar in the rib hole are detached from the specimen, and the reactions that the remaining concrete and steel bar exert on this free body are shown. The tensile force in the web of the steel rib (F w ) is in vertical equilibrium with four forces: the shear force in the steel bar (F hs ), the shear force in the concrete (F hc ), the bond and friction at the interface between concrete and steel rib (F b ), and the vertical force applied to the flange of the L-shaped rib (F f ). These forces are all resultants of the stresses at the shear connector surface. The equilibrium equation is written as follows:
F w = F hs + F hc + F b + F f
Tensile Strength of Steel Rib Web.
The tensile failure of the steel rib web under F w is ductile. The strength of the web can be evaluated by the following equation:
F w,u = f y,rib A w
where f y,rib is the yield strength of the steel rib and A w is the cross-sectional area of the web of the steel rib. For the T-type specimens with steel ribs of 180 mm width (a rib ) and 6 mm thickness (t rib ), the tensile strength was calculated to be 394.2 kN.
Resistance of Concrete Dowel and Transverse Steel Bar.
Studies on perforated rib shear connectors under shear load have shown that the shear capacity of the connector is mainly contributed by the concrete dowel, the chemical bond, and the bar in the hole, and that the slip at the ultimate load is generally larger than 1 mm [12]. In this study, equation (3) proposed by Zheng et al. [4] is adopted to estimate the shear capacity of the perforated shear connector, where α A is the effective shear area ratio of the concrete dowel per hole, reflecting the confinement effect of the bar on the concrete; A h and A bar are the areas of the rib hole and of the steel bar (mm 2 ), respectively; f c ′ is the concrete cylinder compressive strength (MPa); and f y,bar is the yield strength of the transverse bar (MPa). As shown in Table 3, the shear capacity is calculated to be 204.8 kN, which is smaller than the experimental uplift resistance. The strength is underestimated because the contribution of the chemical bonding effect to the shear resistance is ignored in equation (3).
Equation (3) is based on the results of traditional push-out tests. In those tests, the steel bars in the rib holes failed almost purely in shear, with limited bending deformation [4]. However, in the T-type specimens, the bars had much larger bending deformation. This implies that, as shown in Figure 21, the moment (M hs ) in bar section A is larger in the T-type specimens, and thus the normal stress is larger. Based on the von Mises yield criterion, the steel bar yields under a smaller shear stress when the normal stress is larger. This means the shear resistance of the bars in the hole in the T-type specimens is smaller than that in the S-type specimen.
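The von Mises argument above can be made quantitative. For a point carrying a normal stress σ and a shear stress τ, the criterion σ² + 3τ² = f_y² gives the shear stress at yield as τ_y = sqrt((f_y² − σ²)/3). The sketch below illustrates how the shear stress available in the bar drops as the bending-induced normal stress grows; the stress values are illustrative only, not computed from the test data.

```python
import math

def shear_stress_at_yield(f_y: float, sigma: float) -> float:
    """Shear stress that brings the bar to yield under the von Mises
    criterion sigma^2 + 3*tau^2 = f_y^2, for a coexisting normal stress sigma (MPa)."""
    if abs(sigma) >= f_y:
        return 0.0                      # already yielded by normal stress alone
    return math.sqrt((f_y**2 - sigma**2) / 3.0)

f_y_bar = 450.0                         # yield strength of the transverse bar (MPa)
for sigma in (0.0, 150.0, 300.0, 400.0):
    tau = shear_stress_at_yield(f_y_bar, sigma)
    print(f"sigma = {sigma:5.1f} MPa -> tau_yield = {tau:6.1f} MPa")
```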
Bond Strength.
For steel and concrete composite constructions without shear connectors, natural bonding, friction, and mechanical interlocking actions transfer the stress between the steel and concrete members. This stress is referred to as bond stress. The bond stress capacity (τ u ) is reached with the rupture of the chemical adherence between the steel and concrete. Then, the bond stress reduces rapidly to a value that remains relatively stable after a certain slip.
This value is referred to as the residual bond stress. Since 1962, many studies [28][29][30] have addressed the bond stress capacity of steel and concrete composite constructions. The ratio of the encased steel section to the concrete encasement (ρ), the thickness of the concrete cover, and the length of the bond stress region (L b ) were deemed to affect the bond stress capacity [31]. The T- and S-type specimens have large differences in the values of ρ and L b . The bond stress capacity was therefore estimated through equation (5) by Roeder et al. [31], where τ u is the bond stress capacity (MPa); ρ is the ratio of the encased steel section to the concrete encasement and is equal to (A s /A t ); and d is the depth of the steel section (mm). Then, the strength of the bond (F b,u ) is written as
F b,u = τ u A b
where A b is the contact area between the steel rib and the concrete. The bond fails at a small slip. The ultimate slip at the loaded end of the specimens was evaluated according to Yang et al. [32]. The calculated results are presented in Table 4. The calculated bond stress capacity of the S-1 specimen is smaller than those of the T-type specimens due to the larger contact area. These results are consistent with the experimental ones. The calculated S bu is about 20% of S u , which indicates that the bond had failed before the specimens reached their ultimate strengths. Therefore, the value of the bond stress at the failure of the specimen was assumed to be equal to the residual bond stress. Only a few studies have been performed to investigate the residual bond stress. In this study, it was taken as 63% of the bond stress capacity according to the test results of 16 push-out specimens by Yang et al. [32]. Then, the residual bond strength (F b,r ) was estimated and is presented in Table 4.
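The bond-related quantities above follow directly once τ u is known. The sketch below assumes τ u has already been obtained from equation (5) (Roeder et al. [31], not reproduced here) and simply applies F_b,u = τ_u·A_b with the 63% residual fraction from Yang et al. [32]; the numerical inputs are placeholders, not the values of this study.

```python
def bond_strengths(tau_u_mpa: float, contact_area_mm2: float, residual_fraction: float = 0.63):
    """Ultimate and residual bond strength in kN.

    F_b,u = tau_u * A_b;  F_b,r = residual_fraction * F_b,u
    (residual fraction of 0.63 follows Yang et al. [32])."""
    f_bu = tau_u_mpa * contact_area_mm2 / 1000.0   # N -> kN
    f_br = residual_fraction * f_bu
    return f_bu, f_br

# Placeholder inputs for illustration only:
tau_u = 0.30            # MPa, assumed bond stress capacity from equation (5)
a_b = 2 * 180 * 150     # mm^2, assumed rib-concrete contact area (both faces)
print(bond_strengths(tau_u, a_b))   # -> (16.2, 10.206) kN, illustrative
```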
Contribution of Rib Flange to the Load Capacity.
The force at the flange of the rib (F f ) causes a bending moment at the fixed end of the flange. When the force is large enough, a plastic hinge forms; this force is referred to as F frib,u . Meanwhile, the equal and opposite forces exerted by the rib flange on the concrete could lead to local crushing or tensile failure of the concrete. Thus, the following relationship is obtained:
F f,u = min(F frib,u , F fc,u )
where F f,u is the ultimate load at the rib flange and F fc,u is the strength of the concrete surrounding the rib flange. The rib flange embedded in concrete is subjected to complex loading, as shown in Figure 22. The shear force at the left side of the flange is in vertical equilibrium with the reaction forces from the concrete. For rotational equilibrium, a moment at the left side of the flange is induced. Here, the concrete bearing stress distribution acting on the rib flange was simply assumed to be triangular, with its width equal to the width of the rib flange, as shown in Figure 22.
Then, the capacity of the rib flange (F frib,u ) can be calculated from M frib,u and b f , where M frib,u is the ultimate sectional bending moment of the rib flange (N·mm) and b f is the width of the rib flange (mm). The local bearing strength of the concrete under the rib flange was calculated according to the ACI standard [33] from the net bearing area A b,0 of the rib flange on the concrete. After the tension force is transferred by the shear connector to the concrete, it induces a shear force and a bending moment in the concrete transverse sections. In Figure 22, attention is paid to section B, the weakest concrete section, which passes through the right end of the rib flange. Section B has a width of 200 mm (a c ) and a height of 100 mm (h rib ), and the internal forces in this section consist of a force equal to (F fc,2 /2) and a moment equal to (b f ·F fc,2 /2). It was assumed that a concrete crack occurs when the maximum tensile stress reaches the concrete tensile strength. Then, the concrete cracking load is evaluated through equation (11), where a c is the width of section B, h rib is the rib height, and M fc,u is the ultimate sectional bending moment of section B. The calculated results are presented in Table 5.
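For orientation, the sketch below evaluates a simplified bearing check of the kind referenced above. It assumes the basic ACI-style bearing expression 0.85·f_c'·A_b,0 and deliberately omits the confinement amplification factor, so it is only a rough lower-bound estimate and not the exact equation used in the paper; the flange dimensions and concrete strength are placeholders.

```python
def bearing_strength_kN(fc_prime_mpa: float, bearing_area_mm2: float) -> float:
    """Simplified bearing strength of concrete, 0.85 * f_c' * A_b,0, in kN.

    Omits the sqrt(A2/A1) confinement factor allowed by ACI; intended
    only as an order-of-magnitude check, not a design value."""
    return 0.85 * fc_prime_mpa * bearing_area_mm2 / 1000.0

# Placeholder inputs: f_c' = 40 MPa, flange bearing area 30 mm x 180 mm
print(round(bearing_strength_kN(40.0, 30.0 * 180.0), 1))   # -> 183.6 kN, illustrative
```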
Punching Strength of Concrete.
The shear connector exerts on the remaining concrete and steel bar forces equal and opposite to the forces exerted by the remaining concrete and steel bar on the shear connector. The forces in the concrete lead to punching shear failure. A concrete cone is punched out, and its geometry reflects the mechanical characteristics of the shear connector and was reported to be influenced by the steel bar diameter [24]. Concrete cones developed on both sides of the shear connectors in the T-type specimens, as shown in Figure 23. The geometry of the concrete cones was carefully measured and is presented in Table 6 with the aim of estimating the load at which the concrete failure prism forms. The height of the cones is about half of the height of the steel rib, while the inclination of the shear cracks ranges from 11.1° to 22.3°. The strength was estimated by referring to the calculation method for the punching shear strength of concrete slabs. Steel reinforcement was not arranged in the concrete cones. Therefore, the concrete cones were assumed to form when the tensile stress of the concrete along the cone surface reached its tensile strength. Then, equation (12) is derived, where A p is the projected area of the concrete cone, B p is the width of the concrete cone, L p is the length of the concrete cone, and θ 1 is the concrete cone angle.
Discussions
The calculated results are summarized in Table 7. The failure modes of the specimens are discussed by comparing the calculated and experimental results.
The S-1 specimen, which failed in a ductile mode, is a traditional push-out test specimen. Its failure mode is similar to that reported in [12]. Its shear capacity can be written as the sum of the strength of the concrete dowel and transverse steel bar in the hole and the residual bond strength, as shown in equation (13). The ratio of the predicted strength to the tested one is 1.09, which shows a good level of accuracy of the prediction method.
The T-1 specimen failed in a brittle mode. The estimated F c,u for the T-1 specimen is close to the experimental ultimate strength. It seems that the punching shear failure of concrete triggered the failure of the specimen, though the calculated (F h,u + F b,r ) is smaller than F c,u . Considering that equation (13) for the traditional push-out test specimen might not apply to the T-type specimens, the capacity of the T-1 specimen was calculated through equation (14). It can be seen that the prediction is good.
Table 4: Calculation results of bond strength.
The T-2 specimen also failed in a brittle mode. F c,u for the T-2 specimen is about 72.7% of that for the T-1 specimen.
In the T-2 specimen, F c,u was thought to be affected by the horizontal crack initiated near the rib flange. F fc,u1 and F fc,u2 are larger than F frib,u , which implies that when the plastic hinge formed, the concrete under the rib flange would not crush or crack. The contribution of the rib flange to the uplift resistance, F f,u , was therefore equal to F frib,u and remained constant after the plastic hinge formed. The capacity of the T-2 specimen was calculated through equation (15). As shown in Table 7, a good level of accuracy of the prediction method was obtained.
In summary, there are three potential failure modes of the perforated rib shear connector under uplift forces, as shown in Figure 24. The first is yielding of the steel rib web. The second is shear failure of the concrete dowel and transverse steel bar in the hole, which is a ductile failure mode similar to that observed when the shear connector is under shear forces. The last is punching shear failure of the concrete under the steel rib, which is a brittle failure mode. The uplift resistance of the shear connector is the smallest of these three failure capacities, as given by equation (16) and illustrated in the sketch below. In this study, the horizontal crack at the height of the rib flange has a significant unfavorable effect on the uplift resistance of the L-shaped rib shear connector. Thus, a proper design of the rib flange is required to avoid premature cracks in the concrete, and the rib flange should be designed to meet inequality (17). According to equation (11), F fc,u2 is in proportion to the height of the rib and inversely proportional to the width of the rib flange. Considering that the bond strength and the concrete punching shear strength also increase with the height of the rib, increasing the rib height is recommended to avoid the horizontal crack.
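The governing-mode logic can be written compactly. The sketch below assumes, following the discussion above, that the three candidate capacities are the web tensile strength, the combined dowel/bar shear plus residual bond (plus the flange contribution for an L-shaped rib), and the concrete punching strength; this is a paraphrase of equations (13)-(16), whose exact expressions are not reproduced here, and all numbers are placeholders.

```python
def uplift_resistance(f_w_u, f_h_u, f_b_r, f_c_u, f_f_u=0.0):
    """Return the governing failure mode and its capacity (kN).

    Candidate modes (paraphrasing equations (13)-(16)):
      - 'web yielding'      : F_w,u
      - 'dowel/bar shear'   : F_h,u + F_b,r (+ F_f,u for an L-shaped rib)
      - 'concrete punching' : F_c,u
    """
    modes = {
        "web yielding": f_w_u,
        "dowel/bar shear": f_h_u + f_b_r + f_f_u,
        "concrete punching": f_c_u,
    }
    governing = min(modes, key=modes.get)
    return governing, modes[governing]

# Placeholder capacities in kN (not the values of this study):
print(uplift_resistance(f_w_u=394.2, f_h_u=204.8, f_b_r=10.0, f_c_u=330.0, f_f_u=25.0))
```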
Conclusions
This paper presented the test results of 3 push-out specimens for perforated steel rib shear connectors. Two of them were designed to reflect the real state of perforated steel rib shear connectors in steel and concrete composite slabs, with the aim of investigating the uplift behavior of the connectors. The failure modes of the rib shear connectors were analyzed, and analytical models for the determination of the uplift resistance were derived based on the test results. The following conclusions can be drawn: (1) Since the thickness of the concrete block supporting the steel bar in the hole of the perforated rib is small, the failure mode of the specimens under uplift force differs from that under shear force. The concrete below the bar was pushed out, and the transverse steel bars had much larger bending deformation. Though the ultimate strength of the perforated rib in the composite slab under uplift force was close to that under shear force, the ductility and residual capacity were smaller.
(2) When the L-shaped perforated rib was under uplift force, cracks developed at the height of the short leg of the L-shaped rib and propagated horizontally. As a result, the L-shaped rib shear connector had an ultimate strength 10.7% smaller than that of the plain rib shear connector. Our study contributes to the understanding of the behavior of perforated steel rib shear connectors. The proposed equations for the uplift resistance of the connector are beneficial to the design and maintenance of composite structures. The experimental results of this paper represent a relatively small database. In the future, additional experimental and theoretical studies will be conducted to validate the proposed models.
Notation
a c : Width of the weakest concrete transverse section
a rib : Width of the web of steel rib
A b : Contact area between the steel rib and concrete
A bar : Area of the steel bar
A b,0 : Net bearing area of the rib flange on the concrete
A h : Area of the steel rib hole
A p : Projected area of the concrete cone
A s : Area of the encased steel section
A t : Area of total cross section of the specimen
A w : Cross-sectional area of the web of steel rib
b f : Width of the flange of the L-shaped rib
B p : Width of concrete cone
d: Depth of steel section
d h : Diameter of the rib hole
d s : Diameter of the transverse bar
E c : Elastic modulus of concrete
f c ′: Concrete cylinder compressive strength
f ct : Concrete tensile strength
f cu : Concrete cube compressive strength
f y,bar : Yield strength of the transverse bar
f y,rib : Yield strength of the steel rib
F b : Bond and friction
F b,r : Residual bond strength
F b,u : Ultimate bond strength
F f,u : Maximum value of F f
F hc : Shear force in the concrete dowel
F hs : Shear force in the transverse bar
F r : Residual load capacity
F w : Tensile force in the rib web
F w,u : Maximum value of F w
F u : Ultimate load
F u,CAL : Calculated load capacity
h rib : Rib height
L b : Length of steel section or bond stress interface length
L p : Length of concrete cone
M f,u : Ultimate sectional bending moment of the flange of the L-shaped rib
M fc,u : Ultimate sectional bending moment of the weakest concrete transverse section
M hs : Moment in the transverse bar
P u : Estimated ultimate load
S bu : Ultimate slip when the bond fails
S u : Ultimate slip when the specimen fails
t rib : Thickness of the steel rib
W fc : Proportional limit force
α A : Effective shear area ratio of concrete dowel per hole
θ 1 : Concrete cone angle
ρ: A t /A c
σ fc : Compressive stress on the rib flange
τ u : Bond stress capacity
Data Availability
All data included in this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper. | 8,022 | 2019-03-06T00:00:00.000 | [
"Engineering"
] |
NRG1 is required for glucose repression of the SUC2 and GAL genes of Saccharomyces cerevisiae
Background Glucose repression of transcription in the yeast, Saccharomyces cerevisiae, has been shown to be controlled by several factors, including two repressors called Mig1 and Mig2. Past results suggest that other repressors may be involved in glucose repression. Results By a screen for factors that control transcription of the glucose-repressible SUC2 gene of S. cerevisiae, the NRG1 gene was identified. Analysis of an nrg1Δ mutant has demonstrated that mRNA levels are elevated at both the SUC2 and the GAL genes of S. cerevisiae when cells are grown under normally glucose-repressing conditions. In addition, genetic interactions have been detected between nrg1Δ and other factors that control SUC2 transcription. Conclusions The analysis of nrg1Δ demonstrates that Nrg1 plays a role in glucose repression of the SUC2 and GAL genes of S. cerevisiae. Thus, three repressors, Nrg1, Mig1, and Mig2, are involved as downstream targets of glucose signaling in S. cerevisiae.
Background
For the yeast Saccharomyces cerevisiae, glucose is the preferred carbon source. When glucose is present in the growth media, transcription of a large number of genes encoding products involved in the metabolism of alternative carbon sources is repressed (for reviews, see [1,2,3]). These genes include the GAL, SUC2, MAL and STA genes, required, respectively, for the utilization of galactose, sucrose/raffinose, maltose, and starch. At many of these genes, glucose repression is mediated, at least in part, by the glucose-dependent repressor Mig1, a zinc-finger protein that binds in vitro to DNA consensus sites consisting of a GC-rich core and flanking AT sequences [4,5]. Mig1 is thought to bind to several promoters, including GAL1, GAL4, SUC2 and MAL62, and to effect transcriptional repression by interacting with the co-repressor complex Ssn6-Tup1 [6,7,8]. Mig1's activity is regulated by phosphorylation and subcellular localization: in high glucose, Mig1 protein is hypophosphorylated and in the nucleus, where it can repress transcription; upon withdrawal of glucose, Mig1 is rapidly phosphorylated and transported into the cytoplasm [9]. This regulated phosphorylation requires the function of the Snf1/Snf4 kinase complex [10].
Deletion of MIG1, however, only partially relieves glucose repression at promoters such as SUC2, whereas deletion of either SSN6 or TUP1 completely abolishes glucose repression. Moreover, the STA1 gene of S. cerevisiae var. diastaticus, which is also repressed by glucose, is unaffected by mig1∆ [11]. Therefore, other proteins in addition to Mig1 are required for glucose repression. One of these proteins is Mig2, which shares sequence similarity with Mig1 in their zinc finger regions [12,13]. Genetic analysis suggests that Mig2 plays a minor role relative to Mig1.
Recently, a previously uncharacterized gene, NRG1 (Negative regulator of glucose-repressed genes), was shown to be required for glucose repression of the STA1 gene in S. cerevisiae var. diastaticus [11]. These studies demonstrated that LexA-Nrg1 behaves as a repressor of a reporter construct and that this repression is dependent on glucose, Ssn6, and Tup1. In addition, Nrg1 and Ssn6 interact with each other in two-hybrid and GST pull-down assays, indicating that Nrg1 may repress via the same pathway as Mig1. Consistent with these results, Nrg1 appears to bind to two sites within the STA1 promoter.
The SUC2 gene of S. cerevisiae has been extensively studied with respect to its glucose repression [1,2]. Glucose repression of SUC2 is mediated by Ssn6/Tup1 and SUC2 has two Mig1 binding sites in its regulatory region. Additionally, in high glucose its promoter is also occupied by positioned nucleosomes, which cause transcriptional repression themselves [14,15]. Derepression in low glucose is correlated with a loss of both Mig1-and nucleosome-mediated repression, although the precise relationship between the two pathways is not clear.
Genetic screens have identified a large number of genes, named SNF (Sucrose Non-Fermenting), that are required for derepression of SUC2 transcription in the absence of glucose [16,17,18]. Genetic analyses and subsequent studies have traditionally divided SNF genes into two groups. One group encodes the protein kinase Snf1 and its associated regulator Snf4, required to antagonize the repression caused by Mig1 [10,19]. The other group consists of members of the Swi/Snf complex, required to counter the repressive effects of chromatin by remodeling nucleosomes in an ATP-dependent manner (for review, see [20]). Suppressors of swi/snf mutations, such as spt6, do not suppress snf1∆ [21], and ssn6, a strong suppressor of snf1∆, only partially suppresses swi/snf mutations [22].
In this work, we report the identification of Nrg1 in a genetic screen for new regulators of SUC2 transcription. We show that Nrg1 plays a role in the glucose repression of SUC2 and GAL genes in S. cerevisiae. Thus, at these genes, Mig1, Mig2 and Nrg1 are partially redundant for mediating repression by glucose. Consistent with our findings, recent results have demonstrated an interaction between Snf1 and Nrg1 [23]. We also present experiments that test the genetic interactions between mig1∆, nrg1∆ and deletions of various genes encoding activators that function at the SUC2 promoter.
Isolation of a high-copy-number suppressor of snf2∆
The Swi/Snf complex is required for normal levels of expression of SUC2 when cells are grown in low glucose. To identify factors that might be functionally related to Swi/Snf, we screened for high-copy-number plasmids that could suppress a snf2∆ mutation (see Materials and Methods). To sensitize the screen, we used an allele of SUC2, SUC2-36, that allows an elevated level of SUC2 transcription in the absence of Swi/Snf [24]. The SUC2-36 mutation is a single base pair change, AT to GC, at position -401 relative to the SUC2 ATG. SUC2-36 strains still have a Raf− phenotype in a snf2∆ mutant.
To identify high-copy-number suppressor candidates, we used a 2µ circle library to transform the snf2∆ SUC2-36 strain FY1845 (Table 1) and screened 60,000 transformants for those with a Raf + phenotype. Eighty-two candidates were identified, 25 of which contained the SNF2 gene. Among the remaining plasmids, most conferred a weak Raf + phenotype. We focused on the candidate that conferred the strongest Raf + phenotype. This plasmid contained a chromosome IV genomic fragment that spans from within the NRG1 gene (open reading frame YDR043C) through the HEM12 gene (YDR047W). Subcloning experiments identified the partial NRG1 clone as the sequence responsible for suppression of snf2∆ and demonstrated that this suppression occurred in both SUC2-36 and SUC2 + genetic backgrounds (Figure 1).
NRG1 is predicted to encode a protein of 231 amino acids with two C 2 H 2 zinc fingers in the carboxyl terminus. Sequence analysis revealed that the 2µ plasmid that confers suppression of snf2∆ encodes just the amino terminal region of Nrg1, lacking the zinc fingers. To test if the complete NRG1 gene causes the same high copy number phenotype, we subcloned the complete NRG1 gene into a 2µ plasmid and tested it for suppression of snf2∆. Our results demonstrate that the complete NRG1 gene on a 2µ plasmid does not suppress snf2∆ ( Figure 1).
NRG1 encodes a repressor of transcription
To characterize further the role of Nrg1 with respect to SUC2 transcription, we constructed and analyzed an nrg1∆ mutant. The nrg1∆ mutant grows normally on media containing glucose, sucrose, or galactose, demonstrating that NRG1 is not essential for growth and that nrg1∆ mutants can utilize several different carbon sources.
To test the requirement for Nrg1 in glucose repression, we tested growth of an nrg1∆ mutant on YP sucrose media containing the glucose analog 2-deoxyglucose (2-DG).
Glucose repression of transcription is defective in nrg1∆
To test whether the nrg1∆ phenotype on 2-DG plates is caused by altered transcription, we performed Northern analyses of SUC2 mRNA levels. Under repressing conditions (2% glucose), the level of SUC2 mRNA was increased two- to four-fold in an nrg1∆ strain compared to a wild-type control ( Figure 3A). Consistent with previously published results, a mig1∆ mutant had a nine- to fourteen-fold increase in SUC2 mRNA levels, while a mig2∆ mutant had no detectable defect in glucose repression of SUC2 [4,12]. We also analyzed the SUC2
Figure 2. Deletion of NRG1 partially abolishes glucose repression. nrg1∆ allows cells to grow on sucrose plates containing 2-deoxyglucose, and has additive effects with mig1∆ and mig2∆. A single colony of each strain was inoculated into liquid YPD and grown to saturation (approx. 1 × 10 8 cells/ml). The cultures were then diluted 1:2 (upper panels) or 1:5 (lower panels) in sterile water, and spotted on YPD plates and YP sucrose plates with 200 µg/ml 2-deoxyglucose. Plates were photographed after 1 and 2 days of incubation at 30°C.
mRNA levels in double and triple mutant combinations.
In general, multiple mutations caused greater derepression, up to 79-fold for the triple mutant, nrg1∆ mig1∆ mig2∆ ( Figure 3A). These data demonstrate that Nrg1, Mig1, and Mig2 all contribute to the glucose repression of SUC2.
We also tested whether an nrg1∆ mutation affects glucose repression of the GAL genes as described in Materials and Methods. Both nrg1∆ and mig1∆ mutations cause a defect in the glucose repression of GAL1 and GAL10, whereas mig2∆ alone had no effect ( Figure 3B). As for SUC2, additive effects were observed in double and triple mutant strains, up to a 13-fold effect for the nrg1∆ mig1∆ mig2∆ triple mutant ( Figure 3B). These data indicate that all three proteins are involved in glucose repression of GAL1-GAL10, with Mig2 playing only a minor role.
Deletion of MIG1 or NRG1 suppresses mutations in both SNF1 and SWI/SNF genes
Activation of SUC2 transcription depends upon both the Snf1/Snf4 kinase complex and the Swi/Snf nucleosome remodeling complex. To address the relationship of Nrg1 to both complexes and to compare it to Mig1, we tested the abilities of nrg1∆ and mig1∆ to suppress the Gal−, Suc−, and Raf− phenotypes of mutations in SNF1 and SWI/SNF genes.
Our results (Figure 4) show that both nrg1∆ and mig1∆ suppress, albeit sometimes weakly, mutations in both SNF1 and SWI/SNF genes. With respect to suppression of snf1∆, mig1∆ is the stronger suppressor, with suppression detectable for the Gal− phenotype ( Figure 4A). The observed suppression by mig1∆ is consistent with previous results [22]. The nrg1∆ mutation did not detectably suppress either the Suc− or Raf− phenotypes caused by snf1∆. With respect to swi/snf mutations, we tested suppression of both snf2∆ and swp73∆ and observed weak suppression of the Gal− and Suc− phenotypes ( Figure 4B). Suppression of the Raf− phenotype was not detectable. There appear to be some gene-specific interactions, as suppression of swp73∆ by mig1∆ was stronger than the suppression observed for the other pairs tested.
Discussion
Our results demonstrate that Nrg1 plays a role in glucose repression of the SUC2 and GAL genes of S. cerevisiae. Consistent with a role in glucose repression, an nrg1∆ mutation suppresses the defects of a snf1∆ mutant. Recent results from an independent study have demonstrated an interaction between Snf1 and Nrg1 [23]. Our results also suggest that Nrg1 is partially redundant with two other factors required for glucose repression, Mig1 and Mig2. At SUC2 and GAL1-10, all three proteins appear to be involved in glucose repression, because double- and triple-deletion mutations have additive effects. Interestingly, both nrg1∆ and mig1∆ can also suppress the defects caused by mutations in genes encoding members of the Swi/Snf complex.
While Nrg1, Mig1, and Mig2 are partially redundant, current evidence suggests that they do not function in the same relative fashion at all glucose-repressible promoters. For example, while mig1∆ and nrg1∆ cause comparable defects at GAL1-GAL10, nrg1∆ causes a weaker defect at SUC2. Mig2 appears to have only a minimal function at either promoter. In addition, Nrg1 is the major repressor at STA1, whose glucose repression does not require Mig1 [11]. Therefore, some gene-specific specialization exists among these three glucose-dependent repressors.
A previous study of Nrg1 provided evidence that it interacts with Ssn6 and confers repression by recruitment of Ssn6/Tup1 [11]. We initially identified NRG1 in our studies by the isolation of a high-copy-number plasmid encoding a fragment of Nrg1 lacking the zinc-finger domain. The phenotype caused by this plasmid likely results from interference with repression by Ssn6/Tup1.
Our studies have not yet distinguished between a direct or indirect effect of Nrg1 on glucose repression at SUC2 and GAL1-GAL10. One possible indirect effect of Nrg1 could be by regulation of MIG1 transcription. However, Northern analysis showed that MIG1 mRNA levels are unaffected by an nrg1∆ mutation (H. Zhou and F. Winston, unpublished data). We tested Nrg1 for binding to the SUC2 promoter, and those experiments are briefly summarized here. We screened for DNA binding of Nrg1 to the SUC2 promoter region using a previously described GST-Nrg1 fusion protein [11] and a gel shift assay. Our results demonstrated specific DNA binding to two sites within the -1022 to -825 region 5' of SUC2 (H. Zhou and F. Winston, unpublished results). However, a deletion of this region does not alter SUC2 expression. Based on the similarity between the zinc fingers of Nrg1 and Mig1 and our binding studies, the binding site of Nrg1 may contain a GC-rich core. Another such site in the SUC2 promoter may occur at -570 with the sequence AGGCCCA. Although we did not detect a gel shift of a fragment containing this site, it is still possible that it is recognized and bound by Nrg1 in vivo. Furthermore, although an Nrg1 consensus binding site [11] exists at -976 of SUC2, we were unable to detect binding to this site by GST-Nrg1. This region also did not compete the binding that we detected by GST-Nrg1. This discrepancy between our findings and previous results can be explained by the fact that Park et al. [11] used 10-fold more GST-Nrg1 in their binding studies than we did.
Figure 3. Deletion of NRG1 causes defects in glucose repression. (A) A single colony of each strain was inoculated into YPD liquid with 2% glucose and grown to mid-log phase (approx. 1 × 10 7 cells/ml). The cells were harvested, and total RNA was isolated and analyzed by electrophoresis followed by hybridization with probes specific to SUC2 or SPT15. The intensity of each band was quantified using a phosphorimager and ImageQuant software. The amount of SUC2 mRNA in each strain was normalized to SPT15, and the result obtained for the wild-type strain was assigned the arbitrary value of 1.0 and used to calculate the relative SUC2 mRNA levels in the other strains. (B) Northern analysis of GAL1-10 mRNA in mutant strains. A single colony of each strain was inoculated into SD complete liquid with 2% glucose + 2% galactose and grown to mid-log phase. The cells were harvested, and total RNA was isolated from each strain and analyzed by electrophoresis followed by hybridization with probes specific to GAL1, GAL10 or SPT15. Quantitation was carried out as for (A).
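The normalization described in the legend above is straightforward to reproduce from band intensities. The sketch below is a minimal illustration with made-up phosphorimager counts (not the values behind Figure 3): each SUC2 signal is divided by its SPT15 loading control, then expressed relative to the wild-type ratio.

```python
def relative_mrna(suc2_counts, spt15_counts, reference="wild type"):
    """Normalize SUC2 band intensities to SPT15 and express them
    relative to the reference strain (set to 1.0)."""
    ratios = {s: suc2_counts[s] / spt15_counts[s] for s in suc2_counts}
    ref = ratios[reference]
    return {s: round(r / ref, 1) for s, r in ratios.items()}

# Hypothetical counts for illustration only:
suc2 = {"wild type": 1200, "nrg1": 3800, "mig1": 14500, "nrg1 mig1 mig2": 96000}
spt15 = {"wild type": 1000, "nrg1": 980, "mig1": 1050, "nrg1 mig1 mig2": 1010}
print(relative_mrna(suc2, spt15))
```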
Finally, we did not detect any binding of Nrg1 to the Mig1 binding sites. Thus, the DNA binding of Nrg1 to SUC2 remains to be resolved.
Conclusions
In conclusion, these studies have identified Nrg1 as a third repressor required for glucose repression at SUC2 and the GAL genes. Based on the similarity between the zinc fingers of Nrg1 and Mig1, the phenotypes of nrg1∆ and mig1∆, and the reported interaction between Nrg1 and Ssn6 [11], Nrg1 likely functions by binding to the target promoters and recruiting the Ssn6/Tup1 complex. The relative and possibly cooperative roles of each of these repressors in recruiting Ssn6-Tup1 remain to be determined.
Materials and methods
Yeast strains
All S. cerevisiae strains are listed in Table 1 and are in the S288C genetic background [25,26]. Deletion of MIG1 was achieved by transforming strain yHZ416 with the HindIII digest of pJN22 (for mig1-∆2::LEU2) or pJN41 (for mig1-∆2::URA3) [4], and selecting for Leu + or Ura + transformants, respectively. PCR-directed gene replacement [27] was used to construct deletions of NRG1 and
Figure 4. Mutations in SNF1 and SNF/SWI can be suppressed by both nrg1∆ and mig1∆. A single colony of each strain was inoculated into YPD liquid, grown overnight to saturation and adjusted in water to 1 × 10 8 cells/ml. The cultures were then diluted 1:2 in sterile water and spotted on YPD, YP galactose and YP sucrose plates, with uracil added to each plate to 80 µM. The first spot of each row represents a cell count of 5 × 10 7 cells/ml, which is diluted 1:4 for the second spot and 1:2 for each spot thereafter. YPD and YP sucrose plates were photographed after incubation at 30°C for 2 days, and YP galactose plates were photographed after 5 days.
Media
The media used in this study were previously described [29]. Glucose, galactose, sucrose or raffinose was added to 2% final weight per volume. For solid media containing a carbon source other than glucose or glycerol, antimycin A was also added to a concentration of 1 µg/ml. To test for glucose repression of SUC2 and the GAL genes, 2-deoxyglucose was added to YP sucrose-antimycin A and YP galactose-antimycin A plates to a final concentration of 200 µg/ml [4]. We discovered during the course of this study that a ura3∆0 strain had half the amount of GAL1-10 mRNA of a URA3 strain when grown in SD media containing 2% glucose and 2% galactose. A ura3∆0 strain also grew more slowly than a URA3 strain on minimal media containing sucrose or galactose. We do not yet have an explanation for this phenomenon. To overcome this growth defect, uracil was added to YP plates to a final concentration of 80 µM.
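For comparison across media recipes, mass-per-volume concentrations such as the 200 µg/ml of 2-deoxyglucose above can be converted to molar units. The sketch below does this conversion; the molar mass used (roughly 164.16 g/mol for 2-deoxy-D-glucose) is supplied here as an assumption and should be checked against a reference before use.

```python
def ug_per_ml_to_mM(conc_ug_per_ml: float, molar_mass_g_per_mol: float) -> float:
    """Convert a mass concentration (ug/ml) to millimolar."""
    # ug/ml is equivalent to mg/l; dividing by g/mol gives mmol/l
    return conc_ug_per_ml / molar_mass_g_per_mol

# 200 ug/ml 2-deoxyglucose, assuming M ~ 164.16 g/mol
print(round(ug_per_ml_to_mM(200.0, 164.16), 2))   # -> ~1.22 mM
```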
Subcloning of NRG1 constructs
The 1.8 kb SacI-SalI fragment of the original library clone, containing only the 5' half of NRG1 without the zinc fingers, was cloned into the SacI-SalI sites of pRS426 to create pHZ56. To clone the complete NRG1 ORF, HZ032 and HZ033 were used to PCR-amplify the complete wild-type NRG1, from -1119 to +719, from genomic DNA. The PCR fragment was digested with SacI and cloned into the SacI-SmaI sites of pRS426 to generate pHZ52.
Northern analysis
Cell cultures were grown in liquid media as indicated to mid-log phase (1-2 × 10 7 cells/ml), and total RNA was prepared as previously described [27,30]. RNA was separated by electrophoresis on 1% agarose-formaldehyde gels, transferred to a membrane and blotted with specific radiolabeled probes. The probes were: for SUC2, the 1.3 kb BamHI-HindIII fragment of pRB59 [31]; for GAL1-10, the 2 kb EcoRI-EcoRI fragment of BNN45 [32]; and for SPT15, the 0.8 kb SpeI-HindIII fragment of pIP45 (I. Pinto, personal communication). All probes were labeled by random priming. | 4,267.2 | 2001-03-19T00:00:00.000 | [
"Biology"
] |
Taking Initiative in Human-Robot Action Teams: How Proactive Robot Behaviors Affect Teamwork
People coordinate in action teams to accomplish shared goals in safety- and time-critical contexts such as healthcare and firefighting. Operational failures in these teams can cost lives. Therefore, in our work, we explore how robots that take initiative to support team and task success can augment action teams. We designed and studied proactive robot behaviors in non-dyadic (three humans and a robot) action teams to better understand factors that support the acceptance and effectiveness of proactive robots. Our work will support better human-robot teaming in action teams by addressing gaps in how robot initiative impacts team dynamics, preferences for proactive robot behaviors, and how proactive robots can take initiative based on task states to empower their teammates.
INTRODUCTION
Collaborating in teams is an essential aspect of our species. We farm for food, play sports for entertainment and competition, and build bridges and cities to improve the quality of our lives. All of these require us to coordinate with one another in a fluent, cohesive, and proactive manner.
Figure 1: Our work aims to support action teams by designing proactive robots that can take initiative to advance team goals. We explore this within the context of escape rooms using Stretch, a mobile manipulator robot. From left to right: Stretch interacts with participants during an icebreaker, participants solve a puzzle and the robot hands over an item they need, and Stretch reminds users about safety considerations while they administer first aid.
One type of team that is of particular interest is the action team. In action teams, members perform in fast-paced, unpredictable situations and use their specialized skills to improvise and coordinate their actions [13]. For example, emergency medical teams are action teams.
Robots have the potential to augment action teams by taking on redundant and/or unsafe tasks, allowing people to focus on tasks that require their specialized skills. However, given the time- and safety-critical nature of these teams, failure can be costly, possibly leading to massive financial losses, grave harm, or death. Therefore, robots must be well contextualized to the needs of the team and its tasks in order to be helpful.
Robots in action teams should be able to work fluently to support task progression and safety. This is because robots can affect team dynamics, such as by increasing intrateam communication [41,42], regulating conflict [23], and furthering trust-related behavior [43].
One robot behavior that is of particular interest is proactivity. Proactive robots anticipate team needs and take initiative to support teamwork. For example, Baragalia et al. [4,5] have shown that people prefer to interact with robots that exhibit proactive behaviors and consider them to be better teammates. However, prior work has primarily focused on dyadic human-robot interactions or studied proactive speech and proactive interactions independently. In the context of action teams, this is problematic, as these teams likely consist of multiple team members, and their shared understanding is impacted by both behavior and dialogue.
Our work aims to design proactive robots that are effective in action teams.
Through our research, we consider 1) how teams perceive different levels of robot proactivity, 2) what factors guide their preferences, and 3) how these behaviors impact team dynamics. Therefore, our current research objectives include understanding: 1) the factors that impact preferences for proactive robot behaviors in action teams, and 2) how robot proactivity affects team dynamics such as team cohesion, trust, and shared mental models.
Our research will support the understanding of team dynamics and develop new algorithms through which robots can make sense of the world and act to support their teammates.
RELATED WORK
Human-Robot Action Teams: Researchers have modeled robot behaviors on how humans collaborate in order to support human-robot teaming. Robots are able to identify human intent, anticipate the next steps of tasks, and adapt their actions accordingly [12,18,19,26,31,37,40]. They can also perform temporal adaptations during group synchronization tasks [21,22]. Well-designed and contextualized robots can support action teams in acute care [16,28,44,45].
Proactive Robots: Robot initiative can support humans with a diverse array of tasks, ranging from activities of daily living [33,34,37] to simple assembly tasks [30]. Proactive robots are known to improve team performance and may reduce robot and user idle time [20,29]. They are perceived better in terms of likability, performance, and team effectiveness [2,5,50]. Proactive dialogue is perceived as trustworthy and sympathetic [25].
However, human teammates may have negative perceptions of proactive robot behavior as well.For example, proactive robots may come across as interrupting and more controlling [35], and harder to use [14].Additionally, they may add to a person's cognitive load and reduce situational awareness [50].Therefore, further investigation is required before integrating proactive robots into action teams.
Escape Rooms: Escape room games have been used to study human-human [3,9,17,49] and human-robot teaming [15,47].Escape rooms can be designed to simulate safety-and time-critical tasks that impose interdependence between team members.This makes them suitable for studying teaming.
METHODOLOGY
We conducted a 2 (escape room: medical vs. hazard) x 2 (robot behavior: proactive vs. passive) mixed methods within-subject study to understand human-robot teaming with proactive robots. We counterbalanced the conditions to reduce order and game effects. The study was conducted using a constrained Wizard of Oz (WoZ) [38] paradigm across the two robot conditions. In the passive condition, the robot only acted and communicated when a human teammate initiated an interaction or request. In the proactive condition, the Wizard followed a script that guided how the robot should take initiative to support team progress. These robot behaviors are presented in Fig. 2.
Escape Room Design: We designed two escape rooms with different themes (medical and hazard) to mitigate learning effects. We designed the tasks to require similar effort to complete and to mitigate the game design's influence on the results. We extensively piloted the design of both escape rooms to ensure that the escape room tasks: necessitated teamwork, were fun to complete, had human-robot interdependence, and could be completed in 20 minutes. In our study, 5 ad-hoc teams of 4 participants (3 people and a robot) completed two 20-minute escape rooms.
Robot Design: We used the Stretch robot to conduct the study (see Fig. 1). Stretch is a mobile manipulator able to move around, pick and place objects, and perform audio communication. We added a tablet to Stretch which displayed a blinking face to support natural interaction [1,24]. We used Siri's gender-neutral voice [36] for Stretch to mitigate gendered perceptions of the robot. All verbal utterances for Stretch were pre-recorded to standardize interactions across teams.
Measures: We collected data using sensors, surveys, and interviews. In this report, we summarize the results of the interview data while our analysis of the other data is ongoing.
Sensors: We recorded the participant teams using 10 cameras for full room coverage. These cameras streamed wirelessly to an app on two tablets which the researchers used to track participants inside the room. Participants also wore wireless lapel microphones to record audio data and inertial measurement unit (IMU) sensors on their dominant hand to record activity data.
Group Interviews: Teams also participated in semi-structured group interviews where they shared their experiences of teaming with the robot. Three researchers independently coded the data and, over several discussions, generated themes from the data.
Procedure: Once participants arrived, they were greeted by a researcher. After introductions, they took the NARS survey. Participants then put on unique colored vests and the sensors. After this, they were informed about the escape room rules and then participated in the two escape room games. A survey and a group interview followed each escape room game. The entire study from arrival to conclusion took around 120 minutes.
Results and Discussion: We analyzed interview data using reflexive thematic analysis (RTA) [6,7]. Results of our qualitative analysis of the group interviews revealed that teams differed in their preference for proactive and passive robot behaviors based on a number of factors. Teams that preferred the proactive robot considered the robot's perceived capabilities, independence, and helpfulness as guiding factors for their preference.
On the other hand, teams that preferred the passive robot considered control over the robot and the task, quality of robot communication, and perceived robot capabilities as factors that motivated their preference. All teams also anthropomorphized the robot; some teams wanted more naturalistic communication with the robot and trusted their human teammates over the robot.
Our results suggest that when the perceived cost to the team is high, people prefer to rely on each other rather than a robot, desire more control over the task and robot, and have a lower tolerance for when the robot makes a mistake. Even though participants desired more naturalistic communication with the robot, we must also consider how to align the desire for fluent robot speech with ethical considerations such as accuracy of information when using large language models (LLMs) [48]. Other ethical considerations include how to prevent proactive robot behavior from misleading people about its capabilities and knowledge, as well as worker displacement [8,11] and the impact of proactive robots on reshaping workspaces for humans.
ONGOING AND FUTURE WORK
We are currently conducting a qualitative analysis of video and audio data to identify behavior patterns in action teams when interacting with passive and proactive robots. This will support our understanding of team dynamics and what makes a good human-robot action team.
We will analyze the survey data to further our understanding of how teams strategize and how robot behaviors affect team dynamics such as cohesion, trust, and shared mental models. In the future, we will use the collected sensor data (video, audio, and IMU) to build robots that are capable of understanding the world state and can generate the actions they need to take proactively.
We will also analyze the video and IMU data to develop activity detection models, which will then inform how we build future robot controllers, for example, controllers that allow robots to execute the desired action based on the task context and team progress. We will evaluate our controllers on an autonomous robot in a similar experimental paradigm.
Our research contributes to an improved understanding of human-robot team dynamics in fast-paced, uncertain environments. This will lead to the design of research-informed robot behaviors that can positively influence team dynamics, improving robot acceptance and team outcomes. Through our research, we aim to address the challenges of worker safety and robot effectiveness in dynamic environments.
Figure 2: Examples of behaviors from the proactive and passive robot conditions. (a): The proactive robot notices the team is stuck and takes initiative to assist its teammates. (b): The passive robot waits for a human request before assisting the team. | 2,398.2 | 2024-03-11T00:00:00.000 | [
"Computer Science",
"Engineering",
"Psychology"
] |
Potential for Vertical Heterogeneity Prediction in Reservoir Basing on Machine Learning Methods
With the rapid development of computer technology, some machine learning methods have begun to gradually integrate into the petroleum industry and have achieved some success, whether in conventional or unconventional reservoirs. This paper presents an alternative method to predict the vertical heterogeneity of a reservoir utilizing various deep neural networks based on dynamic production data. A numerical simulation technique was adopted to obtain the required dataset, which contains dynamic production data calculated under different heterogeneous reservoir conditions. Machine learning models were established through deep neural networks, which learn and capture the relationship between dynamic production data and reservoir heterogeneity, so as to invert the vertical permeability. On the basis of model validation, the results show that machine learning methods have excellent performance in predicting heterogeneity, with an RMSE of 12.71 mD, effectively estimating the permeability of the entire reservoir. Moreover, the overall AARD of the predictive results obtained by the CNN method was controlled at 11.51%, revealing the highest accuracy compared with the BP and LSTM neural networks. The permeability contrast, an important parameter for characterizing heterogeneity, can be predicted precisely as well, with a deviation of below 10%. This study demonstrates the potential for vertical heterogeneity prediction in reservoirs based on machine learning methods.
Introduction
Reservoir heterogeneity is one of the important characteristics in reservoir description, and it has a significant impact on fluid flow and oil recovery. At the same time, it provides fundamental information to build reliable reservoir models and plays a crucial role in reservoir development. This is especially true of vertical heterogeneity, which is a key element in predicting the distribution of remaining oil in the reservoir. Permeability, as a primary property for characterizing heterogeneity, describes the ability of fluids to flow underground when subjected to applied pressure gradients.
Different types of data that can be directly obtained in oil fields have been studied and utilized to predict vertical permeability in reservoirs. Deutsch [1] presented a consistent numerical modelling framework to obtain the vertical permeability based on core data, conventional well logs, high-resolution image logs, and detailed geological interpretation. In addition, Russell et al. [2] utilized the existing High-Resolution Dipmeter Tool, Formation MicroScanner, and conventional log data to characterize and extrapolate geological heterogeneity. Moreover, Perez and Chopra [3] adopted the successive random addition technique combined with data comprising cores and logs of adjacent wells. Cores and well log data are thus commonly used to analyze vertical heterogeneity on account of their high vertical resolution. However, there are many limitations associated with the estimation of vertical permeability. On one hand, it is difficult to collect representative core measurements. On the other hand, the high viscosity of some fluids makes it impossible to perform well testing. Above all, cores and well logging data can reflect vertical permeability only at certain locations in the reservoir rather than characterizing it as a whole. Dynamic production data, such as bottom hole pressure, oil production, and water cut data, can be utilized to overcome these problems. This is because some characteristics of the entire reservoir are reflected by the dynamic variation of these production data. Meanwhile, dynamic production data is also the most accessible and effective data we can get, and it contains information on vertical heterogeneity. Machine learning methods can learn and capture the relevant characteristics between dynamic production data and vertical permeability, making it possible to acquire vertical permeability using production data.
In recent years, with the development of artificial intelligence technology, a large number of machine learning methods have been widely applied in the petroleum industry and show better performance compared with traditional methods. Talebi et al. [4] estimated reservoir saturation pressure efficiently based on the multilayer perceptron neural network (MLP) and the radial basis function (RBF) network. Zhang et al. [5] adopted the long short-term memory neural network to predict water saturation distribution in reservoirs. Schuetter et al. [6] applied random forest (RF), support vector regression (SVR), and gradient-boosting machine (GBM) methods to establish a prediction model for oil production in unconventional shale reservoirs. Silpngarmlers et al. [7] trained BP neural networks on relative permeability curve data collected from a number of papers and experiments to develop liquid/liquid and liquid/gas two-phase relative permeability predictors. Zhu et al. [8,9] predicted total organic carbon content and proposed a TOC logging evaluation method based on semisupervised learning. Moreover, the permeability of a tight gas reservoir was inverted utilizing a deep Boltzmann kernel extreme learning machine as well [10]. Tian and Horne [11,12] combined machine learning with improved Pearson correlation coefficients to evaluate the connectivity among wells. Lim [13] provided an intelligent technique comprising fuzzy logic and neural networks to obtain reservoir property estimates from well logs. In summary, machine learning methods have penetrated into various fields of the petroleum industry and achieved success in many aspects.
The reason why machine learning methods [14][15][16] are successful and have attracted widespread attention is that they can find the nonlinear relationship between multiple variables without physical models. This advantage is very suitable for solving some problems in the petroleum industry that are caused by complicated internal parameter relations. In this study, three neural network methods are used to learn and capture the relationship between the dynamic production data of a reservoir and its vertical permeability: the back propagation (BP) neural network, the Convolution Neural Network (CNN), and the long short-term memory (LSTM) neural network. The BP neural network has better performance in dealing with nonlinear mapping relations due to the continuous improvement of its internal back propagation algorithm [17]. The reason why CNN can be widely applied is its unique data feature extraction approach, which has also achieved success in the field of oil reservoirs [18,19]. Of course, this method of extracting features can also be used for dynamic production data. The LSTM neural network [20], as a variant of the traditional recurrent neural network, is better at dealing with typical time series problems. This paper is composed of four aspects. First, we establish oil-water two-phase flow models by numerical simulation to calculate oil production, water cut, and pressure data under different heterogeneous reservoir and permeability contrast conditions, which are treated as the feature dataset for machine learning. Secondly, machine learning models were established through different neural networks, including the BP neural network, CNN, and LSTM neural network. Thirdly, based on the dataset, the various machine learning models are trained to learn and capture the characteristics between dynamic production data and vertical permeability. Finally, we compare the accuracy and the calculation time to verify the prediction performance of the machine learning methods. The study provides an alternative way for quick prediction of vertical permeability utilizing machine learning methods based on dynamic production data.
Methodology
Three deep neural networks are utilized to establish machine learning models which can capture characteristics between dynamic data and vertical heterogeneity in reservoirs, including CNN, BP, and LSTM neural networks.
2.1. BP Neural Network. The BP neural network [21,22] is constructed by the structure of multilayer perceptron combined with the back propagation algorithm. In view of the continuous improvement of the back propagation algorithm, the BP neural network can show better performance when capturing nonlinear mapping relationships between variables. There are two stages in the operation process of the BP neural network: forward propagation and backward propagation. Forward propagation is a learning process of neural networks that captures the changing characteristics of dynamic production data and outputs the prediction results of vertical permeability. The learning result is memorized by weights and thresholds in each neuron of neural networks. After that, loss function is used to calculate the error between the model prediction and the actual vertical permeability. Finally, based on the back propagation algorithm, the error is utilized to adjust and update the weights and thresholds in the neurons, completing one training session of the neural network. By continuously repeating the above two stages, the neural network will be constantly trained so that the predicted value can gradually approach the real permeability.
The BP neural network structure designed in this study is shown in Figure 1. The input layer of the model is composed of water cut, oil production, and water injection pressure data, each of which contains 1080 neurons. In order to fully capture the changing characteristics of data, the hidden layer is designed as three layers, with 50 neurons in each layer. Finally, the number of neurons in the output layer is 5, representing the vertical permeability of the five small layers in the simulated reservoir.
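As a concrete illustration of this architecture, the sketch below builds the described 3 × 1080 → 50 → 50 → 50 → 5 fully connected network. PyTorch and ReLU activations are assumptions for illustration; the text does not state which framework or activation function was used.

```python
# A minimal sketch of the BP (fully connected) network described above,
# assuming PyTorch and ReLU activations (neither is specified in the text).
import torch
import torch.nn as nn

class BPNet(nn.Module):
    def __init__(self, n_points: int = 1080, n_layers_out: int = 5):
        super().__init__()
        # Water cut, oil production, and injection pressure (3 x 1080 points)
        # are concatenated into a single 3240-dimensional input vector.
        self.model = nn.Sequential(
            nn.Linear(3 * n_points, 50), nn.ReLU(),
            nn.Linear(50, 50), nn.ReLU(),
            nn.Linear(50, 50), nn.ReLU(),
            nn.Linear(50, n_layers_out),  # permeability of the five small layers
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.model(x)

# Example: a batch of 8 samples, each with three 1080-point production series.
dummy = torch.randn(8, 3 * 1080)
print(BPNet()(dummy).shape)  # torch.Size([8, 5])
```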
Convolution Neural Network.
The convolutional neural network [23][24][25], as one of the widely used deep learning algorithms, consists of the following five parts: the input layer, convolution layer, pooling layer, fully connected layer, and the output layer. The biggest advantage of this network is that it can extract local data features through convolution and pooling operations. Therefore, it can have a good performance when the data points entered as features are particularly large. The network structure of CNN designed in this study is shown in Figure 2. First, in order to facilitate the extraction of the features of the data by the convolution kernel, the reshaping of the data is required. The oil production (data format: 1 × 1080), water cut (1 × 1080), and water injection pressure data (1 × 1080) were integrated into a dataset (3 × 1080). The dataset was reshaped to form a new data format (48 × 60). Then, two convolutional layers have been added to extract the data features, with a number of 64 and 128 convolution kernels, respectively. The size of the convolution kernel in the two convolutional layers is 3 × 5 and 2 × 3, respectively. At the same time, the max pooling function and a kernel size of 2 × 2 were adopted in the two pooling layers. At last, the fully connected layer was composed of three hidden layers with 30 neurons per layer.
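The sketch below mirrors the CNN layout described above (two convolution/pooling stages followed by three 30-neuron hidden layers and a 5-neuron output). PyTorch, ReLU activations, stride-1 convolutions, and the absence of padding are assumptions, and because the exact reshaping from the 3 × 1080 dataset to the 48 × 60 format is not fully specified, the sketch simply takes the reshaped 48 × 60 array as its input.

```python
# A minimal sketch of the CNN described above, assuming PyTorch, ReLU activations,
# stride-1 convolutions, and no padding; the reshaped 48 x 60 input is taken as given.
import torch
import torch.nn as nn

class CNNNet(nn.Module):
    def __init__(self, n_layers_out: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(3, 5)), nn.ReLU(),   # 1x48x60 -> 64x46x56
            nn.MaxPool2d(2),                                    # -> 64x23x28
            nn.Conv2d(64, 128, kernel_size=(2, 3)), nn.ReLU(),  # -> 128x22x26
            nn.MaxPool2d(2),                                    # -> 128x11x13
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 11 * 13, 30), nn.ReLU(),
            nn.Linear(30, 30), nn.ReLU(),
            nn.Linear(30, 30), nn.ReLU(),
            nn.Linear(30, n_layers_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 48, 60), the reshaped production-data "image"
        return self.regressor(self.features(x))

dummy = torch.randn(8, 1, 48, 60)
print(CNNNet()(dummy).shape)  # torch.Size([8, 5])
```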
LSTM Neural Network.
LSTM [26,27], a variant of the recurrent neural network (RNN), has inherited most of the characteristics of RNN [28]. The emergence of LSTM solves the problem of gradient vanishing easily caused by RNN during training. It adds 3 "gate units" to judge and process the input information based on the structure of RNN. There is a memory called the "cell state" in the LSTM neural network, which can store past information, such as production, water cut, and pressure data, over many months. Due to the existence of the "gate units" and the "cell state," LSTM has gradually become a research hotspot in the field of machine learning in recent years. In this experiment, the input data of LSTM is composed of oil production, water cut, and injection pressure (3 × 1080); the time step designed in the network is 100.
The first key functional gate is the "forget gate layer." This gate determines which part of the information should be thrown away from the cell state, as shown in Equation (1):

f_t = AF(W_f · [h_{t-1}, x_t] + b_f)  (1)

Another functional gate is named the "input gate layer," which determines what new information from the input data of this time step needs to be stored in the cell state, as shown in Equations (2)-(4):

i_t = AF(W_i · [h_{t-1}, x_t] + b_i)  (2)
g_t = tanh(W_g · [h_{t-1}, x_t] + b_g)  (3)
C_t = f_t * C_{t-1} + i_t * g_t  (4)

Finally, the last gate, which controls the output results of every step, is named the "output gate layer." It generates the output information h_t based on the updated cell state C_t and the input data x_t and h_{t-1}, as shown in Equations (5)-(6):

o_t = AF(W_o · [h_{t-1}, x_t] + b_o)  (5)
h_t = o_t * tanh(C_t)  (6)

where x_t is the input data of a single step, h_t is the output data of this step, h_{t-1} represents the output data of the last step (which is fed back into the network), C_{t-1} and C_t are the previous and updated cell states, g_t is the candidate cell state, f_t, i_t, and o_t are the forget, input, and output gate vectors, W and b denote the weight matrices and bias vectors of the corresponding gates, AF is the gate activation function (typically the sigmoid function), and * denotes element-wise multiplication.
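A minimal sketch of such an LSTM regressor is given below, assuming PyTorch. The hidden size of 64 and the use of the final hidden state as the input to the regression head are illustrative assumptions; the text only specifies the 3 × 1080 input series and a time step of 100.

```python
# A minimal sketch of the LSTM model described above, assuming PyTorch.
# Hidden size and the last-hidden-state readout are assumptions.
import torch
import torch.nn as nn

class LSTMNet(nn.Module):
    def __init__(self, n_features: int = 3, hidden_size: int = 64, n_layers_out: int = 5):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_layers_out)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time steps, features); only the final hidden state feeds the regressor.
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])

# Example: 1080 daily records of oil production, water cut, and injection pressure.
# How the series is grouped into the stated "time step of 100" is not fully specified,
# so the full daily sequence is fed here for illustration.
dummy = torch.randn(8, 1080, 3)
print(LSTMNet()(dummy).shape)  # torch.Size([8, 5])
```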
Model Evaluation.
Model evaluation is of great significance not only for the training process of the neural network but also for the prediction stage after training. In each training process of the neural network, the root mean square error (RMSE) is often used as a loss function to test each training result of the model. Because of the existence of the loss function, the results of the machine learning model can better approximate the expected real value. After the training is completed, some statistical test criteria will be applied to verify the predictive performance of the machine learning model, including the average relative deviation (ARD) and the average absolute relative deviation (AARD) [29,30]. The relevant formulas are given as follows:

RMSE = sqrt[ (1/N) * Σ_{i=1}^{N} (X_i^model - X_i^data)^2 ]

ARD = (1/N) * Σ_{i=1}^{N} [ (X_i^model - X_i^data) / X_i^data ] × 100%

AARD = (1/N) * Σ_{i=1}^{N} [ |X_i^model - X_i^data| / X_i^data ] × 100%

where N represents the total number of data in each set, X_i^data is the expected data value from each set, and X_i^model is the corresponding data value calculated by the neural network.

It is difficult to build such a dataset directly from field measurements. On the one hand, due to the complexity of the geological conditions, the average permeability of each small layer in the vertical direction of the reservoir is difficult to evaluate and test accurately. On the other hand, as a result of human interference and some limitations of monitoring equipment, dynamic production data tend to have more noise points and missing values, which have a great impact on the training of the model. Therefore, numerical simulation technology is a good means of collecting the dataset. This study takes a five-point well pattern as an example, and the established reservoir model is shown in Figure 3. The reservoir is vertically divided into five small layers, each of which is set with a different permeability to simulate heterogeneity. In the horizontal direction, the reservoir is considered homogeneous. A three-dimensional grid structure with a size of 10 × 10 × 5 is established to simulate a reservoir of 500 × 500 × 40 (ft). According to the heterogeneity of the reservoir, a total of 400 sample reservoirs were designed, covering the MIP reservoir (100 samples), MDP reservoir (100 samples), IRP reservoir (160 samples), and homogeneous reservoir (40 samples). Taking the MIP reservoir as an example, the permeability of the first layer is the smallest, the permeability of the fifth layer is the largest, and the middle layers are arranged according to an arithmetic progression (e.g., the permeability of layers 1 to 5 is 20, 40, 60, 80, and 100 mD). The permeability of the first layer ranges from 20 mD to 200 mD, with a step size of 20 mD. The permeability of the fifth layer is increased, relative to the first layer, by a value between 20 and 200 mD, also with a step size of 20 mD. Based on the oil-water two-phase flow model, this experiment used the classic IMPES [31,32] method to obtain 400 sets of sample data, each of which contained the vertical permeability of the reservoir and the dynamic production data under the corresponding reservoir conditions. The duration of the simulation is 1080 days, and the model-specific parameters are shown in Table 1.
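The evaluation criteria defined in the Model Evaluation subsection above translate directly into code; the following is a minimal NumPy sketch of RMSE, ARD, and AARD, exercised on a dummy five-layer permeability profile.

```python
# Straightforward NumPy implementations of the evaluation criteria defined above
# (RMSE as the training loss, ARD and AARD as post-training test statistics).
import numpy as np

def rmse(x_data: np.ndarray, x_model: np.ndarray) -> float:
    return float(np.sqrt(np.mean((x_model - x_data) ** 2)))

def ard(x_data: np.ndarray, x_model: np.ndarray) -> float:
    return float(np.mean((x_model - x_data) / x_data) * 100.0)

def aard(x_data: np.ndarray, x_model: np.ndarray) -> float:
    return float(np.mean(np.abs(x_model - x_data) / x_data) * 100.0)

# Example with a dummy five-layer permeability profile (mD).
true_k = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
pred_k = np.array([22.0, 37.0, 63.0, 78.0, 105.0])
print(rmse(true_k, pred_k), ard(true_k, pred_k), aard(true_k, pred_k))
```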
The dynamic data contains a large number of characteristics that can reflect the vertical heterogeneity of the reservoir. Take the water cut curve as an example, as shown in Figure 4. Under the condition that the average permeability of the entire reservoir is equal, different heterogeneous reservoirs are simulated by changing the permeability of each small layer. The average permeability of all reservoirs in the figure is 150 mD. And the MIP reservoir is composed of 5 small layers with permeability of 50 mD, 100 mD, 150 mD, 200 mD, and 250 mD in turn. The distribution of the MDP reservoir is exactly opposite to that of the MIP reservoir. A reservoir in which the permeability of all small layers is set to 150 mD is selected as one of the representatives of the homogeneous reservoir. The permeability of layers 1 to 5 in the IRP reservoir is 100 mD, 150 mD, 200 mD, 150 mD, and 100 mD. It can be clearly seen that the most obvious characteristic shown by the water cut data is the breakthrough time of water. Under the same average permeability, the breakthrough time in the MIP reservoir is the earliest compared to other reservoirs. In addition, the water cut curves of reservoirs with different heterogeneity have diverse changing characteristics. In order to verify the accuracy of the model, we also draw a comparison chart of water saturation as shown in Figure 5, which describes the water saturation distribution of the different reservoirs at the 1080th day. The first, third, and fifth layers of the five small layers were selected to make it easier to observe the features. It can be
seen that the small layer with low permeability (the first layer of the MIP reservoir and the fifth layer of the MDP reservoir) shows a better oil displacement effect in the MDP reservoir, which is consistent with the expected physical behavior.
Training Process.
First, we divided the obtained dataset consisting of 400 sets of sample data into a training set and a test set according to a certain proportion [33]. In each dataset, the dynamic production data is used as the input to the model, which facilitates the neural network capturing the characteristics of the data. The output of the model is the vertical permeability, which is the predicted result of the neural network after learning. Then, based on the input and output of the training set, the neural network continuously updates the weights and thresholds through the back propagation algorithm to train the machine learning model. Since the machine learning model has a memory of the training set data, it is necessary to utilize the test set to verify the accuracy of the model prediction. The input of the test set is imported as new data into the trained model to obtain the predicted result, which is compared with the output of the test set to verify the validity of the model. Finally, the machine learning models generated by the different neural networks are comprehensively compared in various aspects in order to select the optimal model. In this experiment, the partition ratio of the dataset is 8:2. There are 320 groups of sample data in the training set and 80 groups of sample data in the test set. The data flow is shown in Figure 6.
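The sketch below illustrates this data flow under stated assumptions: it reuses the hypothetical BPNet from the earlier sketch, splits 400 dummy samples 8:2, trains with the Adam optimizer on an MSE loss, and reports the test RMSE. The optimizer, learning rate, batch size, and epoch count are not specified in the text and are chosen only for illustration.

```python
# An illustrative 8:2 train/test workflow, assuming PyTorch and the BPNet sketch
# shown earlier; dummy data stands in for the 400 simulated samples.
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

X = torch.randn(400, 3 * 1080)          # flattened 3 x 1080 production series
y = torch.rand(400, 5) * 180 + 20       # five-layer permeabilities, roughly 20-200 mD

train_set, test_set = random_split(TensorDataset(X, y), [320, 80])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = DataLoader(test_set, batch_size=80)

model = BPNet()                          # defined in the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()             # RMSE is reported as sqrt(MSE)

for epoch in range(100):
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()

model.eval()
with torch.no_grad():
    xb, yb = next(iter(test_loader))
    test_rmse = torch.sqrt(loss_fn(model(xb), yb)).item()
print(f"test RMSE: {test_rmse:.2f} mD")
```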
Model Calibration.
There are two main factors that affect the accuracy of machine learning prediction. On the one hand, the quality of data plays a crucial role because the model needs to learn and capture the characteristics of data.
On the other hand, the structure of the model has a significant influence on the results, such as the setting of the hidden layers. In the process of model training, the back propagation algorithm is based on the chain rule, and the number of hidden layers is related to the complexity of the derivation in the chain rule. In theory, the more layers there are, the better the neural network can simulate complex operations and nonlinear mappings among variables. However, if there are too many hidden layers, vanishing and exploding gradients will be triggered in the calculation process, resulting in wasted computing resources and inaccurate predictions. Therefore, it is necessary to calibrate the number of hidden layers in the machine learning model. In this study, the other parameters are fixed, and the AARD value calculated by the model is used to observe the influence of different numbers of hidden layers on the prediction accuracy. It can be seen from Figure 7 that, in the case of the convolutional neural network, when the number of layers was increased to 2, the AARD value of the model showed a significant decline, from 21.01% to 14.83%. When the number of layers was increased further to 3, the AARD value dropped to 11.41%, indicating that the accuracy of the forecast was still improving. However, as the number of layers became 4 or 5, AARD was basically stable compared with 3 layers and no longer changed evidently. Based on the comprehensive consideration of accuracy and computational resources, the hidden layer structure of 3 layers was selected to better predict the vertical permeability of the reservoir.
Model Comparison and Error Analysis.
As described in Section 2.4, the loss function is utilized to estimate the overall error of the model after each training iteration. Therefore, we can supervise the training of the model in general through the decline curve of the loss function. The loss function chosen for this experiment is RMSE.
As shown in Figure 8, the horizontal axis represents training times of the neural network, and the vertical axis represents the error of vertical permeability calculated by the loss function after each training. The overall trend of the loss function curve acquired by three different neural networks is decreasing, which indicates that the prediction accuracy is improving with the increase of training times. In terms of the test set, the final error of the LSTM model is basically stable at around 51 mD, demonstrating unsatisfactory prediction performance. The loss function curve obtained by the BP neural network shows a strong volatility, and the loss is finally fixed at 24.18 mD, illustrating that the network has higher prediction accuracy. Compared with the above two methods, the loss of the prediction result by the CNN model is the smallest, as low as 12.71 mD. In general, both CNN and BP network have shown better predictive performance on vertical permeability, while the accuracy of the LSTM model is not ideal.
By observing the decline curve of the loss function, we can clearly grasp the change of the average error from the overall samples. The neural network was trained utilizing the training set data and had the memory for the data. Therefore, the test set data was used to verify the accuracy of the model prediction. Then, for the prediction of each sample data in the test set, a crossplot drawn from the true permeability of the test set and the predicted vertical permeability generated by the different neural networks can be used to verify the accuracy of the model.
As shown in Figure 9, the horizontal axis is the permeability from each sample in the test set, and the vertical axis is the permeability calculated by the machine learning methods. The red line represents the straight line y = x. Ideally, the intersections of the predicted and true values should be located on this red line. As far as the BP neural network is concerned, most of the points are concentrated around the red line, but some points are still highly scattered, indicating that the model is inaccurate in some cases. It can be seen from the CNN model that almost all the scattered points are distributed appropriately near the red line, showing better prediction performance on vertical permeability.
Based on the above crossplots, we can intuitively grasp the prediction capabilities of the different machine learning models. Next, in order to describe the error of each point more accurately, a relative deviation diagram is drawn by calculating the relative error of all sample data in the test set. Figure 10 is the graph of relative deviations, where the abscissa is the sample permeability and the ordinate is the relative deviation between the model predicted value and the samples. The blue dashed line indicates y = 0, which represents the ideal situation where the predicted value is exactly equal to the sample. For CNN, the relative deviation of most predicted values is less than 20%, mainly concentrated near the line y = 0, which reflects a high prediction accuracy. In addition, with the increase of permeability, the relative error of the CNN also gradually decreases. In contrast, the relative deviations of the BP model are larger and more scattered. The detailed evaluation results of the three models are listed in Table 2.
From the evaluation results shown in Table 2, the ARD and AARD calculated from CNN's prediction results are only 6.28% and 11.51%, respectively. At the same time, the RMSE can be controlled at 12.71 mD, which is the lowest compared to the BP and LSTM neural networks. It can be concluded that the unique feature extraction method for the data in the CNN model plays an important role in inverting the vertical permeability of the reservoir. Moreover, the LSTM model, which performs very well in dealing with time series problems, was given high expectations in this experiment because the input data of this study is related to time series. However, the prediction effect of this model is extremely unsatisfactory. This is because the output of the LSTM model also needs to be relevant to the time series. Finally, it can be seen from the comparison of time that after the machine learning model is trained, the time required for prediction is extremely short, only about 1 second.
Prediction of Permeability Contrast.
Permeability contrast describes the ratio of the maximum value to the minimum value in the vertical permeability of the reservoir, which is of great significance for oil field development. The above CNN model can invert the permeability of the different layers of the reservoir for each sample from the test set. A crossplot is drawn to verify the accuracy of the model's prediction of the permeability contrast.
As shown in Figure 11, the abscissa is the permeability contrast calculated for each sample of the test set, and the ordinate is the corresponding contrast predicted by the CNN model. Almost all intersections are located close to the line y = x, with a deviation of below 10%. At the same time, we calculated the AARD and RMSE of the permeability contrast, which are 9.58% and 0.534, respectively, further illustrating the excellent performance of the CNN model in predicting the permeability contrast.
Conclusion
This paper proposed an alternative method for predicting the vertical heterogeneity of reservoirs through machine learning based on dynamic production data. First, numerical simulation techniques were adopted to obtain dynamic production data under different heterogeneous reservoir conditions. Next, different neural network models were established and trained to capture the characteristics between dynamic data and vertical permeability. Finally, reservoir permeability can be accurately inverted by the trained machine learning models. Through a comparative analysis of the prediction results, the following conclusions can be drawn. The machine learning model showed excellent predictive performance on vertical permeability, with an RMSE of 12.71 mD, effectively estimating the permeability of the entire reservoir rather than at a certain position, in contrast to traditional methods. On the basis of model validation, the overall AARD of the predictive results obtained by the CNN method was controlled at 11.51%, which was lower in calculation error than the BP and LSTM networks. At the same time, the prediction time of the three neural networks is extremely short, at about 1 second. Therefore, CNN can be selected as the optimal model through the comprehensive analysis of accuracy and prediction time. Finally, the machine learning method can also be utilized to predict the permeability contrast with a deviation of below 10%, showing a high accuracy under diverse heterogeneous reservoir conditions.
Data Availability
The manuscript is a self-contained data article; the entire data used to support the findings of this study are included within the article. If any additional information is required, this is available from the corresponding author upon request to songhongqing@ustb.edu.cn. | 6,307 | 2020-08-12T00:00:00.000 | [
"Computer Science"
] |
Predicting Value of ALCAM as a Target Gene of microRNA-483-5p in Patients with Early Recurrence in Hepatocellular Carcinoma
The long-term survival rate of hepatocellular carcinoma (HCC) is poor. One of the reasons for the poor rate of survival is the high rate of recurrence caused by intrahepatic metastasis that adversely affects long-term outcome. Many studies have indicated that microRNAs play an important role in HCC, but there has been no research on the clonal origins of recurrent HCC (RHCC) by analyzing microRNAs. In the present study, we found that miR-483-5p was significantly upregulated in RHCC tissues of short-term recurrence (≤ 2 years) by miRNA microarray screening, and can significantly promote migration and invasion of HCC cells in vitro and increase intrahepatic metastasis in nude mice in vivo. Furthermore, we demonstrated that activated leukocyte cell adhesion molecule (ALCAM), which significantly suppressed migration and invasion of HCC cells, was a direct target of miR-483-5p, and that the re-introduction of ALCAM expression could antagonize the promoting effects of miR-483-5p on the capacity of HCC cells for migration and invasion. In addition, the expression level of ALCAM was negatively correlated with microvascular invasion and tumor size, both recognized as prognostic factors. The cases that were negative for ALCAM expression had a shorter time to recurrence than positive cases, and univariate and multivariate survival analyses showed that ALCAM was an independent risk factor for HCC recurrence. qRT-PCR and Western blotting showed that the expression of EMT-related genes (MMP-2, MMP-9, E-cadherin and vimentin) significantly changed as a result of knockdown or overexpression of ALCAM, and ALCAM was significantly associated with EMT in HCC. These results suggest that the miR-483-5p/ALCAM axis is an important regulator of invasion and metastasis and a biomarker for recurrence risk assessment of HCC.
INTRODUCTION
Hepatocellular carcinoma (HCC) is the fifth most frequently diagnosed cancer, and the third leading cause of cancer death in the world. An estimated 782,500 new liver cancer cases and 745,500 deaths occurred worldwide during 2012, with China alone accounting for approximately 50% of the total number of cases and deaths (Torre et al., 2015). Surgical resection remains the first choice of treatment of HCC; however, the long-term survival rate is poor. One of the reasons for the poor rate of survival is the high rate of recurrence caused by intrahepatic metastasis that adversely affects long-term outcome. A greater understanding of the molecular mechanisms underlying recurrence and intrahepatic metastasis of HCC may have a significant effect on improving prognosis and systematic treatment of this disease.
MicroRNAs (miRNAs) are small, noncoding RNAs of 21 to 25 nucleotides in length that direct post-transcriptional regulation through specific recognition of short sequences of target messenger RNAs (mRNAs), often in the 3′ untranslated region (3′-UTR), causing either target mRNA degradation or inhibition of translation through assembling the RNA-induced silencing complex. Accumulating evidence suggests that the levels of miRNAs are deregulated and can play an important role in the evolution and progression of HCC, especially by affecting invasion and metastasis of tumor cells. In HCC, Let-7g and miR-122 inhibit cell migration and intrahepatic metastasis (Tsai et al., 2009; Ji et al., 2010), whereas miR-143, miR-16, miR-30a, let-7e and miR-204 can significantly promote HCC metastasis (Zhang et al., 2009; Zeng et al., 2012). However, the variation of miRNAs in recurrent HCC (RHCC) caused by intrahepatic metastasis is rarely reported, and the underlying molecular mechanisms for intrahepatic metastasis remain unclear.
We selected postoperative RHCC cases, which were divided into two groups: short-term recurrence (≤2 years) and long-term recurrence (>2 years). The miRNA expression profiles were determined with a miRNA array, and five miRNAs were found to be different between the two groups. miR-483-5p was the most significantly different miRNA. The function and mechanism of miR-483-5p in RHCC are not clear; hence, we performed gain- and loss-of-function studies to determine the biological roles of miR-483-5p in this study, and we integrated bioinformatics predictions, expression datasets, and luciferase reporter assay results to reveal its underlying molecular mechanism in HCC. We found that miR-483-5p promotes tumor invasion in HCC, and activated leukocyte cell adhesion molecule (ALCAM) is characterized as a direct and functional target of miR-483-5p in HCC cells. These findings indicate that the miR-483-5p/ALCAM axis is an important regulator of intrahepatic metastasis of HCC and can serve as a prognostic marker and a basis for individualized treatment.
Human Specimens
All RHCC cases used in this study were obtained during liver resections performed in the Eastern Hepatobiliary Surgery Hospital (Shanghai, China) from 2000 to 2012. Microsatellite LOH was detected in the primary-recurrence tissue samples to determine the clonal origin of RHCC, and the monoclonal recurrent cases were divided into two groups according to the interval between the primary tumor and recurrence: short-term recurrence (less than 2 years [G1]) and long-term recurrence (more than 2 years [G2]). These samples were obtained with informed consent according to the guidelines set forth by the Eastern Hepatobiliary Surgery Hospital Research Ethics Committee.
miRNA Extraction and Array Screening
Total miRNA was extracted from formalin-fixed paraffin-embedded (FFPE) tissues using the miRNeasy FFPE Kit (QIAGEN, Germany). The miRNA expression profiles were determined with an Agilent miRNA Microarray (Agilent Human miRNA V16.0), and eight cases of RHCC were chosen for miRNA array testing (four cases in each group). For miRNA detection, mature miRNA was reverse-transcribed and quantified with TaqMan® RT primers and probe, and normalized to U6 small nuclear RNA expression, using predesigned TaqMan assays (Applied Biosystems, Foster City, California, USA).
RNA Extraction and Real-Time Quantitative Polymerase Chain Reaction Analysis
Total RNA in cells was extracted by applying the TRIzol® reagent (Invitrogen, Carlsbad, California, USA). Reverse-transcribed complementary DNA was synthesized with the PrimeScript® RT Reagent Kit (TaKaRa, Tokyo, Japan). Quantitative polymerase chain reaction analyses were performed with the LightCycler® 480 SYBR Green I Master (Roche, Welwyn Garden, Swiss). To detect mature miRNA, RNA was reverse-transcribed and quantified with TaqMan® RT primers and probe, and normalized to U6 small nuclear RNA expression, using predesigned TaqMan assays (Applied Biosystems).
Oligonucleotide Transfection
miR-483-5p mimics and ALCAM small interfering RNA (siRNA) duplexes were designed and synthesized by RiboBio (Guangzhou, China), and miR-483-5p inhibitors were designed and synthesized by GenePharma (Shanghai, China). Cells were transfected in individual wells of six-well plates with a mimic, an inhibitor, or an siRNA pool (three siRNAs were mixed in an equimolar ratio) targeting miR-483-5p by using Lipofectamine® 2000 at a final concentration of 50 nM. At 48 h post-transfection, the cells were harvested for the assays described later.
Lentivirus Packaging and Infection
Lentivirus particles were harvested 48 h after pWPXL-483-5p or pWPXL-ALCAM cotransfection with the packaging plasmid psPAX2 and the vesicular stomatitis virus G (VSV-G) envelope plasmid pMD2.G (psPAX2 and pMD2.G were gifts from Dr. Didier Trono) into HEK293T cells using Lipofectamine® 2000. Cells were infected with the resultant recombinant lentivirus in the presence of 6 µg/mL polybrene (Sigma-Aldrich).
In Vitro Cell Proliferation, Migration, and Invasion Assays
One thousand cells were placed in a fresh 96-well plate in triplicate and maintained in DMEM containing 10% fetal bovine serum for 5 days, and cell proliferation was measured with the Cell Counting Kit-8 (Dojindo, Kumamoto, Japan) following the manufacturer's instructions.
For transwell migration assays, 5 × 10^4 cells were plated in the top chamber of each insert (BD Biosciences) with a noncoated membrane. For invasion assays, 1 × 10^5 cells were added to the upper chamber with 150 µg Matrigel (BD Biosciences). For both assay types, 800 µL of medium supplemented with 10% fetal bovine serum was injected into the lower chambers. After harvest, the inserts were fixed and stained in a dye solution containing 0.1% crystal violet and 20% methanol. Imaging of cells adhering to the lower membrane of the inserts was performed with an IX71 inverted microscope (Olympus, Tokyo, Japan), and five randomly selected fields were quantified.
In Vivo Liver Orthotopic Transplantation
For in vivo metastasis assays, 2 × 10^6 SMMC-7721 cells infected with miR-483-5p or mock vector, respectively, were suspended in 40 µL serum-free DMEM/Matrigel (1:1) for each mouse. Through an 8-mm transverse incision in the upper abdomen under anesthesia, each nude mouse (12 female BALB/c-nu/nu in each group) was orthotopically inoculated in the left hepatic lobe with a microsyringe. After 10 weeks, the mice were sacrificed, and their livers and lungs were dissected, fixed with phosphate-buffered neutral formalin, and prepared for standard histological examination. The mice were manipulated and housed according to protocols approved by the Shanghai Medical Experimental Animal Care Commission.
Luciferase Assays
HEK 293T cells were cultured in 96-well plates and cotransfected with 20 ng of the psiCHECK-2-ALCAM-3′-UTR vector and either 5 pmol of the miR-483-5p mimics or the control mimics. After 48 h of incubation, firefly and Renilla luciferase activities of the cell lysates were measured using the Dual-Luciferase Reporter Assay System (Promega).
Tissue Microarray, Immunohistochemistry, and Scoring
Anti-ALCAM was purchased from Abcam (Cambridge, UK). The tissue microarray was constructed as described previously (Zhu et al., 2008). Core samples were obtained from representative regions of each tumor based on hematoxylin and eosin staining. Duplicate 1.5-mm cores were taken from different areas of the same tissue block for each case (intratumoral tissue and peritumoral tissue). Serial sections (4 µm thick) were placed on slides coated with 3-aminopropyltriethoxysilane. The immunohistochemistry analysis was carried out as described previously (Lu et al., 2011). The primary monoclonal antibody used was rabbit anti-human (1:100). The positive staining of ALCAM was located at the cellular membrane and the immunostaining intensities were scored semiquantitatively as follows: 0 = negative; 1 = positive. All samples were anonymously and independently scored by two investigators (XY-Lu and WM-Cong). In case of disagreement, the slides were reexamined and consensus reached by the observers. The intensity of immunostaining was scored on the basis of the percentage of positive tumor cells: 0 (-) (0-5%), 1 (+) (6-25%), 2 (++) (26-50%), and 3 (+++) (>51%) for ALCAM.
Statistical Analysis
Results are presented as mean ± standard error of the mean from at least three independent experiments. Unless otherwise stated, differences between two groups or among more than two groups were determined using Student's t-test or one-way analysis of variance, respectively, followed by Dunnett's multiple-comparison test. The cumulative recurrence and survival rates were determined using the Kaplan-Meier method (log-rank test). The Cox multivariate proportional hazards regression model was used to determine the independent factors that influence survival and recurrence based on the investigated variables. Values of p < 0.05 were considered statistically significant. Statistical analyses were performed using GraphPad Prism, version 6.00 for Windows (GraphPad Software, La Jolla, California, USA) and IBM SPSS Statistics 18.0 (Armonk, NY, USA).
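The analyses above were performed in GraphPad Prism and SPSS; for readers who prefer a scripted workflow, equivalent Kaplan-Meier, log-rank, and Cox analyses can be run in Python with the lifelines package, as in the sketch below. The file name and column names ("ttr_months", "recurrence", "alcam_positive", etc.) are hypothetical stand-ins for the per-patient data described in this study.

```python
# Hedged sketch of the survival analyses using the Python lifelines package
# (an alternative to Prism/SPSS); all column names and the file are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("hcc_cohort.csv")  # hypothetical per-patient table

# Kaplan-Meier estimates and log-rank test for time to recurrence by ALCAM status.
pos, neg = df[df["alcam_positive"] == 1], df[df["alcam_positive"] == 0]
kmf_pos, kmf_neg = KaplanMeierFitter(), KaplanMeierFitter()
kmf_pos.fit(pos["ttr_months"], event_observed=pos["recurrence"], label="ALCAM positive")
kmf_neg.fit(neg["ttr_months"], event_observed=neg["recurrence"], label="ALCAM negative")
print(kmf_pos.median_survival_time_, kmf_neg.median_survival_time_)
print(logrank_test(pos["ttr_months"], neg["ttr_months"],
                   event_observed_A=pos["recurrence"],
                   event_observed_B=neg["recurrence"]).p_value)

# Multivariate Cox proportional hazards model for independent predictors of TTR.
cph = CoxPHFitter()
cph.fit(df[["ttr_months", "recurrence", "alcam_positive",
            "tumor_size_cm", "vascular_invasion", "afp_elevated"]],
        duration_col="ttr_months", event_col="recurrence")
cph.print_summary()
```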
miR-483-5p Is Upregulated in RHCC Patients with Short-Term Recurrence
To find differentially expressed miRNAs and potential predictive markers of HCC recurrence, we first determined the expression of miRNAs with a miRNA array between the short-term recurrence (G1, ≤ 2 years) and long-term recurrence (G2, > 2 years) groups, and five upregulated miRNAs (hsa-miR-133a, hsa-miR-4251, hsa-miR-4279, hsa-miR-483-5p, hsa-miR-642b-3p) were identified (Figure 1A). To validate the miRNA array data, real-time quantitative polymerase chain reaction analysis was performed using RHCC tissue. Results showed that miR-483-5p was the most significantly different miRNA between the two groups, and it was selected for further studies (Figure 1B). These results suggested that upregulation of miR-483-5p is related to the short-term recurrence of HCC and may play an important role in the metastasis and recurrence process.
miR-483-5p Promotes HCC Cell Invasion and Metastasis in Vitro and in Vivo
To better understand the biological functions of miR-483-5p expression in HCC recurrence, we synthesized miR-483-5p mimics and an inhibitor and constructed a lentivirus vector expressing miR-483-5p for functional experiments. We transiently transfected the miR-483-5p mimic into SK-Hep1 and SMMC-7721 cells (Figures 1C,D), and transfected the miR-483-5p inhibitor into SMMC-7721 and Huh7 cells. In cell proliferation assays, ectopic expression of miR-483-5p had no obvious effects on HCC cell proliferation (Figures S1A,B). Because the expression of miR-483-5p was highly associated with the recurrence and metastasis of HCC in the preliminary miRNA microarray results, we wondered whether miR-483-5p could play an important role in HCC cell invasion and metastasis. Transwell assays without Matrigel showed that miR-483-5p mimics dramatically promoted the migration of SK-Hep1 and SMMC-7721 cells when compared with the indicated controls (Figure 2A). Transwell assays with Matrigel demonstrated that miR-483-5p mimics dramatically promoted the invasive capacities of these two cell lines when compared with the vector control groups (Figure 2B). Furthermore, the migration and invasion of Huh7 and SMMC-7721 cells decreased when endogenous miR-483-5p was silenced with an inhibitor (Figures 2C,D). Our results illustrated that miR-483-5p can significantly enhance HCC cell migration and invasion in vitro.
To further reveal the role of miR-483-5p in tumor invasion and metastasis in vivo, we used a lentivirus system to establish a stable SMMC-7721 cell line with miR-483-5p overexpression, and SMMC-7721 cells with stable GFP overexpression served as a control. These cell lines were designated Lenti-miR-483-5p and Lenti-GFP, respectively. Next, Lenti-miR-483-5p and Lenti-GFP cells were transplanted into the livers of nude mice. The HCC cell line SMMC-7721 has been employed in in vivo metastasis assays in nude mice, including orthotopic liver implantation for intrahepatic metastasis and distant metastasis, because it has relatively strong in vitro invasive properties (Liu, 2008).
Orthotopic liver implantation results showed that the numbers of metastatic nodules in the liver and lung were dramatically increased in the Lenti-miR-483-5p group compared with the vector control group after 10 weeks. Intrahepatic metastasis was observed in eight mice of the Lenti-miR-483-5p group and four mice of the vector control group, and pulmonary metastasis occurred in three Lenti-miR-483-5p mice compared with none in the vector control group (30% vs. 0, p = 0.030). Interestingly, additional invasive features were observed in Lenti-miR-483-5p mice, along with common intrahepatic metastatic nodules (Figure 3A), such as multiple intrahepatic metastatic nodules and metastasis distant from the primary lesion (Figure 3B), vascular invasion close beside a larger vein branch (Figure 3C), metastatic nodules invading smooth muscle tissue (Figure 3D), and distant metastasis in the lung (Figure 3E). The number of metastatic nodules in the livers of the Lenti-miR-483-5p group is significantly higher than that of the vector control group (Figure 3F). These results suggest that miR-483-5p is a positive invasive and metastatic regulator for HCC.
miR-483-5p Downregulates ALCAM Expression by Directly Targeting Its 3′-UTR
To elucidate the underlying molecular mechanism through which miR-483-5p initiates HCC cell invasion and metastasis, we used the public prediction algorithm TargetScan (http://www.targetscan.org) to explore potential targets of miR-483-5p. A total of 55 candidate mRNA targets were found (Table S1). When the candidate genes were detected in the two Lenti-miR-483-5p cell lines, qPCR (Figure 4A) and western blot (Figures 4B,C) analyses demonstrated that the expression of miR-483-5p was negatively correlated with ALCAM at both the mRNA and protein levels. To determine whether ALCAM is regulated by miR-483-5p through direct binding to its 3′-UTR, a series of 3′-UTR fragments, including the full-length 3′-UTR and the binding site (wild-type and mutant) (Figure 4D), were constructed and inserted into the region immediately downstream of the luciferase reporter gene.
ALCAM Inhibits HCC Cell Migration and Invasion
To better understand the potential role of ALCAM in miR-483-5p-mediated tumor invasion and metastasis, we performed gain-of-function and loss-of-function analyses. SK-Hep1 and SMMC-7721 cells were transfected with ALCAM siRNAs or a negative control siRNA (Table S2). Results from transwell assays showed that knockdown of ALCAM dramatically promoted cell migration (Figure 5A) and invasion (Figure 5B) in both SK-Hep1 and SMMC-7721 cells. However, knockdown of ALCAM had no obvious effects on HCC cell proliferation in either cell line (Figures S1C,D). These results were consistent with the promotion effects on HCC cells observed with miR-483-5p mimics or stable overexpression.
Restoration of ALCAM Inhibits miR-483-5p-Mediated HCC Cell Migration and Invasion
Because si-ALCAM can promote HCC cell migration and invasion, and miR-483-5p can post-transcriptionally regulate the expression of ALCAM by directly binding to its 3′-UTR, we hypothesized that the downregulation of ALCAM directly mediates miR-483-5p-initiated HCC invasion and metastasis. To further address this critical issue, we constructed a lentivirus plasmid containing the ALCAM complementary DNA sequence without its 3′-UTR, which enabled constitutive ALCAM expression without the potential miR-483-5p binding sites. Reintroduction of ALCAM significantly reversed miR-483-5p-induced promotion of migration and invasion (Figures 5C,D). In summary, these data demonstrate that ALCAM is a direct and functional target for miR-483-5p.
FIGURE 3 | miR-483-5p promotes HCC cell invasion and metastasis in vivo. (A-F) show intrahepatic metastasis and distant metastasis (lung) in Lenti-miR-483-5p mice. (A) Orthotopic primary liver tumor (white arrows) and intrahepatic metastatic nodules close beside the tumor (black arrow). (B) Distant microvascular invasion in another liver lobe (black arrow). (C) Intrahepatic metastatic nodules close beside a larger vein branch (black arrow). (D) Metastatic nodules invading smooth muscle tissue near the liver (black arrow). (E) Distant metastasis in the lung. (F) The numbers of metastatic nodules in the livers of each mouse were counted; the statistical significance is labeled using the χ2 test. *p < 0.05, HE staining, ×40.
Clinical Significance and Prognostic Analysis of ALCAM
To determine the clinicopathologic significance of ALCAM in HCC, we obtained FFPE HCC tissues from 129 patients with HCC who had undergone curative resection, and a tissue microarray was constructed for further analyses. The positive staining of ALCAM was located at the cellular membrane or in the cytoplasm in immunohistochemistry stains; analysis showed that 59.3% of cases were positive on the cellular membrane, and 12.5% of cases were positive in the cytoplasm. The ALCAM level was determined by immunohistochemistry on the tissue microarray, and the patients were divided into two groups (negative and positive) according to the membrane expression of ALCAM in the HCC tissue (Figures 6A,B). Clinicopathological data were statistically analyzed between the two groups. We found that the expression level of ALCAM was negatively correlated with microvascular invasion and tumor size. However, ALCAM overexpression was correlated with liver cirrhosis (Table 1).
In addition, we evaluated the clinical relevance of ALCAM expression to prognosis in the cohort. At the time of last follow-up, of the 129 patients studied, 66 had tumor recurrence and 52 had died. Kaplan-Meier analysis showed that the 1-, 3-, and 5-year survival rates for ALCAM-positive cases were 75.8%, 60.1%, and 54.1%, respectively, and those for the ALCAM-negative cases were 80%, 56.3%, and 50.2%, respectively; the median overall survival (OS) time was 37.8 months in both the ALCAM-positive and ALCAM-negative groups (p > 0.05, log-rank test). The 1-, 3-, and 5-year recurrence rates of ALCAM-positive cases were 16.4%, 31.3%, and 50.1%, respectively, and the recurrence rates for the negative cases were 33.5%, 56.1%, and 67%, respectively. The median time to recurrence (TTR) was 54.9 months for patients who were positive for ALCAM and 27.2 months for ALCAM-negative cases (p < 0.05, log-rank test, Figure 6C). These data indicate that recurrence-free survival was poorer in patients with ALCAM-negative expression than in those with ALCAM-positive expression.
To test whether the expression levels of ALCAM were independent of other predictive variables, we applied univariate and multivariate analyses using a Cox multivariate proportional hazard regression model with ALCAM expression and clinicopathologic factors (such as age, sex, hepatitis B virus, tumor size, vascular invasion, and tumor differentiation) as covariates. Univariate analysis showed that hepatitis B e-antigen, hepatitis B e-antibody (HBeAb), and the expression of ALCAM were the factors that were consistently significant for TTR, and HBeAb, alpha-fetoprotein (AFP), albumin, and vascular invasion were consistently significant for OS (Table S3). A multivariate statistical analysis revealed that ALCAM-negative cases harbored a 1.876-fold higher risk of cancer recurrence (p = 0.022, 95% confidence interval 1.096-3.211) than ALCAM-positive cases, and ALCAM was an independent prognostic factor for TTR ( Table 2). HBeAb, vascular invasion, and AFP were independent prognostic factors for OS.
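The Kaplan-Meier and Cox analyses described above can be reproduced with standard survival-analysis tooling. The sketch below is a minimal illustration only, assuming a hypothetical patient table with columns such as ttr_months, recurrence and alcam_positive (these names and the file are placeholders, not part of the original study); it shows how a log-rank comparison and a multivariate Cox proportional hazards fit of the kind reported here are typically set up in Python with the lifelines package.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort table; all column names are illustrative and numerically encoded.
df = pd.read_csv("hcc_cohort.csv")  # ttr_months, recurrence (0/1), alcam_positive (0/1), age, sex, hbeab, afp, vascular_invasion

# Kaplan-Meier estimate and log-rank test for time to recurrence (TTR), stratified by ALCAM status.
pos, neg = df[df.alcam_positive == 1], df[df.alcam_positive == 0]
kmf = KaplanMeierFitter()
kmf.fit(pos.ttr_months, event_observed=pos.recurrence, label="ALCAM positive")
print(kmf.median_survival_time_)  # median TTR of the ALCAM-positive group
res = logrank_test(pos.ttr_months, neg.ttr_months,
                   event_observed_A=pos.recurrence, event_observed_B=neg.recurrence)
print(res.p_value)

# Multivariate Cox proportional hazards model; the hazard ratio for ALCAM status
# corresponds to the fold-change in recurrence risk quoted in the text.
cols = ["ttr_months", "recurrence", "alcam_positive", "age", "sex", "hbeab", "afp", "vascular_invasion"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="ttr_months", event_col="recurrence")
cph.print_summary()  # hazard ratios with 95% confidence intervals and p-values
```

The same pattern, with overall survival as the duration and death as the event, gives the OS model discussed in the next paragraphs.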
The epithelial-mesenchymal transition (EMT) is a key step in cancer recurrence and metastasis, by which epithelial cells lose their cell polarity and cell-cell adhesion and gain migratory and invasive properties to become mesenchymal cells. The invasion and metastasis of HCC are closely related to tumor cell EMT; a previous study showed that a decrease in ALCAM expression was accompanied by a significant upregulation of MMP-2 expression in breast cancer (Jezierska and Motyl, 2009). When the expression of EMT molecules was examined in the HCC cell lines by qRT-PCR and western blotting, we observed that the expression of EMT-related genes (MMP-2, MMP-9, E-cadherin and vimentin) changed significantly upon knockdown or overexpression of ALCAM (Figures 6D-H). These results further confirm that ALCAM is significantly associated with EMT in HCC cells and that the miR-483-5p-ALCAM axis can affect the invasion and metastasis of HCC by regulating the EMT process of HCC cells.
DISCUSSION
HCC is a common malignant tumor with a high mortality rate. Currently, surgical resection is still the most effective treatment for HCC, but the prognosis for HCC patients is unsatisfactory because of biologic characteristics such as strong invasiveness, recurrence, and metastasis. More than 90% of deaths in cancer patients are related to recurrence and metastasis; the 5-year recurrence rate is as high as 60% to approximately 90% after operation, which has restricted the long-term efficacy of surgical resection (Cong, 2016). Recurrence also occurs in the short term in clinical studies, which seriously affects the disease-free survival time and the treatment effect. Molecular pathology studies on the clonal origin of RHCC have shown two models of origin: intrahepatic metastasis and multicentric occurrence. The treatment method should be selected according to the recurrence type; theoretically, the curative effect of reoperation for multicentric occurrence cases is similar to that for a primary tumor, whereas interventional treatment is more suitable for intrahepatic metastasis cases. Clonal origin research, however, has limited value for the assessment of recurrence risk. To understand, predict, and reduce the recurrence of HCC, it is therefore necessary to identify markers that predict recurrence risk.
The expression level of miR-483-5p differs among HCC tissues with different recurrence intervals; the in vivo and in vitro experiments confirmed that high expression of miR-483-5p promotes the migration and invasion of HCC cells, and that this effect is mediated by direct targeting of ALCAM.
Currently, research on miR-483-5p in HCC is sparse. Studies have shown that increased expression of miR-483-5p in the plasma of patients with HCC suggests the occurrence of a tumor, so plasma miR-483-5p may serve as a potential biomarker for HCC (Cong, 2016). Some studies have shown that the expression of miR-483-5p in HCC tissues is lower than that in surrounding liver tissues. However, the expression level and function of miR-483-5p in RHCC have not been reported.
Our study shows that the expression of miR-483-5p is significantly correlated with the recurrence interval of RHCC, and the results suggest that the increased expression of miR-483-5p can shorten the recurrence interval of HCC, which can be used as a predictor of short-term recurrence.
Cell function experiments show that miR-483-5p can promote the migration and invasion of HCC cells but has no obvious effect on proliferation; these results suggest that miR-483-5p can affect the metastasis and recurrence of HCC by promoting the migration and invasion abilities of HCC cells.
In vivo experiments show increased intrahepatic metastasis and microvascular invasion after miR-483-5p upregulation; vascular invasion and metastasis are not limited to the orthotopic liver lobe but are also present in distant liver lobes, in muscle tissue adjacent to the liver, and even in the lung. These results suggest that miR-483-5p overexpression can enhance the invasion and metastasis of HCC cells, leading to short-term recurrence through this invasion-promoting effect.
MiR-483-5p is also involved in a variety of disease processes, such as facilitating the passage of infected lymphocytes across the blood-brain barrier, polycystic ovary syndrome, and liver fibrosis (Shen et al., 2013; Li et al., 2014; Chen et al., 2015; Shi et al., 2015; Curis et al., 2016), and it also plays an important role in the development of many tumors such as esophageal cancer, oral squamous cell carcinoma, and lung adenocarcinoma (Song et al., 2014; Li et al., 2016; Xu et al., 2016).
The effect of miR-483-5p is achieved through its target genes. We predicted several possible target genes by bioinformatics analysis and identified ALCAM, which the luciferase reporter assay confirmed to be a direct target gene of miR-483-5p. Our experiments show that there are definite binding sites between miR-483-5p and the 3′-UTR region of ALCAM, and through the functional restoration experiment we finally confirmed that miR-483-5p promotes the biologic functions of tumor cell migration and invasion by regulating its target gene, ALCAM, in HCC. ALCAM, a member of the immunoglobulin superfamily, is known to be involved in cancer cell proliferation and migration. The expression and function of ALCAM vary in different tumors: in some tumors it plays a cancer-promoting role, and in others a suppressive one. Some research indicated that it was highly expressed in gastric cancer and promoted the migration of tumor cells, and that membranous ALCAM expression in gastric cancer tissue and serum was associated with shorter overall survival (Ye et al., 2015; Erturk et al., 2016). ALCAM can promote cell migration, invasion, and metastasis in endometrial carcinoma and is a marker of recurrence in early-stage endometrioid endometrial cancer (Devis et al., 2016). However, low expression of ALCAM indicates poor prognosis in infantile neuroblastoma (Wachowiak et al., 2016). ALCAM is also expressed in serum and other body fluids, and changes in its expression can be used to predict tumor prognosis. Studies have shown that serum ALCAM levels are significantly increased in patients with breast cancer and can be used as a prognostic and predictive indicator (Al-Shehri and Abd El Azeem, 2015); the expression level of ALCAM in the urine of patients with bladder cancer can be used as a prognostic marker for survival (Egloff et al., 2016). In our cohort, positive staining of ALCAM was located at the cellular membrane or in the cytoplasm in immunohistochemistry, with 59.3% of cases positive on the cellular membrane and 12.5% positive in the cytoplasm; the clinical significance and prognostic value of membranous ALCAM staining were greater than those of cytoplasmic staining. Tissue microarray analysis showed that the 1-, 3-, and 5-year survival rates of ALCAM-positive cases were 75.8%, 60.1%, and 54.1%, respectively, and those of the ALCAM-negative cases were 80%, 56.3%, and 50.2%, respectively; the median survival time was 37.8 months in both groups, and the difference was not significant (p > 0.05). The 1-, 3-, and 5-year recurrence rates of ALCAM-positive cases were 16.4%, 31.3%, and 50.1%, respectively, and those of the negative cases were 33.5%, 56.1%, and 67%, respectively. The median TTR of the positive group was 54.9 months, significantly longer than the 27.2 months of the negative group (p < 0.01). Multivariate analysis showed that the risk of recurrence in ALCAM-negative patients was 1.876 times that of positive cases (p = 0.022, 95% confidence interval 1.096-3.211), and ALCAM-negative expression was an independent risk factor for postoperative recurrence. Factors associated with survival included HBeAb, albumin levels, venous invasion, and AFP, among which HBeAb, albumin levels, and venous invasion were independent risk factors for overall survival. ALCAM is also significantly associated with EMT in HCC cells, and the miR-483-5p-ALCAM axis can affect the invasion and metastasis of HCC by regulating the EMT process of HCC cells.
Our study describes the function and mechanism of miR-483-5p/ALCAM in RHCC. miR-483-5p promotes the migration and invasion of HCC cells, which leads to intrahepatic and distant metastasis and finally to postoperative short-term recurrence; ALCAM is an independent risk factor for HCC recurrence and can be used as a biomarker for postoperative recurrence risk assessment of HCC.
STATEMENT OF SIGNIFICANCE
The miR-483-5p/ALCAM axis is an important regulator of invasion and metastasis and a biomarker for recurrence risk assessment of HCC.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of "Committee on Ethics of Biomedicine, Second Military Medical University" with written informed consent from all subjects. All subjects gave written informed consent in accordance with the Declaration of Helsinki. The protocol was approved by the "Committee on Ethics of Biomedicine, Second Military Medical University."
AUTHOR CONTRIBUTIONS
W-MC, X-HH, X-YL, and ZC: designed the study; X-YL, ZC, X-YG, and DC: performed experiments; X-YL and W-MC: analyzed IHC data; JD and Y-JZ: helped to perform experiments; X-YL and W-MC: wrote the paper with comments from all authors.
| 6,960.8 | 2018-01-12T00:00:00.000 | ["Biology"] |
Onset of Darcy–Bénard convection under throughflow of a shear-thinning fluid
We present an investigation on the onset of Darcy–Bénard instability in a two-dimensional porous medium saturated with a non-Newtonian fluid and heated from below in the presence of a uniform horizontal pressure gradient. The fluid is taken to be of power-law nature with constant rheological index $n$ and temperature-dependent consistency index $\mu^{\ast}$. A two-dimensional linear stability analysis in the vertical plane yields the critical wavenumber and the generalised critical Rayleigh number as functions of dimensionless problem parameters, with a non-monotonic dependence on $n$ and with maxima/minima at given values of $\gamma$, a parameter representing the effects of consistency index variations due to temperature. A series of experiments are conducted in a Hele-Shaw cell of aspect ratio $H/b=13.3{-}20$ to provide a verification of the theory. Xanthan Gum mixtures (nominal concentration from 0.10 % to 0.20 %) are employed as working fluids with a parameter range $n=0.55{-}0.72$ and $\mu_{0}^{\ast}=0.02{-}0.10~\text{Pa}~\text{s}^{n}$. The experimental critical wavenumber corresponding to incipient instability of the convective cells is derived via image analysis for different values of the imposed horizontal velocity. Theoretical results for the critical wavenumber compare favourably with experiments, systematically underestimating their experimental counterparts by 10 % at most. The discrepancy between experiments and theory is more relevant for the critical Rayleigh number, with theory overestimating the experiments by a maximum factor less than two. Discrepancies are attributable to a combination of factors: nonlinear phenomena, possible subcritical bifurcations, and unaccounted-for disturbing effects such as approximations in the rheological model, wall slip, ageing and degradation of the fluid properties.
Introduction
Thermal instability of saturated porous media has been intensively investigated with analytical tools (for a survey see Rees (2000), Nield & Bejan (2013)) since the early studies of Horton & Rogers (1945) and Lapwood (1948), subsequently extended to include parallel horizontal flow (Prats 1966). Different combinations of boundary conditions are adopted in the literature for heat flux, temperature and permeability (Nield 1968); the fluid is usually taken to be Newtonian.
From an experimental viewpoint, Rayleigh-Bénard convection in porous media has been studied by means of the Hele-Shaw analog model (see, for example, Hartline & Lister (1977), Cherkaoui & Wilcock (2001) and Letelier et al. (2016)), originally developed for Newtonian fluids and recently extended to non-Newtonian power-law fluids (Longo, Di Federico & Chiapponi 2015;Ciriello et al. 2016): the porous medium is replaced with a small gap between two flat plates; this entails advantages and disadvantages. On one hand, experiments are easily prepared and the flow characteristics conveniently measured; on the other hand, simplifying assumptions on the structure of the simulated porous medium are needed. Direct experiments were performed simulating the porous medium with glass ballotini (see, for example, Lister (1990) and Keene & Goldstein (2015)). The onset of convection in viscoplastic fluids, including the effects of wall slip, was analysed by Métivier & Magnin (2011) and Darbouli et al. (2013); a more complex scenario, with Carbopol behaving like a single or a double-phase continuum, has been analysed in Métivier, Li & Magnin (2017).
Several measurement techniques have been used for detecting the onset of instability, such as shadowgraphy (for example, Darbouli et al. 2013), variation of thermal flux induced by convection (for example, Schmidt & Milverton 1935), visualisation through a pH-indicator (Hartline & Lister 1977), magnetic resonance imaging (Shattuck et al. 1995), digital particle image velocimetry (DPIV) (Kebiche, Castelain & Burghelea 2014), and holographic real-time interferometry (Koster & Müller 1982). Despite the accurate set-up and the sophisticated instruments used, the overall accuracy in detecting instabilities is usually quite limited. This is partly a consequence of the fact that the dimensionless groups relevant to describe the physical process of incipient convection involve numerous variables, and each of these has its own uncertainty. In addition, detecting an instability at its onset is, by definition, a challenge: instabilities are linear at the very beginning, and induce a tiny modulation of velocity, thermal flux or refraction index. Detection of instabilities, however, takes place in many cases during the nonlinear stage, whereas many theories refer to incipient linear instability.
For non-Newtonian fluids, additional complexities arise from measurement issues and uncertainty on rheological parameters, often in the presence of disturbing effects such as slipping, ageing and deterioration of the fluid under a prolonged thermal gradient. Further, most of the models adopted to describe non-Newtonian fluids are a strong simplification of the constituent equations, invariably referred to viscometric flow fields and subject to distortion in non-viscometric, complex three-dimensional flow fields. In this respect, there were some attempts to measure the rheological material properties in non-conventional rheometers, like the experimental apparatus to be used for the main experiments in this work (Celli et al. 2017).
The aim of this work is to compare theoretical predictions with experimental results for Rayleigh-Bénard convection of a non-Newtonian power-law fluid within a vertical porous layer heated from below and subject to horizontal cross-flow - a configuration common in several natural settings. The experimental set-up is composed of a Hele-Shaw cell, and correspondingly the basic solution and linear stability analysis are derived for a two-dimensional geometry. The manuscript is structured as follows. Section 2 includes the mathematical formulation of the problem and the linear stability analysis. The experimental set-up and the measurement techniques are described in § 3, while § 4 illustrates the experimental results. The discussion and the conclusions are presented in § 5, together with some perspectives for future work. Supplementary material available at https://doi.org/10.1017/jfm.2020.84 contains details on uncertainties of the experimental results.
Governing equations
The theoretical simulation is focused on the two-dimensional linear stability analysis of a fluid-saturated horizontal porous layer of height H, see figure 1. The horizontal boundary planes are assumed to be impermeable and isothermal, at temperature T_0 + ΔT (lower boundary) and T_0 (upper boundary). Darcy's law generalised for non-Newtonian power-law fluids is employed here together with the Oberbeck-Boussinesq approximation. Local thermal equilibrium between the solid and the fluid phase is assumed, and a convection-diffusion energy balance is used to model the heat transfer. The balance equations and the boundary conditions can be written in the dimensionless form (2.1), together with the definition of the Rayleigh number Ra. Here, u is the seepage velocity having Cartesian components (u, v), and (x, y) are the Cartesian coordinates, with y denoting the vertical axis; T is the temperature, μ* is the consistency index of the fluid and μ*_0 (SI unit Pa s^n) denotes the value of μ* evaluated at the reference temperature T_0, n is the power-law index, K is the permeability (with SI unit m^(n+1)), ρ_0 is the fluid density at the reference temperature T_0, g is the gravitational acceleration (of modulus g), β is the thermal expansion coefficient of the fluid, σ is the ratio between the average volumetric heat capacity of the porous medium and the volumetric heat capacity of the fluid, and the average thermal diffusivity of the saturated porous medium also enters the scaling. We assume here a dependence of η on T (Nowak, Gryglaszewski & Stacharska-Targosz 1982; Celli et al. 2017) in which γ is a non-negative dimensionless parameter that tunes the departure from the constant consistency index model and ξ is a fluid property (with unit K^(-1)) modulating the slope of the temperature change. In passing, we note that an exponential dependence is modelled in Darbouli et al. (2016). A comparison between the two models is reported in the supplementary material, showing that both models can be used, with a similar agreement between experimental data and interpolating function.
Basic solution and stability analysis
The stability analysis has to be performed on a stationary solution of the balance equations (2.1). This solution is the so-called basic state, here denoted by the subscript b. The stationary solution of (2.1) considered here as the basic state is given by (2.6) and (2.7), where u_b is the basic state velocity vector and the Péclet number Pe is defined in terms of the mean velocity of the basic flow. As the experimental set-up is composed of a Hele-Shaw cell, the basic state in (2.6) and (2.7) is two-dimensional. Accordingly, in the following, a two-dimensional linear stability analysis in the plane (x, y) is performed. The basic state is thus perturbed by small-amplitude disturbances, (2.8), where U = (U, V) is the perturbation velocity, Θ is the perturbation temperature, and ε is a parameter such that |ε| ≪ 1. The streamfunction formulation, Ψ(x, y), can be employed to simplify the problem, with U = ∂Ψ/∂y and V = −∂Ψ/∂x. The perturbations can now be expressed in terms of normal modes, (2.9). One can substitute (2.8) and (2.9) into (2.1) to obtain the eigenvalue problem (2.10), where the primes denote derivatives with respect to y and G(y) = η(T_b(y)). We note that, due to (2.10), we have dη(T_b)/dT_b = −G′(y).
Derivation of critical wavenumber and Rayleigh number
The solution of (2.10) is obtained numerically by employing the same procedure used in Celli et al. (2017). The critical values of the Rayleigh number and of the wavenumber are reported in figures 2 and 3 as functions of the parameter γ for different values of the power-law index n and of the Péclet number. As Pe increases, these figures illustrate the non-monotonic behaviour of Ra_c and k_c versus γ.
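As a concrete illustration of how a neutral-stability eigenvalue problem of this kind is solved numerically, the sketch below treats only the classical Newtonian limit with no throughflow (n = 1, γ = 0, Pe = 0), for which the well-known benchmark is Ra_c = 4π² ≈ 39.48 and k_c = π. It is not the procedure of Celli et al. (2017) and does not include the temperature-dependent coefficient G(y) or the Péclet-number terms of (2.10); the finite-difference grid and the scan range in k are choices of this sketch, not of the original study.

```python
import numpy as np

def neutral_rayleigh(k, N=200):
    """Smallest neutral Rayleigh number of the Darcy-Benard problem
    (Newtonian fluid, no throughflow) at a given wavenumber k.
    Reduced eigenproblem: (D^2 - k^2)^2 H = Ra k^2 H, with H = H'' = 0 at y = 0, 1."""
    h = 1.0 / (N + 1)
    # Second-derivative matrix on interior nodes (Dirichlet conditions built in).
    D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
          + np.diag(np.ones(N - 1), -1)) / h**2
    L = D2 - k**2 * np.eye(N)     # Helmholtz-type operator (D^2 - k^2)
    M = (L @ L) / k**2            # (D^2 - k^2)^2 / k^2
    eig = np.linalg.eigvalsh(M)   # M is symmetric, so eigenvalues are real
    return eig[eig > 0].min()

ks = np.linspace(1.0, 6.0, 501)
ras = np.array([neutral_rayleigh(k) for k in ks])
i = ras.argmin()
print(f"k_c  ~ {ks[i]:.3f}  (exact: pi = {np.pi:.3f})")
print(f"Ra_c ~ {ras[i]:.2f} (exact: 4*pi^2 = {4 * np.pi**2:.2f})")
```

For the shear-thinning problem with throughflow, the same scan over k would be repeated with the full operator of (2.10), yielding curves of the type shown in figures 2 and 3.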
Experimental set-up
Validation of the model was performed by comparing the theoretical results of § 2 with experimental results obtained with an analogue model consisting of a Hele-Shaw cell 80 cm long and 4 cm high; its design is similar to Hartline & Lister (1977), see figure 4(a,b). A frame of aluminium held together two polycarbonate windows 0.8 cm thick, with a gap variable between 0.1 and 0.3 cm controlled by aluminium shims. The upper and lower frames were cooled and heated, respectively, by a water flux within PVC pipes inserted into the frame. Temperature control within 0.1 K was achieved via two thermostatic baths. The temperature was measured with two probes inserted in the upper and lower sides of the frame: PT100 4-wire AA 1/3 DIN with a nominal accuracy of 0.1 K at 273 K. The probes were calibrated in a limited range of temperature with a maximum uncertainty of 0.08 K; hence the error on the temperature difference between the two frames is 0.12 K. The cell is insulated with foam rubber in order to avoid lateral dispersion, except for a window in the central section to allow observation of the fluid flow. The uniform horizontal velocity component, representing the basic flow, was obtained by injecting fluid with a syringe pump in one of the two wells, with overflow in the other well. The visualisation of the flow field was obtained with a tracer having the same composition as the fluid within the cell, i.e. water and Xanthan Gum (nominal concentration in the range 0.10 %-0.20 %; the actual concentration is slightly smaller because the mixture was filtered to eliminate some lumps for homogeneity), added with aniline dye. The tracer was injected through several small holes with diameter less than 0.1 mm in a PVC pipe inserted in one of the polycarbonate windows, positioned at mid-height of the window. The pipe was connected to two small tanks, periodically refilled with tracer fluid, in order to guarantee a constant and uniform injection.
Table 1 legend: here, ρ_0 is the reference mass density at a temperature of 298 K, μ*_0 is the fluid consistency index evaluated at the reference temperature T_0, n is the fluid behaviour index, b is the gap width, u_b is the average basic horizontal velocity, T_0.5 is the temperature in the mid section z/H = 0.5 during experiments (minimum/maximum value), Pe is the Péclet number, Ra_c,exp is the experimental critical Rayleigh number, and k_c,exp is the dimensionless critical wavenumber. Expt., experiment.
The images were acquired with a USB video camera (1280 pixel × 960 pixel) with microscope lenses, with a field of view (FOV) of ≈12 × 10 cm². The FOV was illuminated by a lamp with rays made parallel through a lens, in order to increase resolution and sharpness. The video camera acquired a single frame every 30-60 s. The overall process was controlled by a PC with a DAQ board, storing temperature measurements at a data rate of 10 Hz and controlling the USB image acquisition. Tracer was progressively injected and the temperature set point was increased/lowered for the hot/cold thermostatic bath, with steps of 0.1 K every 2-3 min. Variations of the upper/lower temperature were fixed in order to have a constant (or almost constant) temperature T_0.5 in the mid section. Most experiments lasted 3-4 h, with a time gradient of temperature of less than 10^-3 K s^-1. A typical time series of measured temperatures is shown in the supplementary material, as well as details of rheometric measurements and experimental uncertainties.
Experimental results
The experiments and their parameters are listed in table 1. Figure 5 shows a typical sequence of frames from the early stages of instability development to the appearance of strongly nonlinear instabilities. A slow translation to the left is observed due to the basic flow velocity of 0.0087 cm s^-1. Similar results were obtained for all tests. In most cases, the temperature difference between the two frames first increased over time and was then reduced, with consequent linearisation of the convective cells up to their disappearance. Hysteresis phenomena, with a temperature difference at the early appearance of the cells (branch of rising ΔT) higher than the temperature difference corresponding to the return to stability (branch of reducing ΔT), were evident only for the more viscous fluids. The methodology adopted for the comparison reflects the numerous experimental complexities encountered during the activity. The value of the theoretical Ra_c increases with increasing fluid behaviour index, although not monotonically for the larger value of Pe considered; also, an increase in Pe brings about an increase in Ra_c, albeit not in the same proportion for different values of n. These theoretical values consistently overpredict the experimental values to various degrees, from approximately 15 % to nearly 100 %; no clear trend is detected in the discrepancy for different values of n and/or Pe. The experimental values of Ra_c show a non-monotonic behaviour with the fluid behaviour index n; for both Pe values, a minimum of Ra_c is evident at n = 0.6. Figure 6(c,d) compares the experimental and theoretical values of the critical wavenumber for varying n, again for two different values of Pe. The trend is correctly reproduced and the theoretical formulation always underpredicts the experimental result to a variable degree (the maximum discrepancy is below 10 %), with a lower average discrepancy for experiments at low Péclet number.
There are several sources of discrepancy: the strong nonlinearity of the flow field favours a rapid evolution of the cell from the onset of instability towards a fully developed cell, with a reduction of wavelength (and a consequent increase of the experimental wavenumber). A further evolution leads to doubling of the cells. A disturbing factor is possible slip at the wall, which is not included in the model and which reduces the flow resistance and favours the growth of the instability. This effect was clearly detected for viscoplastic fluids (Darbouli et al. 2013), and is widely documented for most aqueous polymer solutions - see Joshi, Lele & Mashelkar (2000) and Valdez et al. (1995).
Discussion and conclusions
The need to extend the available models for Darcy-Bénard instability to rheologically complex fluids and non-viscometric flow fields has suggested the analysis of non-Newtonian fluid flow in a two-dimensional geometry and in the presence of a uniform cross-flow. The fluid is assumed to display a power-law nature with temperature-dependent consistency index. A two-dimensional linear stability analysis in the vertical plane yields the critical wavenumber and the critical Rayleigh number upon solving numerically the eigenvalue problem. (The error bars and confidence bands in the comparison figures refer to one standard deviation; k_cN = π is the critical wavenumber for a Newtonian fluid initially at rest.) The critical Rayleigh number and wavenumber are significantly affected by the power-law index and by the thermal effects on the consistency index, displaying a non-monotonic trend with local minima and maxima.
A set of experiments performed in a Hele-Shaw cell allowed us to study flow patterns as functions of the Rayleigh number. The experiments were carried out with shear-thinning fluids of flow behaviour index n ranging from 0.55 to 0.72, coupled with cell gap widths of 0.2 or 0.3 cm, imposing cross-flow velocities of approximately 0.01 cm s −1 and vertical temperature gradients of 0.5-3.4 K cm −1 between the lower and upper frames. The critical wavenumber was obtained via analysis of the frames acquired.
The overall flow dynamics is controlled by the entangled interaction between fluid properties, geometry of the flow field and underlying uniform cross-flow. The onset of convective cells occurs with increasing wavenumber for increasing n. At the onset of convection, the shear-thinning behaviour favours a fast growth of the instabilities. Considering the complexity of the protocol and the numerous sources of uncertainty (see Longo et al. (2013b), for details on uncertainties in rheometric measurements), experimental results show a fairly good agreement, within 10 %, with theory for the critical wavenumber. The discrepancy may be attributed to nonlinear phenomena not captured by the linear stability analysis, and additionally to slip at the wall, and ageing and degradation of the fluid properties -all unaccounted-for phenomena in the theoretical model.
Results for the critical Rayleigh number show a correct trend and an overall acceptable agreement with theoretical predictions; the discrepancy varies widely with n and Pe values, and is generally larger than for the critical wavenumber. The theoretical model itself shows a larger sensitivity to the governing parameters for Ra_c than for k_c. Other rheological models, more complex than the power law, are available to describe Xanthan Gum mixtures (see, for example, Escudier et al. (2001), where a Carreau-Yasuda model is favourably tested). However, the power-law model is suitable to locally describe complex rheologies, with several validations in complex flow geometries - see Longo et al. (2013a) and the recent Chiapponi et al. (2019). In this regard, we have verified that in the shear rate range of our experiments, a power-law model adequately fits the rheometric data - see figure 1(b) and its caption in the supplementary material.
The experiments showed clearly that the development of the instability may occur at a threshold Rayleigh number lower than the critical value predicted by linear stability theory. This could be considered as symptomatic of a subcritical bifurcation, induced by the nonlinear terms in the governing equations of the fluid. Indeed, it is well established, both experimentally and theoretically, that the bifurcation is supercritical in the Darcy-Bénard convection with throughflow for a Newtonian fluid. The supercritical linear threshold of absolute instability (see Barletta 2019) was found to correspond perfectly to the one needed in experiments to trigger the instability. As suggested by the present series of experiments, as well as by the shear-thinning behaviour (see Balmforth & Rust 2009;Albaalbaki & Khayat 2011;Bouteraa et al. 2015), the nature of the bifurcation may be subcritical, and therefore a nonlinear stability analysis has to be carried out as a natural development of the present study. By using the concepts of nonlinearly convective and absolute instability, as defined in Couairon & Chomaz (1997), one can hope to obtain results that corroborate the experimental results obtained in this paper.
The present study can be extended in several directions. In particular, a change of the boundary condition (open instead of closed top) in the experimental set-up could give further hints for understanding the complex evolution of the cells; a direct numerical simulation should be at hand, due to the viscous regime of the flow field. The great flexibility of numerical experiments could shed light on the possible effects of slip at the wall and on the evolution in the quasi-linear and fully nonlinear regimes. Additional topics to be investigated are related to vertical fractures with uneven gaps (for example, see Felisa et al. (2018)), which are good proxies of geological fractures characterised by a length scale from a few centimetres to several metres and subject to thermal gradients and cross-flow. The vertical convection should be a quite efficient agent for re-mixing the fluid, favouring heat exchange and chemical reactions.
| 4,785.2 | 2020-02-21T00:00:00.000 | ["Physics"] |
Compatibility Analyses of BICUVOX.10 as a Cathode in Yttria-stabilized Zirconia Electrolytes for Usage in Solid Oxide Fuel Cells
Introduction
Costs for broad commercialization of solid oxide fuel cells (SOFC) can be considerably reduced by lowering the operation temperature of such devices [1-3]. Nowadays, the process to reduce the operating temperature is still limited by the insufficient oxygen transport across the electrolyte at low temperatures. However, Xia and Liu [4] proposed that a significant reduction in the SOFC operating temperature can be obtained through the development of composite cathodes with higher catalytic activity for oxygen reduction, thereby offering high power densities without exchanging the electrolyte. An alternative material with high ionic conductivity proposed by Xia and Liu [4] for such composite cathodes is copper-substituted bismuth vanadate (BICUVOX.10) [5-7]. The ionic conductivity of BICUVOX.10 at 300 °C is in the order of 10^-3 S cm^-1, a value almost 100 times greater than that of any other solid electrolyte at this temperature [5-7]. Considering this feature of BICUVOX.10, significant research has been conducted using this material as a cathode composite together with gadolinia-doped ceria electrolytes (GDC) [3,8,9]. It was found that the interfacial cathodic polarization between the composite cathode BICUVOX.10/Ag and a GDC electrolyte at 500 °C is of merely 0.53 Ω cm², a value much lower than those obtained when using La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) or Sm0.5Sr0.5CoO3-δ (SSC) cathodes.
Despite these interesting BICUVOX.10 characteristics, the main problem identified during the application of this material as a cathode is its high reactivity towards the electrolyte. A. J. Samson et al. [10] observed that the temperature required to achieve a coating with good adhesive characteristics at the junction of BICUVOX.10+LSC (cathode) / GDC (electrolyte) is high enough to allow interfacial reactions, leading to inappropriate ionic transport through the reaction zone at the cathode/electrolyte interface. However, regarding the formed products and their properties, these reaction zones have not been fully examined. Furthermore, it has been reported by Z. Shao et al. [11] that mixtures of BICUVOX.10 + Sm0.15Ce0.85O1.925 (SDC) gave rise to the formation of secondary phases at 700 °C with lower electrical conductivity than BICUVOX.10.
In spite of such extensive research on the employment of BICUVOX.10 as a cathode material with doped-ceria electrolytes, there are no studies demonstrating the compatibility of BICUVOX.10 as a composite cathode applied to stabilized zirconia. This paper describes reactions and phase transformations in a modeled interface between BICUVOX.10 and yttria-stabilized zirconia (YSZ), focusing on the properties of the secondary phases formed and how such products can influence the performance of BICUVOX.10 (cathode) / YSZ (electrolyte)-based devices.
In order to evaluate the interfacial reactions, a junction of BICUVOX.10 / 3Y-TZP was developed by placing a dense sample of BICUVOX.10 (95% of theoretical density, average grain size of 3 μm) upon a dense 3Y-TZP sample (99% of theoretical density). Both samples were joined by heating to 955 °C for 2 h with heating/cooling rates of 5 °C/min. The temperature of 955 °C was experimentally optimized in order to avoid insufficient adhesion of the BICUVOX.10 / 3Y-TZP junction, as previously observed in other systems [10]. The characteristics of the reaction zone at the junction of BICUVOX.10 / 3Y-TZP were obtained by developing a composition with 50 wt.% BICUVOX.10 + 50 wt.% 3Y-TZP (50B/50Z3), which was mixed in a ball mill for 24 h, compacted under uniaxial pressure of 45 MPa and exposed to the same temperature conditions mentioned above for the BICUVOX.10 / 3Y-TZP junction. Evaluation of the reactions between cubic ZrO2 and BICUVOX.10 was performed by replacing the 3Y-TZP with 50 wt.% 8YSZ (50B/50Z8). However, emphasis was given to the study of reactions in tetragonal zirconia, considering that this polymorph is the subject of studies for use as an electrolyte in SOFC at low operation temperatures [13], in which the composite cathodes made of BICUVOX.10 are more appropriate. In addition, during the development of this work, it was observed that a better understanding of the reaction aspects between BICUVOX.10/YSZ is obtained from the evaluation of the tetragonal polymorph.
SEM, XRD and impedance measurements
In order to characterize microstructural changes, the sample 50B/50Z3 was polished and thermally etched at 855 °C for 30 min. In the case of the BICUVOX.10 / 3Y-TZP junction, the microstructure of the fracture surface was examined without any previous treatment. SEM (FEI Inspect S 50) equipped with an energy dispersive X-ray spectrometer (EDX) was used to determine the chemical composition of the samples. The phases present in the samples were determined by X-ray diffraction (XRD, Siemens model D5005) using Cu-Kα radiation, over a 2θ range of 20 to 65° with 0.02° steps. The extent of destabilization via the tetragonal → monoclinic phase transition in ZrO2 was calculated from the volume fraction of the formed monoclinic phase (v_m), as estimated by Equation 1 [14].
Here X_m is the integrated intensity ratio (Equation 2) and I represents the diffraction intensity of the lattice plane indicated in the subscript.
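Equations 1 and 2 are not reproduced in this extraction. The sketch below therefore assumes the relations most commonly used for zirconia phase quantification, namely the Garvie-Nicholson intensity ratio and the Toraya non-linear correction, which are consistent with the monoclinic (-111)/(111) and tetragonal (101) reflections cited in the Results; the peak intensities in the usage line are placeholders, not data from this work.

```python
def monoclinic_volume_fraction(i_m_m111, i_m_111, i_t_101):
    """Volume fraction of monoclinic ZrO2 from integrated XRD peak intensities.

    Assumes the Garvie-Nicholson ratio and Toraya correction:
        X_m = (I_m(-111) + I_m(111)) / (I_m(-111) + I_m(111) + I_t(101))
        v_m = 1.311 * X_m / (1 + 0.311 * X_m)
    """
    x_m = (i_m_m111 + i_m_111) / (i_m_m111 + i_m_111 + i_t_101)
    return 1.311 * x_m / (1.0 + 0.311 * x_m)

# Placeholder intensities (arbitrary units), chosen only to illustrate the call.
print(f"v_m = {100 * monoclinic_volume_fraction(520.0, 430.0, 210.0):.1f} vol.%")
```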
Samples for conductivity measurements were prepared as cylindrical pellets 1 mm in thickness and 8 mm in diameter. Pt electrodes were placed on the samples as electrical contacts. Impedance spectroscopy measurements were performed in a temperature range of 425-600 °C, at frequencies from 1 to 10^6 Hz and a voltage amplitude of 0.1 V, using a Solartron SI 1260 impedance analyser.
Thermal expansion coefficient and tensile strength
Coefficients of linear thermal expansion (α) were determined using a NETZSCH DIL402C dilatometer furnace. Sintered cylindrical samples of approximately 6 mm in length and 8 mm in diameter were analysed with a heating rate of 5 °C/min up to 700 °C, and α was obtained between 150 and 700 °C.
Tensile strength assessment (σ_TS) was performed through the diametral compression test [15], following Equation 3. Samples of 8 mm in diameter (D) and 4 mm in thickness (h) were prepared for testing. The tensile strength values shown in this paper are the arithmetic mean of 4 samples. The tests were performed at room temperature in an INSTRON series 5500R universal electro-mechanical tester, with a load cell of 500 kg and a test rate of 0.5 mm/min, up to the maximum force applied before rupture of the specimen (F). Data acquisition was performed with the Blue Hill software.
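Equation 3 is not shown in this extraction; the sketch below assumes the standard diametral (Brazilian) compression relation σ_TS = 2F/(πDh), which matches the quantities F, D and h defined above. The rupture load used in the example is illustrative only.

```python
import math

def diametral_tensile_strength(force_n, diameter_m, thickness_m):
    """Tensile strength (Pa) from a diametral compression test, assuming the
    standard Brazilian-disc relation sigma_TS = 2F / (pi * D * h)."""
    return 2.0 * force_n / (math.pi * diameter_m * thickness_m)

# Illustrative rupture load of 1.2 kN on an 8 mm x 4 mm disc (placeholder values).
sigma = diametral_tensile_strength(1.2e3, 8e-3, 4e-3)
print(f"sigma_TS = {sigma / 1e6:.1f} MPa")
```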
Results
As shown in the interfacial junction, Figures 1a and 1b, a reaction zone forms with a height larger than 10 μm, consisting mainly of Zr with traces of other elements such as Bi, V and Cu. This reaction zone has higher porosity and disorder than its vicinity. X-ray diffraction patterns, Figures 1c-e, show that the sample that simulates the products formed by the reaction conserves the γ'-Bi2V0.9Cu0.1O5.35 phase. Nevertheless, when comparing the specimen which simulates the reaction zone (50B/50Z3) with 3Y-TZP, it is possible to observe a reduction in intensity of the (101) crystallographic plane of the tetragonal phase and a related increase in the (111) and (-111) peaks of the monoclinic phase. Hence, the extent of destabilization calculated from the volume fraction of monoclinic phase reached 86 vol.% for the sample 50B/50Z3. Furthermore, in the sample 50B/50Z3 the destabilization of tetragonal zirconia is accompanied by the formation of yttrium vanadate (YVO4), as identified by X-ray diffraction.
Secondary phases formed in the reaction zone are the same as those found in 50B/50Z3, since they were thermally treated under the same conditions. Observing the 50B/50Z3 sample in Figure 2, the γ'-Bi2V0.9Cu0.1O5.35 phase (white region) settles preferentially among the monoclinic zirconia and yttrium vanadate crystals. An excessive growth of prismatic monoclinic ZrO2 crystals, reaching 7 μm in length, has also been observed. In the case of the sample 50B/50Z8, it is possible to observe from the X-ray diffraction patterns in Figure 3 that cubic ZrO2 was also destabilized to the monoclinic polymorph. The product originating from the interaction between BICUVOX.10 / 8YSZ was YVO4 - the same phase obtained when reacting BICUVOX.10 and 3Y-TZP.
Figure 4 shows that the thermal expansion coefficient and tensile strength of 50B/50Z3 are lower than those found for 3Y-TZP. For instance, as to the tensile strength - which had the most marked reduction - the 50B/50Z3 sample showed one third of the measured resistance of 3Y-TZP. Likewise, the electrical conductivity assessment of 50B/50Z3, Figure 5, shows that the electrical conductivity of this material is lower than that of both BICUVOX.10 and 3Y-TZP. At 500 °C the electrical conductivity displayed by 50B/50Z3 was 6.2x10^-7 S cm^-1, whereas for BICUVOX.10 it was approximately 4.6x10^-2 S cm^-1. Moreover, the activation energy of 1.71 eV found for 50B/50Z3 is considerably larger than that of BICUVOX.10, 0.57 eV.
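The activation energies quoted above follow from an Arrhenius analysis of the conductivity data. The sketch below assumes the simple form σ = σ0 exp(-Ea/(kB·T)) and extracts Ea from a linear fit of ln σ versus 1/T; the original does not state whether this form or the σT variant was used, so the choice is an assumption. The 500 °C conductivity is the value quoted in the text, while the remaining points are synthetic placeholders placed on the corresponding Arrhenius line for Ea ≈ 1.71 eV.

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant, eV/K

def activation_energy(temps_c, sigmas_s_cm):
    """Activation energy (eV) and prefactor from ln(sigma) = ln(sigma0) - Ea/(kB*T)."""
    T = np.asarray(temps_c) + 273.15
    slope, intercept = np.polyfit(1.0 / T, np.log(np.asarray(sigmas_s_cm)), 1)
    return -slope * K_B, np.exp(intercept)

# Conductivity data (S/cm) over the 425-600 C range; only the 500 C point is from the text.
temps = [425, 450, 500, 550, 600]
sigma = [3.9e-8, 1.1e-7, 6.2e-7, 3.0e-6, 1.2e-5]
ea, sigma0 = activation_energy(temps, sigma)
print(f"Ea ~ {ea:.2f} eV, sigma0 ~ {sigma0:.2e} S/cm")
```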
Discussion
As observed by X-ray diffraction, for both 3Y-TZP (Figure 1d) and 8YSZ (Figure 3b), the interaction between yttria-stabilized zirconia and BICUVOX.10 results in the destabilization of the high-temperature ZrO2 polymorphs and in YVO4 formation. The production of yttrium vanadate and the destabilization of the tetragonal/cubic ZrO2 polymorphs can be understood considering Reaction 4. The YVO4 formation occurs due to the reaction between the YO1.5 stabilizer cation from ZrO2 and VO2.5 from γ'-Bi2V0.9Cu0.1O5.35. Some studies [16,17] have shown yttrium vanadate to be a thermodynamically favorable product when vanadia and yttria react (Reaction 5). Hence, YVO4 formation is intimately related to ZrO2 destabilization, since depletion of the stabilizer cation from the ZrO2 structure results in the transformation of the tetragonal or cubic phase into the most stable ZrO2 polymorph at room temperature, i.e., the monoclinic polymorph. Corrosion studies of YSZ for thermal barrier coatings have shown that zirconia destabilization through the reaction with vanadium does not occur at temperatures below 800 °C [18]. Notwithstanding, the massive diffusion displayed by the V5+ ions at 800 °C is sufficient for the nucleation of YVO4 crystals [19]. A. J. Samson et al. [10] showed that temperatures higher than 800 °C are crucial to promote adhesion using BICUVOX.10 as a cathode for ceria-doped electrolytes. This fact indicates that Reaction 4 is of paramount importance for this system, since BICUVOX.10 / YSZ-based devices will certainly require high temperatures - or at least, temperatures similar to the ceria-doped case - to promote good adhesive characteristics.
Zr(1-x)Y(x)O(2-0.5x) (tetragonal or cubic) + γ'-Bi2V0.9Cu0.1O5.35 → ZrO2 (monoclinic) + γ'-Bi2V(0.9-x)Cu0.1O5.35 + x YVO4   (Reaction 4)
In order to further understand Reaction 4, a theoretical mechanism was proposed to explain the ZrO2 destabilization using a charge-compensating dopants description [19]. Primarily, the destabilization mechanism derives from two aspects of the ZrO2 structure; one of them establishes that, when ZrO2 co-doping with M5+ and M3+ cations occurs, the negative charge of the M3+ → Zr4+ substitution (M′_Zr) will be directly compensated by the positive charge generated by M5+ → Zr4+ (M•_Zr) [19-21]. Secondly, the ability of the tetragonal or cubic ZrO2 polymorphs to be stabilized at room temperature arises from the lower coordination of Zr4+ ions when oxygen vacancies are generated by M3+ doping [22]. Such a ZrO2 destabilization model shows the link between the tetragonal or cubic to monoclinic transformation and the nucleation of the YVO4 secondary phase. As shown in Figure 6, considering the oxygen vacancies available in ZrO2 doped with Y2O3 (I), when V5+ provided by γ'-Bi2V0.9Cu0.1O5.35 diffuses into the stabilized ZrO2 lattice, thus replacing the zirconium ions, a consumption of the oxygen vacancies initially generated by the yttrium doping (II) takes place. Crystal electroneutrality demands that the charge sum of all point defects must be null. A limitation in the quantity of V5+ in the Zr4+ positions occurs since the V5+ → Zr4+ replacement occurs only while oxygen vacancies exist. Consequently, the oxygen vacancies responsible for the initial stabilization of the tetragonal or cubic polymorphs are extinguished and the precipitation of YVO4 from the stabilized ZrO2 takes place.
Although the YVO4 formation derives only from the diffusion of V5+ ions into the zirconia lattice, the effect of liquid BICUVOX.10 should play a significant role in this mechanism. As bismuth vanadate solid solutions undergo incongruent melting at 880 °C [23], some solution-precipitation reaction could occur at temperatures higher than that, therefore accelerating interfacial reactions between BICUVOX.10 and YSZ. Moreover, temperatures above the BICUVOX.10 melting point also result in excessive grain growth. As shown in Figure 1a, BICUVOX.10 grains of originally 3 μm reach almost 100 μm after reaction tests at 955 °C. This same effect was previously reported by A. J. Samson et al. [10]. As seen in the properties of the 50B/50Z3 sample, the YVO4 formation and ZrO2 destabilization strongly influence the junction behaviour. The low thermal expansion coefficient and tensile strength of the 50B/50Z3 sample in relation to 3Y-TZP originate from the highly disordered and defect-rich regions formed when the reactions shown in Reaction 4 take place. Considering the destabilization mechanism proposed in this paper, the volumetric expansion caused by the transition from tetragonal to monoclinic ZrO2 and by YVO4 crystal growth is responsible for the highly disordered region observed in the reaction zone. These interfacial defects act as points of high stress concentration, leading to the deterioration of the mechanical strength seen in the 50B/50Z3 sample.
Although it has been shown that the reaction will undoubtedly bring about modifications in the properties of the BICUVOX.10 / 3Y-TZP junction, the most deleterious effect observed was the low electrical conductivity of the reaction products. For the 50B/50Z3 sample, which represents the reaction zone, the electrical conductivity was about five orders of magnitude smaller than that of BICUVOX.10. This effect demonstrates that, when YVO4 formation and ZrO2 destabilization occur, the reacted layer in the contact region between BICUVOX.10 and YSZ will hinder any catalytic effect initially expected from using BICUVOX.10 as a composite cathode. Indeed, this tendency to react might be reduced if low-temperature adhesion techniques were employed. Nevertheless, finding a temperature which allows good adhesion and avoids reactions in BICUVOX.10-based cathodes is still a topic of current research [10].
Conclusions
Reactions between BICUVOX.10 and YSZ were investigated and reported in this paper. Developing samples consisting of cathode and electrolyte materials in equal parts has proven to be a useful tool to characterize the reactions at the cathode/electrolyte interface. When BICUVOX.10 reacts with YSZ, the high-temperature ZrO2 polymorphs - tetragonal and cubic - are destabilized to the monoclinic form. In addition, a YVO4 phase nucleates, a phenomenon attributed to the diffusion of the V5+ cation into the stabilized-ZrO2 lattice. Furthermore, these products have shown to be detrimental to the junction, since they lead to the deterioration of the electrolyte mechanical strength and to low electrical conductivity. In short, the application of BICUVOX.10 as a highly catalytic component in composite cathodes used with yttria-stabilized zirconia electrolytes is restricted, since both YVO4 formation and ZrO2 destabilization lead to poor properties of the junction.
Figure 2. Backscattered electron micrograph of the sample 50B/50Z3. Phases and products derived from reactions between BICUVOX.10 and 3Y-TZP are indicated by arrows.
The two defect concentrations converge to the same value; when equimolarity for the two cations is reached (III), all the oxygen vacancies generated by the yttrium doping have been consumed.
Figure 6. Proposed mechanisms for high-temperature ZrO2 polymorphs destabilization and YVO4 nucleation. This charge-compensating dopants description mechanism is explained in three main steps. The Kröger-Vink notation for the V•_Zr defect may be confusing, because the V symbol, which here represents the vanadium ion, has always been applied to describe a vacancy defect. However, in this schematic representation, V•_Zr represents one V5+ ion replacing the Zr4+ cation in the ZrO2 lattice.
| 3,966.2 | 2014-09-30T00:00:00.000 | ["Materials Science"] |
Ruthenium(III)-Based Diimine Complexes: Synthesis, Characterization, PXRD Study and Catalytic Hydrogenation of Cyclohexene
This study deals with the preparation and characterization of a group of Ru(III) chelates containing tetradentate diimine ligands. These quadridentate ligands are derived from 2-OH-1-naphthaldehyde and a series of aliphatic diamines in which the number of methylene groups between the two azomethine nitrogen donors varies from two to six. The pure isolated compounds were subjected to several physicochemical investigations to assign their structures. Spectral and magnetic measurements suggested a distorted octahedral arrangement for the six-coordinate diimine ruthenium(III) complexes. The structural optimization of one of the current Ru(III) complexes was determined from the processing of powder X-ray diffraction (PXRD) data with the computer program Expo 2014. DFT calculations were also applied to optimize the geometry in the case of complex 1. The newly synthesized ruthenium(III) diimines were tested as catalysts for the hydrogenation of cyclohexene. The effects of the catalyst structure and the type of catalysis, as well as of the nature and amount of the solvent used, on the catalytic performance of the current catalysts were studied. The catalytic experiments showed that the present ruthenium(III) complexes are promising precatalysts that successfully catalyzed the hydrogenation of cyclohexene by hydrogen gas under moderate process conditions. The results obtained allowed a mechanism to be established for the studied catalytic hydrogenation reactions.
Introduction
Catalytic hydrogenation of olefins by molecular hydrogen plays a central role in many industries, such as pharmaceuticals, petrochemicals, food, specialty chemicals, commodity chemicals, and agrochemicals [1a-e]. The literature includes numerous studies related to the catalytic hydrogenation of simple olefins by means of transition metal complexes [2,4]. Metal complexes show superior catalytic performance in hydrogenation processes compared with the metal itself, in terms of higher activity and operation under milder conditions [2]. In this respect, a wide range of metal complexes containing d8 metals are widely used to catalyze the hydrogenation of unsaturated hydrocarbons by H2 [3]. In the same context, 1961 saw the first use of ruthenium complexes in homogeneous catalysis processes to activate H2 [3f].
Catalytic processes in homogeneous systems are simpler from the chemical and kinetic point of view than heterogeneous catalytic systems. The ruthenium complex [RuCl2(PPh3)3] is most commonly used for H2 activation through the heterolytic splitting mechanism. The heterolytic splitting of molecular hydrogen to produce a metal monohydride and a proton is well known for many catalytic hydrogenation processes of unsaturated hydrocarbons, especially in polar solvents [5]. Presently, many ruthenium complexes with distinct catalytic activity are employed in hydrogenation processes for the unsaturated bonds of olefins [6]. In particular, monohydride ruthenium complexes have been reported as catalysts with high activity and selectivity towards terminal alkenes [7].
The metal complexes of Schiff bases have played major roles in the progress of metal complex chemistry. Among these metal complexes, one worth mentioning in this regard, which has been used as a catalyst in the hydrogenation of simple alkenes by H2, is the palladium(II) complex containing the N2O2 donor sites of the tetradentate Schiff base ligand (salen) [8]. To the best of our knowledge, the use of ruthenium(III) Schiff base complexes as catalysts for the hydrogenation of simple alkenes by H2 is not yet known. Only one investigation could be found, reporting the reductive carbonylation of nitrobenzene to phenyl urethane catalyzed by ruthenium(III) Schiff base complexes of the general formula [Ru(III)LCl] or [Ru(III)LCl2], where L is a Schiff base with N2O2, N4 or NS donor groups [9].
Among the complexes tested, (Ru-SolphCl2) showed the highest catalytic activity, but kinetic studies and mechanism elucidation were not reported.
The aim of the current study is to synthesize and characterize a new series of ruthenium(III) complexes with tetradentate Schiff bases containing N2O2 donors for the catalytic hydrogenation of the simple olefin cyclohexene by H2.
Chemicals and Materials:
All the materials used in the present study are of a high degree of purity as they were purchased from reliable sources. The current diimine ligands were prepared according to the method reported elsewhere [10].
Preparation of diimine ruthenium(III) complexes 1 -4
An ethanol solution (50 mL) containing 0.2 mol of the diimine ligand was heated for 30 minutes, followed by the addition of an equivalent amount of the ruthenium salt (RuCl3·3H2O) dissolved in 50 mL of absolute ethanol. After that, the precipitate formed was filtered, washed with alcohol and ether, and then placed in a desiccator over P2O5 for a week. In most cases, addition of ether was needed to complete the precipitation of the metal chelate.
High purity of the metal chelates was achieved through further Soxhlet washing with ethanol as the solvent. The composition of the pure isolated ruthenium(III) complexes was primarily established by elemental analysis, as recorded in Table 1.
Physical measurements, as well as the catalytic hydrogenation procedure (hydrogenation apparatus, experiments, and analytical process) and the computational details, are given in the Supplementary Information S1.
Characterization of Ru III -based diimines
The present study aims to prepare and characterize a group of Ru III-based complexes containing diimine ligands derived from 2-OH-1-naphthaldehyde and a series of aliphatic diamines whose carbon chains contain from two to six carbon atoms. The reaction of the hydrated ruthenium trichloride salt (RuCl3·3H2O) with these Schiff bases in ethanolic solution gave a family of ruthenium(III) chelates, which were subjected to several physicochemical investigations to assign their structures. In this respect, the elemental analysis results in Table 1 demonstrate that the metal-to-ligand molar ratio in the newly prepared ruthenium(III) diimine complexes is 1:1. The molar conductance measurements in DMF at room temperature, shown in Table 1, indicate their non-electrolytic behavior [12]. The full structural characterization of these ruthenium(III) Schiff base complexes was completed via comprehensive spectroscopic studies. The analytical data and the spectral results presented below demonstrate that the present Schiff bases behave as quadridentate dibasic ligands, providing an N2O2 chromophore to coordinate the ruthenium(III) ion. Determination of the thermal stability of metal complexes is also important, particularly when these complexes participate in catalytic applications under variable thermal conditions. Therefore, thermal analysis (TGA and DTA) was performed for the ruthenium(III) complexes under study. All thermal measurements were carried out under an inert atmosphere of N2 over the temperature range of 50 to 1000 °C, until constant weight was reached, as shown in the supplementary materials S2-S5. The corresponding thermal data, such as the mass losses, the temperature ranges, the assignments of the chemical compositions of both the thermally lost portions and the residues, as well as the differential thermal features of the successive pyrolysis stages, are given in Table 2.
As is evident from the thermograms of the existing ruthenium(III) complexes, the metal content estimated from the residual metal oxide corresponds to that obtained by the analytical methods, confirming the validity of the proposed molecular formulae for these ruthenium(III) diimine complexes.
Differential thermal analysis (DTA) curves show that each weight-loss stage is accompanied by an exothermic peak at a definite temperature, as given in Table 2.
In this respect, except in the case of complex 4, no endothermic peaks were observed, indicating that the current ruthenium(III) complexes do not melt or undergo any lattice changes prior to decomposition. This finding is in agreement with the fact that the existing ruthenium(III) chelates decompose without melting.
Infrared spectra
To illustrate the bonding pattern of the present Schiff bases with the ruthenium(III) ion, the infrared spectra of both the Schiff base ligands and their corresponding metal chelates were measured as KBr disks. Based on the data obtained from the charts in S6-S13, the spectral assignments of the diimines and of the ruthenium(III) chelates are recorded in Table 3. The observed band shifts can be correlated with changes in the ligand system upon coordination and thus give information about the bonding and arrangement in the metal complexes. The spectral features shown in S6-S13 and the frequency data in Table 3 demonstrate that the coordination pattern of the Schiff bases with the ruthenium(III) center is essentially identical for all metal chelates. The spectra of the complexes reveal the disappearance of the characteristic OH bands of the ligands, due to the destruction of the intermolecular hydrogen bond as a result of the coordination of the phenolic oxygen to the ruthenium(III) ion [13].
The disappearance of the distinctive OH bands from the spectra of the ruthenium(III) chelates is consistent with what was observed for the palladium(II) and nickel(II) complexes of the same diimine ligands [2f, 4]. The remarkable shift of the υ(C-O) band at 1415-1450 cm-1 to higher wavenumbers (1450-1500 cm-1) in the spectra of the complexes indicates the bonding of the phenolic oxygen to the ruthenium(III) center [14]. In the same regard, the characteristic band of the Schiff base linkage is shifted to higher wavenumbers (1620-1680 cm-1), as shown in Table 3, due to bonding of the imine nitrogen to the Ru III center [14]. The bonding of the ruthenium(III) ion to the current ligands through nitrogen and oxygen is supported by the emergence of new peaks at 490-570 and 380-450 cm-1, attributable to υ(M-N) and υ(M-O), respectively [14].
The results of the elemental and thermogravimetric analyses showed that the present Schiff base ligands behave as dibasic acids, binding to the ruthenium(III) ion as dianionic tetradentate ligands [13,15]. Since the ruthenium(III) ion attains a coordination number of six, the remaining coordination sites are occupied by a water molecule and a chloride ion. The broad band appearing in the 3400-3450 cm-1 range (Table 3) indicates the presence of H2O in the metal chelate molecule. The present TG measurements indicated that the ruthenium(III) complexes maintain constant weight up to 150 °C, at which point they begin to decompose. This excludes a surface nature for the water content and supports the coordination of the water molecule to the ruthenium(III) ion.
Further support for the coordination of the water content of the current ruthenium(III) complexes comes from the bending vibration patterns observed at 900-930, 790-800 and 660-675 cm-1, which are characteristic of wagging, twisting and rocking absorptions [15]. Participation of the chloro ligand in the coordination chromophore around the ruthenium(III) ion is inferred from the new band that appeared in the spectra of the metal complexes in the 300-380 cm-1 range, assignable to υ(M-Cl) [14].
The composition of the current ruthenium(III) imine complexes appears to depend mainly on the number of carbon atoms in the alkyl bridge between the two azomethine groups of the diimine molecule. For L1, the ruthenium(III) complex is monomeric in nature, as evidenced by its magnetic moment value, because the resulting chelate ring is five-membered, as illustrated in Scheme I. For the ruthenium(III) complexes of L2, L3 and L4, which contain more than three methylene groups in the bridge between the two Schiff base linkages, a dimeric or polymeric structure is formed (Scheme I) [13]. This is to be expected because the resulting chelate ring would be somewhat unstable, being larger than six-membered, and therefore the N2O2 donor set cannot be provided to a single metal center by one diimine molecule.
Based on the analytical data, the TG and molar conductance measurements, and the spectral investigations, the existing ruthenium(III) chelates can be formulated as shown in Scheme I; for n = 4, 5 and 6 the resulting ruthenium(III) complexes 2, 3 and 4 are dimeric or polymeric.
Scheme I: Structures of the ruthenium(III)-based diimine complexes.
Table 3: FTIR spectra (cm-1) of the diimines and the Ru III-based chelates 1-4; ** the characteristic stretching of the coordinated water, υ(H2O).
Electronic absorption spectra
In the absence of suitable crystals for single-crystal structural analysis of the metal complexes, spectroscopic techniques and magnetic investigations offer an alternative way to determine the geometry of a metal complex. Accordingly, the UV-Vis spectra of the current ruthenium(III) chelates were recorded as KBr discs. The spectra recorded for all complexes, presented in S14, are approximately the same, indicating that their stereochemistry is identical; the relevant energy values are listed in Table 4. Table 4 shows that the strong bands at high frequency are assignable to charge transfer transitions of the π → t2g (π*) type, where π is the HOMO of the donor atoms and π* (t2g) is the LUMO, namely the incompletely filled orbital of the metal ion [16].
In the absence of any crystal field, metal complexes with a d5 configuration have a 6S ground state term. For hexacoordinate d5 metal complexes, the ground state becomes 6A1g in a weak ligand field and 2T2g in a strong ligand field, respectively.
The ruthenium(III) ion belongs to the d5 system, for which 2T2g is the ground state of hexacoordinate complexes in the low-spin state of octahedral symmetry. The corresponding electronic configuration is t2g^5, and the first excited doublet levels, in order of increasing energy, are 2A2g and 2T2g, respectively [17]. In the d5 system, and especially for the ruthenium(III) ion in an octahedral geometry, the expected ligand field transitions are 2T2g → 4T1g, 2T2g → 4T2g and 2T2g → 2A2g, corresponding to υ1, υ2 and υ3, respectively [28]. The spectra of the current ruthenium(III) chelates (S14) exhibit three bands related to the expected d-d transitions of the six-coordinate d5 system in the octahedral stereochemistry of low-spin ruthenium(III) complexes [17]. These bands appear in the low-frequency region at 13333-13513, 14492-14705 and 18691-19047 cm-1 and are assigned to υ1, υ2 and υ3, respectively [18]. An analysis of these spectral data allows us to compute the ligand field stabilization energy 10Dq, the interelectronic Racah repulsion parameter B and the nephelauxetic ratio β. 10Dq was determined from the energy difference between the ground state 2T2g and the excited state 2A2g based on the relation 2T2g → 2A2g = 10Dq - 3F2 - 20F4, with F2 = 10F4 = 1000 cm-1 [18]; the parameters B and C were computed from the relations given in [19]. The data in Table 4 show that the values of B (144.87-160.37 cm-1) are much smaller than the corresponding free-ion value Bo (630 cm-1). The overlap of the ligand and metal orbitals, and the penetration of the ligand lone pairs into the metal d orbitals, shields or weakens the effect of the positive charge on the metal core; this is reflected in the observed decrease in the interelectronic repulsion, which in turn allows the electron cloud in the d orbitals to expand. It is understood that as the valence state of a metal ion increases, its volume decreases and the value of Bo increases. The marked decrease in B compared with Bo therefore indicates the predominance of covalent bonding in the ruthenium(III) complex molecule and leads to an increase in the ligand field stabilization energy (10Dq). This increase in 10Dq is generally related to significant electron delocalization [20].
Magnetic susceptibility measurements at 22 °C of the current ruthenium(III) diimines (Table 5) demonstrate the low-spin state of the d5 ruthenium(III) ion. The data in Table 5 indicate that the μeff values of the Ru III ion correspond to the t2g^5 configuration in octahedral stereochemistry. For complex 1, the μeff value is greater than 1.73 BM, suggesting appreciable spin-orbit coupling arising from incomplete quenching of the orbital contribution to the magnetic moment [21]. On the other hand, for complexes 2, 3 and 4 the μeff values are lower than the spin-only value. This may indicate spin-spin interactions between neighboring low-spin Ru III centers in the dimeric or polymeric structures. This decrease in the magnetic moments could also arise from extensive electron delocalization or a lowering of the ligand field symmetry [22].
In the same regard, for complexes of 4d and 5d metals the room-temperature magnetic moments are often found below the spin-only values, a behavior that can be ascribed to the large spin-orbit coupling constants. In this respect, the paramagnetism expected from the unpaired electrons alone is reduced because spin-orbit coupling aligns the vectors L and S in opposite directions [23].
However, the magnetic moment values (Table 5) and the electronic absorption spectral properties of the current ruthenium(III) Schiff base complexes are characteristic of an octahedral structure and are comparable with those of other ruthenium(III) complexes [24].
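As a worked illustration of the ligand-field quantities discussed above, the following minimal Python sketch reproduces 10Dq and the nephelauxetic ratio β from the relations quoted in the text (10Dq = υ3 + 3F2 + 20F4 with F2 = 10F4 = 1000 cm-1, and β = B/Bo with Bo = 630 cm-1). The Racah B values are taken from Table 4 rather than recomputed, since the relations of [19] are not reproduced here, and the pairing of the example values is illustrative only.

```python
# Sketch: ligand-field quantities for the low-spin d5 Ru(III) chelates,
# using only the relations quoted in the text.
F2, F4 = 1000.0, 100.0   # cm^-1, F2 = 10*F4 = 1000 cm^-1 as stated above
B0 = 630.0               # cm^-1, free Ru(III) ion value quoted above

def ten_dq(nu3: float) -> float:
    """10Dq from the relation nu3 = 10Dq - 3*F2 - 20*F4."""
    return nu3 + 3 * F2 + 20 * F4

def beta(b: float) -> float:
    """Nephelauxetic ratio beta = B / B0."""
    return b / B0

# Band limits and B limits reported in the text (pairing is illustrative only):
for nu3, b in [(18691.0, 144.87), (19047.0, 160.37)]:
    print(f"nu3 = {nu3:.0f} cm^-1 -> 10Dq = {ten_dq(nu3):.0f} cm^-1, beta = {beta(b):.3f}")
```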
Structure solution by PXRD and DFT study
Owing to the practical difficulty of obtaining single crystals of the present Ru III-based complexes, a single-crystal structural analysis could not be performed. However, structure solution for metal complexes based on the processing of powder X-ray diffraction (PXRD) data with a structure solution program such as Expo 2014 is now common and accepted as an alternative to the single-crystal technique [4]. In this context, Rietveld refinement was applied to achieve close agreement between the experimental results and the computer processing of the X-ray pattern of ruthenium(III)-based complex 1, as shown in Figure 2; the corresponding PXRD pattern is shown in Figure 1.
The crystallographic data indicate a triclinic system for the microcrystalline powder of complex 1, with space group P-1. The crystal lattice dimensions a, b and c are 12.044, 9.946 and 7.840 Å, respectively (Table 6), and the corresponding angles α, β and γ have values of 108.015°, 101.354° and 87.001°, respectively. In the same regard, the cell volume (Å3), the volume per atom (Å3) and the calculated density (g/cm3) are 875.596, 14.123 and 1.976, respectively. Figure 3 shows the packing diagram of the unit cell, which incorporates two molecules of the metal complex. The optimized octahedral geometry follows the adopted atom numbering scheme, and the bond distances between Ru(47) and the two axial sites O(49) and Cl(48) are listed in Table 7. The data in Table 7 indicate that the bond angles around the Ru(47) center are close to 90°, characteristic of the d2sp3 hybridization of octahedral stereochemistry. For hexacoordinate metal complexes with an octahedral polyhedron, the question arises of how ideal this geometry is: is it an octahedron or a trigonal prism? Determination of the geometrical index τ6 answers this question through the relation τ6 = θ/60, where θ is the twist angle between the opposing trigonal faces of the octahedron. If τ6 equals one, the geometry is an ideal octahedron, while if τ6 equals zero the structure is a perfect trigonal prism [25]. The calculated value of τ6 is 0.9, which approaches one and indicates that the geometry of the current ruthenium(III) complex is close to a perfect octahedron. DFT calculations were therefore performed to provide complementary information about the geometry, and their results were evaluated by comparison with the experimental crystalline form.
The wB97XD/def2-SVPP model chemistry [26] was used for geometry optimizations in the gas phase, as implemented in the Gaussian 16 suite [27]. The optimized geometrical structure of the studied ruthenium complex, [RuL1Cl(H2O)], is reported in Figures 4 and 5. Interestingly, previous X-ray crystallography studies reported that similar Schiff base metal complexes possess trans and cis isomers, the trans isomer being more common [28]. Accordingly, we calculated both isomers of [RuL1Cl(H2O)] and, in agreement with the above-mentioned references, found that the trans isomer is more stable than the cis one by 8 kcal/mol. In contrast, both ruthenium(III) centers of the dimer prefer to adopt the cis mode over the trans one by 6 kcal/mol. The calculated bond lengths and bond angles are within the ranges found for other related ruthenium(III) complexes whose exact structures have been determined by single-crystal X-ray studies [27].
It should be noted here that the results obtained based on DFT calculations agree well with the results of X-ray powder diffraction data processing by Expo 2014, confirming the accuracy of the final structure of the ruthenium(III) complex under study.
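As a small worked check of the geometry index used above (assuming only the relation quoted in the text): since τ6 = θ/60, the calculated value of τ6 = 0.9 corresponds to a twist angle θ = 0.9 × 60° = 54°, close to the 60° of an ideal octahedron (τ6 = 1) and far from the 0° of a perfect trigonal prism (τ6 = 0).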
Catalytic hydrogenation of cyclohexene
To confirm the catalytic potential of the ruthenium(III) complexes under study, blank experiments were carried out in the absence of the catalyst. Cyclohexane was not formed in these blank experiments, which confirms the catalytic potency of the ruthenium(III) complexes tested. The catalytic performance of the studied ruthenium(III) complexes was evaluated using the relationship: yield percentage = [product / (reactants + product)] × 100. The influence of the catalyst structure, the type of catalysis, and the nature and quantity of the co-solvents and solvents were taken into consideration in evaluating the catalytic activity.
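A minimal sketch of the yield evaluation defined above is given below; the function name and the example numbers are illustrative only, not actual experimental data.

```python
def hydrogenation_yield(product: float, reactant: float) -> float:
    """Yield % = product / (reactant + product) * 100, as defined in the text.

    `product` and `reactant` are the measured amounts (e.g. GC peak areas) of
    cyclohexane formed and cyclohexene remaining, respectively.
    """
    return 100.0 * product / (reactant + product)

print(hydrogenation_yield(product=8.0, reactant=2.0))  # 80.0 (illustrative numbers)
```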
Early studies reported that the catalytic hydrogenation of alkenes by molecular hydrogen in the presence of metal complexes is strongly influenced by the type and quantity of the solvent and co-solvent [2e,f,g,h,4]. In this context, DMF has proved to be the most widely used solvent in these catalytic reactions and has been considered an influential factor in determining the course of many catalytic hydrogenation reactions [29]. In light of this, DMF was used as a major component of the catalytic reaction medium in this work. The data in S15 present the solubility of H2 in the different types and quantities of solvents and co-solvents employed in the present study.
Cyclohexene is a good choice for probing the catalytic hydrogenation of simple alkenes. This is because the hydrogenation gives a single product, unaccompanied by by-products, which facilitates the study of the reaction mechanism. On the other hand, the use of the corresponding acyclic alkene (1-hexene) is not appropriate, since the main hydrogenation product is accompanied by the formation of many by-products. This is due to migration of the double bond along the carbon chain of this compound and the formation of a number of structural isomers, which makes it difficult to propose an accurate mechanism for this reaction.
The results in Table 9 indicate that the present ruthenium(III) diimine complexes catalyze the hydrogenation of cyclohexene by H2 with an efficiency ranging from 6 to 80%, depending on the structure of the catalyst and the type of solvent and co-solvent used. The current ruthenium(III) complexes are only slightly soluble in ethanol but completely soluble in DMF. In pure EtOH the activity of the complexes for the catalytic hydrogenation of cyclohexene is reduced (cf. [RuCl2(PPh3)3] [30]).
Apart from DMF, and under the prevailing practical conditions, it is difficult to establish a specific type of catalysis. This is due to the partial insolubility of the ruthenium complexes that was always observed in the other solvents used in this study, although the differences in catalytic activity are slight. However, the results in Table 9 demonstrate that the current ruthenium(III)-based diimines can catalyze cyclohexene hydrogenation both homogeneously and heterogeneously. An overview of Table 9 shows that, for a given ligand, both the type and the quantity of the solvent control the catalytic hydrogenation of cyclohexene by H2.
Suggested mechanism
Hydrogenation of cyclohexene by H2 could proceed via electron flow from the HOMO of the unsaturated double bond of cyclohexene (π orbital) to the LUMO of the hydrogen molecule (σ*), or via electron flow from the HOMO of H2 (σ) to the LUMO of the unsaturated double bond of cyclohexene (π* orbital). For either path to occur there must be effective overlap between the interacting orbitals, and this cannot be achieved on symmetry grounds. Accordingly, these symmetry limitations can be overcome by the use of an appropriate catalyst, and this is the role of the present ruthenium(III) complexes. In this context, catalysis of this symmetry-forbidden reaction could be accomplished by activation of either H2 or cyclohexene, as shown by pathways (a) and (b). To distinguish between these two possible pathways, the following practical procedure was implemented: in the absence of the ruthenium(III) complex, the reaction vessel was saturated with hydrogen gas, but the reaction between hydrogen and cyclohexene did not occur. Repeating the same experiment in the presence of the ruthenium(III) complexes in question, and in the absence of the olefin, resulted in a remarkable decrease in the hydrogen gas pressure by an amount proportional to the amount of catalyst in the reaction vessel. In light of these observations, we can assume that the first path (a), H2 activation, is the dominant pathway, and the catalytic course of this reaction can therefore be visualized and discussed as follows.
The examined complexes, [RuL n Cl(H2O)] (n = 1-4), are hexacoordinate and thus have no free coordination sites for binding the reactants and performing the catalytic process. In this case, one of the good axial leaving ligands (Cl or H2O) must dissociate to provide a vacant coordination site for binding either H2 or cyclohexene and initiating the hydrogenation. Accordingly, [RuL n Cl(H2O)] is a precatalyst, and the actual active catalyst is generated in the reaction medium (in situ). According to polarization considerations, H2O is less polarized than the chloro ligand and therefore less strongly bound to the ruthenium(III) center. Thus the initiating step of this catalytic reaction is the fast substitution of H2O by H2 to produce the intermediate [H2RuL n Cl]. It is generally assumed that the insertion occurs by a concerted pathway, via a more or less polar cyclic transition state involving simultaneous bond breaking and bond formation [32]. Electrophilic attack by a proton on the carbon atom bonded to the metal yields the saturated product and regenerates the catalyst in its free active form. The catalytic cycle proposed for the homogeneous hydrogenation of cyclohexene with the chosen complex, [RuL n Cl(H2O)], may therefore be represented as shown in Scheme II.
Conclusion
In this contribution, four diimine ligands were synthesized via Schiff condensation of 2-OH-1-naphthaldehyde with aliphatic diamines whose carbon chains range from two to six atoms. The pure isolated ligands were reacted with RuCl3·3H2O to afford a series of metal chelates. Several physicochemical and spectroscopic techniques were employed to establish the structural formulae of the prepared compounds, and an octahedral structure was assigned to the newly synthesized ruthenium(III) diimine-based complexes.
Confirmation of the assigned structural formula of complex 1 was achieved by PXRD and DFT calculations. The catalytic hydrogenation of cyclohexene by H2 in the presence of the Ru III-based complexes in question was studied. The catalytic investigations covered the effects of catalyst structure, type of catalysis, and the nature and amount of solvent and co-solvent on the yield of the catalytic hydrogenation processes. The results in Table 9 show that there is a relationship between both the type and quantity of the solvent used and the yield of cyclohexane. Moreover, the results obtained established that the catalytic hydrogenation reaction takes place through an H2 activation pathway, and the catalytic investigations allowed us to propose a catalytic cycle.
Dependent, Poorer, and More Care-Demanding? An Analysis of the Relationship between Being Dependent, Household Income, and Formal and Informal Care Use in Spain
Population ageing is one of the major current challenges that most societies are facing, with great implications for health systems and social services, including long-term care. The use of long-term care is rising particularly among dependent older people, motivating the implementation of regional dependency laws to ensure that their care needs are covered. Using data from the Survey of Health, Ageing, and Retirement in Europe (SHARE) from 2004 to 2017, the aim of this study is to assess the impact that the Spanish System for Personal Autonomy and Dependency might have on (i) household income, according to different levels of need for care, by running Generalized Linear Models (GLMs); and (ii) formal and informal care use depending on the income-related determinant, through logit random-effects regression models. We show that the different degrees of need for personal care are associated with a lower household income, corresponding to an income reduction of €3300 to nearly €3800 per year, depending on the covariates included, for the most severely in-need-for-care older adults. Moreover, our findings point towards a higher use of formal and informal care services by the moderately and severely dependent groups, regardless of the household income group and time period. Bearing in mind demographic ageing, our results highlight the need to identify potentially vulnerable populations and to plan long-term care systems and social support services efficiently.
Introduction
Many countries are facing growth in the number and proportion of older people in their populations, which is likely to have implications for the health and social protection systems [1,2]. Moreover, such demographic changes might also imply an increase in the share of the population presenting lower functional capacity, requiring assistance with activities of daily living and generating a greater demand for Long-Term Care (LTC) in the coming years [3]. LTC expenditure represented 1.4% of the Gross Domestic Product (GDP) of the Organization for Economic Co-Operation and Development (OECD) countries in 2014 [4][5][6], with large differences between countries, ranging from 4% in the Netherlands to less than 0.5% of GDP in other countries such as Israel, Latvia, and Poland. However, this figure was estimated to more than double by 2050 [7]. The expected growth in LTC expenditure as a share of GDP and of public and private spending can be explained by population ageing [8,9], the greater probability of survival to older ages [10], and the decline in the supply of informal caregiving due to some major social changes (i.e., new family structures, smaller household size, higher female labor market participation) [9,11].
Hence, the aim of this study is to assess the impact that the Spanish System for Personal Autonomy and Dependency might have on household income and formal and informal care use.
Background Knowledge
The interrelationship between the different components of long-term care (mainly formal and informal care) is widely studied in the literature. Traditionally, informal care was regarded as a substitute for nursing homes [12,13]. Indeed, a study using data from the Survey of Health, Ageing, and Retirement in Europe (SHARE) showed that informal and formal care are substitutes as long as the elderly person's disability is low [14]. Substitutability or complementarity between informal care and formal care outside the household has been widely discussed, highlighting that differences can be found depending on the disease, the services provided, and the degree of disability of the care recipient [13,[15][16][17][18]. Furthermore, variation in the use of informal care services is quite large within European countries, not only due to population distribution and population ageing, but also due to the design of welfare programs in Europe and the availability of support to these caregivers. For example, in Mediterranean countries such as Spain, where the informal care tradition is strong, the benefits and support that informal caregivers receive for their services are quite low. On the other hand, in Northern European countries, informal care is not so extended, but social benefits and support are higher. Finally, in Central Europe, caregivers are provided with widely spread social support programs, with benefits that vary within and across regions, but informal care is not so relevant [19,20]. Furthermore, differences in LTC demand can be explained by household characteristics, such as household composition and income, while education and geographical location play a more modest role [21], as supported by the literature. Other determinants of LTC might be limitation thresholds and how much of the coverage of care needs is a public or a private responsibility [10], leading to economic and social inequalities [10,22]. In another study, the authors found different relationships between the type of long-term care service received and household income. More precisely, they found that the sole use of informal care decreased with higher household income, whereas receiving both types of care, formal and informal, was associated with higher household income [18]. On the other hand, the sole use of formal care increased among the poorest households.
Moreover, international differences in LTC have been studied from different points of view. Bakx et al. (2015) [23] concluded that LTC use is affected by country-specific eligibility criteria for public LTC coverage and by the comprehensiveness of public LTC systems. In the case of Spain, by the end of 2006, a new System for Promotion of Personal Autonomy and Assistance for Persons in a Situation of Dependency (SAAD) was released through the approval of Act 39/2006 on 14 December [24]. This Dependency Act (DA) recognized the universal entitlement of Spanish citizens to social care services according to their degree of limitation, entering as the fourth pillar of Spain's Welfare State [25], in addition to health, education, and pensions. In Spain, support and LTC for older people in need of personal care was traditionally organized within the family, mainly provided by women, and sometimes complemented by formal care [26,27]. Hence, one of the purposes of the DA was to reduce the burden on family members who undertake the role of primary caregiver, who additionally benefited from being registered within the Social Security System, recording their employment status as non-professional carers. Furthermore, the new system aimed to guarantee an adequate number of resources and services (i.e., prevention and promotion of personal autonomy, home help, day and residential care) to satisfy the growing demand for and use of long-term care due to population ageing [28]. Still, public bodies provided LTC services only in cases where household income was not enough to cover such needs and the older adult in need of care had a high degree of functional limitation [29].
Three levels of functional limitation (mild, moderate, severe) were defined by the DA, and older adults in need of personal care were classified according to an official scale [30,31] consisting of 47 tasks, later grouped into ten activities of daily living (feeding, control of physical needs, toileting, other physical care, dressing, maintaining one's health, mobility, moving inside and outside the household, and being able to do housework). According to the score obtained in those 47 domains, the severity of the functional limitations was classified as: not eligible (0-24 points); mild level 1 (25-39 points) and level 2 (40-49 points); moderate level 1 (50-64 points) and level 2 (65-74 points); and severe level 1 (75-89 points) and level 2 (90-100 points).
At the end of the year 2013, 1,644,284 applications were received. From these, around 60% (944,345 requests) were eligible, but only 753,842 were actually receiving their benefits by December 2013 [32]. Moreover, despite the fact that SAAD was designed to provide universal coverage to older people in need of personal care, when the SAAD was fully active in 2015, 33.7% of the financial contributions were supported through co-payments afforded by the individuals who benefited from the DA. Moreover, according to an assessment of the Act, 45.5% of the finally perceived benefits were economic (cash-for-care) for the informal care provided by any family member who acted as the main caregiver [33], being much more extensively employed than planned. Another issue that should be considered is the fact that the 2008 economic crisis added more uncertainty to the system process, mainly due to inequality in access to LTC services between regions [34].
Purpose
To the best of our knowledge, there is no study in the existing literature that assesses, through appropriate statistical approaches, the influence of the DA on household income and on the use of formal and informal care, while additionally evaluating the mediating effect that income might have had on such care reception. Hence, the aim of this study is to assess the impact that the Spanish System for Personal Autonomy and Dependency might have on these outcomes depending on the income-related determinant, according to several characteristics of the Spanish population. Our purpose is twofold: (i) to assess the association between being in need of personal care and household income, and (ii) to assess the relationship between the different functional limitation and income levels and the use of formal and informal care. The hypotheses we aim to assess are (i) that the implementation of the DA had a positive impact on household income, since one of the benefits considered in the law was cash benefits for those individuals who received non-professional care; and (ii) that there were inequalities according to income level, which also depended on the functional status of the older individuals.
Sample Data
The data come from the Survey of Health, Ageing, and Retirement in Europe (SHARE). SHARE is a longitudinal survey with information on more than 120,000 individuals aged 50 years old and above from 27 European countries plus Israel; more information on the data used in this study can be found in Börsch-Supan et al. (2013) [35]. The period of analysis covers 2004 (wave 1), 2006/2007 (wave 2), 2010 (wave 4), 2013 (wave 5), 2015 (wave 6), and 2017 (wave 7). Wave 3 is not included because of a change in the SHARELIFE questionnaire; hence, the information provided in Wave 3 was not useful for our analysis.
Given the aim of the study, we selected the individuals who reported to be living in Spain at the date of the interview with a minimum follow-up of three waves, which should be: the time before the DA (wave 1, year 2004), in the year of the introduction (wave 2, year 2006/2007), and after the DA (wave 4, year 2010; wave 5, year 2013; wave 6, year 2015; or wave 7, year 2017). Hence, after selecting the observations with information on at least three waves (two of them being wave 1 and 2, and then, at least, wave 4, 5, 6, or 7), and the individuals with non-missing values in any of the variables considered in our analysis, our sample further decreased to 4364 observations.
Dependent Variables
Two groups of dependent variables comprise the outcomes of the current study. The first outcome is household income, a continuous and self-reported variable referring to all annual income received by all the members of the same household. Household income is calculated as the sum of the individual income of the respondent (obtained from income from employment, self-employment, pensions, private regular transfers (i.e., alimony), and long-term care), plus the gross incomes of other household members and other benefits, income from capital assets (income from bank accounts, bonds, stocks or shares, and mutual funds), and the rent payments received, plus imputed rents.
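A minimal pandas sketch of how this aggregation could be coded is shown below; the column names are illustrative placeholders, not the actual SHARE variable names.

```python
import pandas as pd

# Illustrative component names only -- not the actual SHARE variable names.
RESPONDENT_COMPONENTS = ["employment", "self_employment", "pension",
                         "private_transfers", "long_term_care"]
HOUSEHOLD_COMPONENTS = ["other_members_gross_income", "other_benefits",
                        "capital_assets_income", "rent_received", "imputed_rent"]

def annual_household_income(df: pd.DataFrame) -> pd.Series:
    """Sum respondent-level and household-level income components per row."""
    return df[RESPONDENT_COMPONENTS + HOUSEHOLD_COMPONENTS].sum(axis=1)

# Toy example with one household:
toy = pd.DataFrame([{c: 1000.0 for c in RESPONDENT_COMPONENTS + HOUSEHOLD_COMPONENTS}])
print(annual_household_income(toy))  # 10000.0 for this toy row
```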
The second group of outcomes is formal and informal care use. For the former, information was taken on whether the individual had received professional help at home, as well as on nursing home use, either permanent or temporary, in the previous 12 months. Regarding professional help at home, the questionnaire contains information on whether the individual received professional help at home with various matters, such as meals on wheels or cooking. However, it should be noted that the question related to home care was excluded from the questionnaire of Wave 4; hence, the only measure of formal care available in Wave 4 is nursing home care, defined as "institutions sheltering older persons who need assistance in activities of daily living, in an environment where they can receive nursing care, for short or long stays". Thus, the dependent variable took the value 1 if the respondent made use of any of the professional services mentioned above, and 0 otherwise. For informal care, SHARE allows the identification of whether a non-professional caregiver, from inside or outside the household, helped the survey respondent because of any limitation in the activities of daily living during the previous 12 months.
Independent Variables
Being Identified as in Need for Personal Care According to the Dependency Act
As Table S1 (Supplementary Material) shows, the definition of "dependency" as an older adult in need of personal care in the Dependency Act was based on limitations in the basic and the instrumental activities of daily living. SHARE contains responses to the Katz Activities of Daily Living (ADL) Index [36,37]. This index, usually referred to as the Katz ADL, evaluates functional status as a measurement of a person's ability to carry out six activities of daily living independently: bathing, dressing, toileting, transferring, continence, and feeding. Moreover, SHARE also includes information on the number of limitations in the Instrumental Activities of Daily Living (IADL). This scale, usually referred to as Lawton's IADL scale, evaluates the individual's ability to perform eight instrumental activities of daily living [38]: telephone use, shopping, cooking, housekeeping, laundry, transportation, preparation of own medication, and managing finances. Considering the weight assigned to each activity and the different categories within each, we generated our dependency score, given the availability of questions in SHARE.
According to the score obtained following the weights and points in Table S1, which are derived from the Dependency Act classification of the individual's limitations in both ADLs and IADLs, the severity of the functional limitation was classified as: not eligible (0-24 points), mild (25-49 points), moderate (50-74 points), and severe (75-100 points).
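A minimal sketch of the mapping from the 0-100 dependency score to the severity categories used in the study is given below; the score itself is assumed to have been computed beforehand from the ADL/IADL weights in Table S1, which are not reproduced here.

```python
def dependency_level(score: float) -> str:
    """Map a 0-100 dependency score to the severity categories used above."""
    if score < 25:
        return "not eligible"
    if score < 50:
        return "mild"
    if score < 75:
        return "moderate"
    return "severe"

for s in (10, 30, 60, 90):
    print(s, dependency_level(s))  # not eligible, mild, moderate, severe
```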
Household Income Groups
In the second group of analyses, whose outcomes are formal and informal care reception, household income was divided into tertiles according to the distribution of the original household income variable, as follows: low household income (from €0 to €14,135.19 annually), medium household income (€14,138.43-€29,046.13 per year), and high household income (from €29,088.67 to €477,483.8 per year).
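A short pandas sketch of the tertile split is shown below; the variable name is illustrative, and in practice the cut points reported above come from the empirical distribution of household income.

```python
import pandas as pd

def income_group(household_income: pd.Series) -> pd.Series:
    """Split household income into tertiles labelled low/medium/high."""
    return pd.qcut(household_income, q=3, labels=["low", "medium", "high"])

# Toy example:
incomes = pd.Series([5000, 12000, 20000, 28000, 40000, 120000])
print(income_group(incomes).tolist())
```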
Other Independent Variables
There are three types of individual determinants of health and social care use: predisposing factors, which determine the individual's predisposition towards the use of resources, in this case long-term care; enabling factors, which refer to the resources available to satisfy a health need; and need factors, which represent the reason why the individual, given the above factors, requests health care [39].
With respect to health status, which would enter as a need factor, different variables entered the analysis: self-assessed health status [40,41], number of chronic conditions (denoting the sum of the following conditions: heart attack, high blood pressure or hypertension, high blood cholesterol, a stroke or cerebrovascular disease, diabetes or high blood sugar, chronic lung disease, cancer or malignant tumour, stomach or duodenal or peptic ulcer, Parkinson disease, cataracts, and hip or femoral fracture), and a dummy variable for depression.
Moreover, other variables entered as predisposing factors towards the use of formal and/or informal care. These were age, gender, level of education (no education, low, medium, and high, according to ISCED-97 codes), marital and employment status, number of children and grandchildren, whether any children lived in the household, and body mass index categories. Lastly, and only for the second aim of the analysis, formal or informal care use entered the analysis as an enabling factor, as appropriate, depending on the outcome assessed.
Analyzing Associations between Being in Need for Personal Care and Household Income
The marginal impact of being functionally limited and of the starting time of the Dependency Act, as well as of the other independent variables, on household income was estimated using a Generalized Linear Model (GLM), given the skewed distribution of income [42]. GLMs generalize the classical ordinary least squares (OLS) regression model by specifying the conditional mean function directly. Specifically, GLMs do not require transforming the outcome; instead, they assume a response distribution from the exponential family and a link function that relates the mean of the response to a scale on which the model effects combine additively [43]. According to the Modified Park Test, the chosen family for modeling household income in our analysis was the Gamma distribution, with an identity link.
We ran four regression models. In Model 1, we included wave dummies, the different categories of functional limitation level, and the interaction between these two categorical variables, as well as age, gender, education, and marital status. In a second regression model, we added employment status. The third regression model added living conditions, such as living in a rural area or the number of children and grandchildren, to Model 2. Finally, health status variables were introduced in a fourth regression model.
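A minimal statsmodels sketch of a Model 1-style specification (Gamma family, identity link) is given below on a synthetic toy panel; the variable names and simulated data are illustrative placeholders, not the SHARE data or the exact covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy data standing in for the SHARE sample (names and values are illustrative).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "household_income": rng.gamma(shape=2.0, scale=8000.0, size=n),
    "wave": rng.choice(["w1", "w2", "w4", "w5", "w6", "w7"], size=n),
    "limitation": rng.choice(["none", "mild", "moderate", "severe"], size=n),
    "age": rng.integers(50, 90, size=n),
    "female": rng.integers(0, 2, size=n),
})

# Model 1-style specification: wave and limitation dummies, their interaction,
# and basic sociodemographics; Gamma family with identity link as selected by
# the Modified Park test described in the text.
glm = smf.glm(
    "household_income ~ C(wave) * C(limitation) + age + female",
    data=df,
    family=sm.families.Gamma(link=sm.families.links.Identity()),
)
print(glm.fit().summary())
```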
Assessing the Impact of the Different Levels of Limitations and Household Income on the Reception of Formal and Informal Care
Modelling the probability of a positive outcome with a linear probability model (LPM) is a problematic issue. Instead, non-linear models for binary responses such as logit regressions with random effects were estimated [44][45][46]. We clustered standard errors at the individual level with the aim to correct for the existing correlation between individuals' different observations across waves.
In logit models, estimated coefficients capture the effects on the log-odds ratio (see, e.g., Heij et al., 2004 [44]). Let Λ(t) = e^t / (1 + e^t) be the logistic function, with values between zero and one, and let P(formal_it = 1 | x_it) = Λ(x_it'β + u_i), where i represents the individual, t the wave, and u_i the individual random effect. formal_it is a dummy variable indicating that respondent i received formal care in year t. x_it is a vector of explanatory variables which, in Model 1, includes time dummies, functional limitation levels, the different household income categories (low, medium, and high), sociodemographic indicators (age, gender, marital status, and education level), and a dummy variable indicating whether respondent i received informal care at time t. Model 2 adds employment status to Model 1. In Model 3, we additionally control for living conditions, consisting of living in a rural area, the number of children or grandchildren, and whether any of these children live within the household. In Model 4, variables related to health status (self-assessed health status, number of chronic conditions, and depression) and to healthy lifestyles (body mass index categories) are added.
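The exact random-effects logit estimator is not reproduced here; as one common approximation, the sketch below fits a pooled logit with standard errors clustered at the individual level (as described above) on a synthetic toy panel and exponentiates the coefficients to obtain odds ratios. Variable names and simulated values are illustrative placeholders, not the SHARE data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format panel standing in for the SHARE sample (names illustrative).
rng = np.random.default_rng(1)
n_id, n_wave = 300, 4
n = n_id * n_wave
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_id), n_wave),           # individual identifier
    "wave": np.tile(["w2", "w4", "w5", "w6"], n_id),
    "formal": rng.integers(0, 2, size=n),                 # formal care dummy
    "informal": rng.integers(0, 2, size=n),               # informal care dummy
    "limitation": rng.choice(["none", "mild", "moderate", "severe"], size=n),
    "income_group": rng.choice(["low", "medium", "high"], size=n),
    "age": rng.integers(50, 90, size=n),
})

# Pooled logit for formal care with individual-level clustered standard errors.
fit = smf.logit(
    "formal ~ C(wave) + C(limitation) * C(income_group) + informal + age",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["pid"]}, disp=False)

print(np.exp(fit.params))  # coefficients expressed as odds ratios
```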
The same procedure is followed for our second outcome of interest, receiving informal care, either within or outside the household. It should be noted that, when informal care was the outcome of interest, a dummy variable for formal care reception entered the regression instead. A coefficient is considered significant when it is statistically significant at least at the 5% level (95% confidence level).
Summary Statistics
Table 1 shows the summary statistics of the sample, by year, for the set of covariates included in the analysis. Descriptive variables were compared between waves through t-tests or Chi-square tests for continuous and categorical variables, respectively. Table 1 shows some differences in the sociodemographic characteristics and living conditions of the individuals. Mean household income decreased from approximately €24,330 in 2004 to €15,238 in 2015, when household income reached its minimum.
The proportion of people receiving formal care was lower in 2006/2007, when the DA was announced, than in the year 2004, but the proportion of formal care receivers increased in the following years after the implementation of the DA. However, the increase in the use of formal care services between years seems to be driven by the demand for homecare rather than nursing home care. On the other hand, the proportion of people receiving informal care (inside or outside the household) increased between years but decreased in the last year included in the analysis. The same trend was followed for both types of informal care, inside and outside the household.
With respect to functional and health status, individuals seem to be less healthy in later years than at the beginning of SHARE, as the proportion of people classified as having a "severe functional limitation" increased (from 0.21% in 2004 to 3.64% in 2017), while the percentage of individuals within the category "no limitation" decreased from 97.02% to 89.91%.
Analyzing Associations between Being in Need for Personal Care and Household Income
The results from the GLM regression with Gamma distribution and identity link on household income (Table 2) show that, in the baseline model (Model 1), functional limitation levels were not significantly associated with household income. Compared to the reference category (wave 2, years 2006/2007), living in wave 1 (year 2004) was significantly related to higher household income, by €2686. None of the interactions between the limitation levels and the waves emerged as significant predictors. In Model 2, when employment status categories were considered, the set of limitation categories and their interactions with the time dummies were not significant at 5% either. The coefficient for wave 1 was still significant and increased compared to Model 1, to €2857. When living conditions were included (Model 3), severe need for personal care, compared to being non-limited, was significantly related to lower household income, by €3314. Living in wave 1, compared to wave 2 (years 2006/2007), was associated with household income that was higher by €2275. As in Models 1 and 2, none of the interactions was significant. In Model 4, health variables entered the analysis. Being severely in need of personal care decreased household income by €3771 compared to no limitation. Being in wave 1 (year 2004) was still significantly related to higher household income, but with a lower coefficient (€2298). The interactions were not significant at 5%.
Assessing the Impact of the Different Levels of Functional Limitation and Household Income on the Reception of Formal and Informal Care
Table 3 shows the results from the logit regressions performed on formal care use. Compared to no limitation, all the functional limitation levels were significantly and positively related to formal care reception. For example, being moderately in need of personal care was significantly associated with formal care reception, with an odds ratio (OR) of 7.11 relative to people with no limitation. In the case of severe functional limitations, the odds ratio dropped to 4.02. Compared to wave 2 (years 2006/2007), living in wave 4 (year 2010, the first wave after the implementation of the DA at the end of 2006) was significantly related to formal care reception. Household income, either individually or jointly with the functional limitation levels, was not significantly associated with formal care reception. The odds ratios for the different levels of limitation decreased in Model 2, when employment status was included, compared to Model 1, but remained significant. Moderate need for personal care was significantly related to receiving formal care (OR 5.57), while severe functional limitation was no longer significantly associated with formal care. Being in wave 4 was also significantly related to formal care reception, but with a lower coefficient than in Model 1. When living conditions (Model 3) and health status (Model 4) were considered, being moderately in need of personal care was significantly associated with the odds of formal care use, with a larger coefficient than in the previous specifications (OR 9.11 in Model 3 and 9.06 in Model 4). Household income and its interaction with the limitation categories were never significant. On the contrary, receiving informal care was always significantly related to a higher probability of formal care use, pointing towards complementarity between both types of long-term care.
Table 4 displays the results from the logit regressions performed on informal care reception. Compared to non-limited people, all the limitation levels were significantly and positively related to informal care use, with the moderate need for personal care level having the greatest odds of using informal care (OR 75.05). Compared to wave 2 (years 2006/2007), living in wave 6 (year 2015) was significantly related to informal care reception (OR 0.55). Although also significant and with a negative relationship, wave 7 reported a lower odds ratio than wave 6. Low household income, compared to high household income, was significantly associated with the odds of informal care use (OR 1.26). One of the interactions between household income and functional limitations emerged as a significant predictor of informal care reception: being moderately in need of personal care and having low household income was significantly related to informal care reception (OR 0.11). The odds ratios for the limitation levels became lower in Model 2, when employment status was included, compared to Model 1, but remained significant. Moderate limitation was significantly associated with the odds of using informal care (OR 69.80), and severe functional limitation was significantly related to informal care reception (OR 26.48) compared with non-limited older adults. Low household income was no longer significant at 5%, nor was the interaction between moderate limitation and low household income.
When living conditions (Model 3) and health status (Model 4) were considered, all limitation levels were significantly associated with the probability of informal care, with the coefficient for moderate functional limitation increasing in Model 4 compared to the previous specifications (OR 72.37 for moderate need for personal care). The household income categories were not significant, but the interaction between moderate limitation and low household income was still significant (OR 0.083 in Model 4). Wave 6 was still significant and positively related to a higher probability of receiving informal care. The formal care reception variable was significant and positively related to a higher probability of informal care reception across regression models, confirming the complementarity between both kinds of long-term care.
Table notes: Clustered standard errors at the individual level in parentheses. *** p < 0.01, ** p < 0.05, * p < 0.1. Reference categories: wave 2 (years 2006/2007), not limited/not eligible for receiving benefits from the Dependency Act, medium household income, age 50 to 65, male, no education, married, retired, excellent self-perceived health status, and normal weight. In Model 1, wave dummies, the different categories of functional limitation and income levels, and the interaction (denoted by #) between those two latter categorical variables, as well as age, gender, education, and marital status were included. Model 2 additionally adjusted for employment status. Model 3 adds living conditions, such as living in a rural area or the number of children and grandchildren. Health status variables (self-perceived health status, number of chronic conditions, depression, and Body Mass Index categories) are introduced in a fourth regression model. Significant results are in bold.
Discussion
The aim of this research was to analyze the impact that the Spanish System for Personal Autonomy and Dependency might have on (i) household income, and (ii) on the use of formal and informal care, depending on the income-related determinant, according to several characteristics of the Spanish citizens.
Our findings suggest that the different functional limitation degrees are associated with lower household income only after adjusting for living conditions (Model 3) and health status (Model 4). The results showed that being in need of personal care was associated with a reduction in household income of €3300 to nearly €3800 per year for older adults who were severely in need of personal care. The rationale behind this result might be that more care-demanding households incur higher healthcare expenditures [47,48], which might even lead to catastrophic health spending [48]. Our insights are in line with the conclusions drawn by other authors in other European countries, such as Ireland [49] or the United Kingdom [50], who found that household income decreased steeply when living with a disabled person. However, the interaction between the functional limitation level and the time dummies was never significantly related to household income. Furthermore, the only time dummy that emerged as a significant factor associated with household income was the one referring to the period before the DA (wave 1, year 2004), which was associated with higher household income.
The mechanism behind the decrease in household income according to the need for personal care level might be explained by several issues: first, the fact that disability and dependency onset might lead to work incapacity for those not yet in the retirement age; second, higher demand and use of informal caregiving, implying that some relatives might reduce their working hours or even retire early; thirdly, increased costs of living due to a higher demand of health and formal paid care, which would increase the associated expenditure, reducing the available income for other goods. Furthermore, the new system aimed to guarantee an adequate amount of resources and services to satisfy the growing demand and use of long-term care [28]. Still, public bodies were limited to provide LTC services only in cases where household income was not enough to cover such needs and if the older adult in need of care had a high grade of functional limitations [29]. However, a recent study showed that only 10% of the informal care time provided by family caregivers was eventually covered by the government [51]. Our findings confirm that the moderate and severe functionally-limited older adults have a higher use of formal and informal care services, regardless of the household income group and the time period. In fact, we observe that moderate needs for personal care are associated with a higher probability of formal and informal care reception, compared to severe limitations, which would be consistent with previous estimates [10,21]. The different functional limitations levels were still significantly associated with both outcomes even after controlling for individual and household characteristics, which were assumed to be related to inequalities in long-term care access [21]. Another relevant result concerns the positive relationships between informal and formal care use, pointing towards a complementarity association between both LTC services, as was already found in the existing literature [13,14]. Still, differences might be present according to the need for personal care of the care receiver [13,18].
Moreover, if coefficients were comparable between formal and informal care, our estimates would suggest that the effect of functional limitations is higher on informal than on formal care, as the odds of using informal care for an individual with moderate needs for personal care are 72.37 times larger than the odds of using such care among people with no dependency. However, although statistically significant, the effect drops to an OR of 9.06 in the case of formal care reception. The vast impact of informal care among adults in need of personal care was indeed estimated to represent around 1.73-4.90% of the Spanish GDP, depending on the dependency level [52], which reflects the burden that care for functionally limited people places on society. Household income and its interaction with the limitation levels were barely significant across regression models and long-term care services, pointing towards the predictive power of the need for personal care itself.
In order to correct for structural changes that might have happened simultaneously with the implementation of the SAAD, especially the 2008 economic crisis and the consequent budget cuts, we included time dummies for the survey waves, taking wave 2 (years 2006/2007) as the reference. It should be noted that the recent financial crisis brought about important cuts in health and care services, such as long-term care, in addition to high unemployment rates (which reached 27% in Spain in 2014) and a higher risk of social exclusion [53]. Furthermore, after the worst years of the crisis (2009-2013), new regulations led to a substantial reduction in public expenditure and a greater promotion of co-payments. Cuts to home help services, in addition to a relevant delay in the evaluation of benefit applications under the DA, mainly those affecting moderate and mild levels of limitations, led to the existence of the so-called "dependency limbo" for those who were actually entitled to receive the benefits provided for by the DA but eventually received none [27].
Some limitations should also be mentioned. First of all, we point out the construction of the different levels of functional limitations. As Table S1 shows (Supplementary Material), SHARE does not include as many activities of daily living as the Dependency Act considered. However, the use of weights and of the information included in SHARE might have reduced such bias. Secondly, we were not able to analyze the effect in the first wave right after the declaration of the DA, as the data corresponding to 2008 (wave 3) refer to individuals' childhood conditions. Hence, the first observed time point after the DA is 2010, that is, four years after the DA, when the immediate effect might have smoothed out. We consider that having three points in time after 2006 provides consistent and trustworthy estimates. Thirdly, and regarding the second part of the analysis (limitations and household income on the use of formal and informal care), the results from 2010 (wave 4) for formal care should be interpreted with caution, as information on home care was excluded from the questionnaire of wave 4. Hence, the only measure of formal care available in wave 4 is nursing home care.
In recent years, there have been major advances in the development of empirical studies of public policy evaluation. This is motivated both by the greater availability of data at the microeconomic level (especially from surveys based on microdata) and by the latest computational advances. Still, despite the availability of large panel datasets, we were not able to fully assess the impact of the DA. The study of the impact of ageing and of individuals' limitations, as well as their redistributive incidence, is still an open research field with significant potential for advancement within Welfare and Health Economics, taking into account the wide diversity of diseases, many of them chronic. Hence, it would be desirable to collect detailed information on the eligible individuals and on those eventually obtaining the benefits. It is true that the Spanish Ministry of Health had a survey called EDAD on disability, personal autonomy, and dependency, but the last available data are from 2008. Future lines of research could entail collecting data from the actual applicants who were not eligible, the eligible ones, and the people who already received any benefit considered within the law.
Our results suggest that the introduction of the Dependency Act, instead of alleviating the burden assumed by informal caregivers in care provision, posed an even greater burden, heavily increasing informal care use, which was not at all paralleled by an increase in formal care availability. Moreover, the heavy caring load sustained by informal caregivers was not accompanied by the cash benefits initially promised within the new law [54], which were later significantly reduced. Hence, governments should take into account that although informal care promotion is tempting from a public policy perspective due to its free provision, the heavy burden borne by informal caregivers should not be neglected, as its impact on national expenditure is vast [52]. Policymakers should respond to the dramatic situations that informal caregivers might face, especially when they give up their jobs to fulfill their caregiving tasks, and design appropriate policies, additionally promoting the use of formal care services. Nevertheless, successful policies implemented to ensure fair access to affordable social care are complex to assess from a comparative view, and it is even more difficult to determine LTC public policy recommendations applicable to heterogeneous welfare models, so our approach is more suitable for the case of Spain.
The increase in the older population in Europe will continue in the coming years and will pose new challenges for the reorganization of both formal and informal care for functionally impaired older adults, as well as for access to better information on the factors that determine such care, so that social services can be coordinated efficiently and equitably.
Conclusions
This study shows that, although the evolution of chronic limitations in Spain depends on socioeconomic inequalities, there are other important directions for future research related to (i) how being limited in performing activities of daily living affects household, and not only individual, income; and (ii) how being functionally limited and household income are associated with the use of formal and informal care, especially after the introduction of laws that aim to cover the particularities of older adults in need of personal care, such as the Dependency Act in Spain. Higher levels of limitations are associated with large decreases in household income, compared to non-limited individuals, maybe due to higher long-term care needs, as our results also show. Taking into account the ageing demographic context that European societies are facing, our results point out the necessity to identify potentially vulnerable populations and to enhance the efficient planning of long-term care and social support services.
Funding: This research was funded by the Instituto de Estudios Fiscales, Spain, under the title: "Evaluación económica e impacto del envejecimiento poblacional y grado de limitaciones del individuo en el gasto sanitario: utilización e incidencia redistributiva" within the research line "Evaluación de políticas de gasto".
Institutional Review Board Statement: All procedures performed in studies involving human participants were in accordance with the ethical standards of the Ethics Council of the Max Planck Society and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.
Informed Consent Statement:
The SHARE study is subject to continuous ethics review. During Waves 1 to 4, SHARE was reviewed and approved by the Ethics Committee of the University of Mannheim. Wave 4 of SHARE and the continuation of the project were reviewed and approved by the Ethics Council of the Max Planck Society. More information is available on the SHARE website (www.shareproject.org (accessed on 24 April 2020)).
Conflicts of Interest:
The authors declare no conflict of interest. | 8,878.8 | 2021-04-01T00:00:00.000 | ["Sociology", "Economics"] |
Methods for enhancing the reproducibility of biomedical research findings using electronic health records
Background The ability of external investigators to reproduce published scientific findings is critical for the evaluation and validation of biomedical research by the wider community. However, a substantial proportion of health research using electronic health records (EHR), data collected and generated during clinical care, is potentially not reproducible mainly due to the fact that the implementation details of most data preprocessing, cleaning, phenotyping and analysis approaches are not systematically made available or shared. With the complexity, volume and variety of electronic health record data sources made available for research steadily increasing, it is critical to ensure that scientific findings from EHR data are reproducible and replicable by researchers. Reporting guidelines, such as RECORD and STROBE, have set a solid foundation by recommending a series of items for researchers to include in their research outputs. Researchers however often lack the technical tools and methodological approaches to actuate such recommendations in an efficient and sustainable manner. Results In this paper, we review and propose a series of methods and tools utilized in adjunct scientific disciplines that can be used to enhance the reproducibility of research using electronic health records and enable researchers to report analytical approaches in a transparent manner. Specifically, we discuss the adoption of scientific software engineering principles and best-practices such as test-driven development, source code revision control systems, literate programming and the standardization and re-use of common data management and analytical approaches. Conclusion The adoption of such approaches will enable scientists to systematically document and share EHR analytical workflows and increase the reproducibility of biomedical research using such complex data sources.
The replication of scientific findings using independent investigators, methods and data is the cornerstone of how published scientific claims are evaluated and validated by the wider scientific community [15][16][17]. Academic publications arguably have three main goals: a) to disseminate scientific findings, b) to persuade the community that the findings are robust and were achieved through rigorous scientific approaches and c) to provide a detailed description of the experimental approaches utilized. Peer-reviewed manuscripts should, in theory, describe the methods used by researchers in a sufficient level of detail as to enable other researchers to replicate the study.
A recent literature review [18] of research studies using national structured and linked UK EHR illustrated how a substantial proportion of studies potentially suffer from poor reproducibility: only 5.1% of studies published the entire set of controlled clinical terminology terms required to implement the EHR-derived phenotypes used. Similar patterns were discovered in a review of over 400 biomedical research studies with only a single study making a full protocol available [17]. With the volume and breadth of scientific output using EHR data steadily increasing [19], this nonreproducibility could potentially hinder the pace of translation of research findings.
No commonly agreed and accepted definition of reproducibility currently exists across scientific disciplines. Semantically similar concepts such as "replicability" and "duplication" are often used interchangeably [20]. In this work, we define reproducibility as the provision of sufficient methodological detail about a study so it could, in theory or in actuality, be exactly repeated by investigators. In the context of EHR research, this would involve the provision of a detailed (and ideally machine-readable) study protocol, information on the phenotyping algorithms used to define study exposures, outcomes, covariates and populations and a detailed description of the analytical and statistical methods used along with details on the software and the programming code. This in turn, will enable independent investigators to apply the same methods on a similar dataset and attempt to obtain consistent results (a process often referred to as results replicability).
Electronic health record analytical challenges
EHR data can broadly be classified as: a) structured (e.g. recorded using controlled clinical terminologies such as the International Classification of Diseases-10th revision (ICD-10) or the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT) [21]), b) semi-structured (e.g. laboratory results and prescription information that follow a loose pattern that varies across data sources), c) unstructured (e.g. clinical text [22]) and d) binary (e.g. medical imaging files). Despite the numerous advantages EHR data offer, researchers face significant challenges (Fig. 1).
A primary use-case of EHR data is to accurately extract phenotypic information (i.e. disease onset and progression), a process known as phenotyping, for use in observational and interventional research [5]. Phenotyping however is a challenging and time-consuming process as raw EHR data require a significant amount of preprocessing before they can be transformed into research-ready data for statistical analyses [23]. The context and purpose in which data get captured (e.g. clinical care, audit, billing, insurance claims), diagnostic granularity (e.g. post-coordination in SNOMED-CT vs. fixed-depth in ICD) and data quality vary across sources [24].
EHR data preprocessing, however, is not performed in a reproducible and systematic manner and, as a result, findings from research studies using EHR data potentially suffer from poor reproducibility. Phenotyping algorithms defining study exposures, covariates and clinical outcomes are not routinely provided in research publications or are provided as a monolithic list of diagnostic terms but often miss critical implementation information. For example, a phenotyping algorithm using diagnostic terms in hospital care should consider whether a term is marked as the primary cause of admission or not, but this important distinction is often omitted from manuscripts. Common data manipulations (Fig. 2) on EHR datasets are repeated ad nauseam by researchers but neither programmatic code nor data are systematically shared. Due to the lack of established processes for sharing and cross-validating algorithms, their robustness, generalizability and accuracy requires a significant amount of effort to assess [25]. In genomics for example, cross-referencing annotations of data produced by related technologies is deemed essential [26] (e.g. reference Single Nucleotide Polymorphism (SNP) id numbers, genome annotations), but such approaches are not widely adopted or used in biomedical research using EHR data.
Electronic health records research reproducibility
Significant progress has been achieved through the establishment of initiatives such as REporting of studies Conducted using Observational Routinely-collected Data (RECORD) [27,28] and the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) [29,30]. RECORD and STROBE are international guidelines for studies conducted using routinely-collected health data. The guidelines focus on the systematic reporting of implementation details along the EHR analytical pipeline: from study population definitions and data linkage to algorithm details for study exposures, covariates and clinical outcomes (Table 1). Often however, researchers lack the tools and methods to actuate the principles behind these guidelines and fail to integrate them into their analytical process from the start, but rather try to incorporate them before publication in an ad hoc fashion. This lack of familiarity with best practices around scientific software development tools and methods prevents researchers from creating, maintaining and sharing high-quality EHR analytical pipelines enabling other researchers to reproduce their research.
Fig. 2. A generic EHR analytical pipeline can generally be split into several smaller distinct stages which are often executed in an iterative fashion: 1) raw EHR data are pre-processed, linked and transformed into statistically-analyzable datasets; 2) data undergo statistical analyses; and 3) scientific findings are presented and disseminated in terms of data, figures, narrative and tables in scientific output.
We argue that EHR research can greatly benefit from adopting practices used in adjunct scientific disciplines such as computer science or computational biology (Table 1) in order to reduce the potential irreproducibility of research findings using such complex data sources.
Table 1. REporting of studies Conducted using Observational Routinely collected Data (RECORD) recommendations on reporting details around EHR algorithms used to define the study populations, exposures and outcomes.
RECORD guideline principle (id number) | Description
6.1 | The methods of study population selection (such as codes or algorithms used to identify subjects) should be listed in detail.
7.1 | A complete list of codes and algorithms used to classify exposures, outcomes, confounders, and effect modifiers should be provided.
13.1 | Describe in detail the selection of the persons included in the study (i.e., study population selection), including filtering based on data quality, data availability and linkage.
22.1 | Authors should provide information on how to access any supplemental information such as the study protocol, raw data or programming code.
In this manuscript, we review and identify a series of methods, tools and approaches used in adjacent quantitative disciplines and make a series of recommendations on how they can be applied in the context of biomedical research studies using EHR. These can be used to potentially address the problem of irreproducibility by enabling researchers to capture, document and publish their analytical pipelines. Where applicable, we give examples of the described methods and approaches in R. Adopting best-practices from scientific software development can enable researchers to produce code that is well-documented, robustly tested and uses standardized programming conventions, which in turn extend its maintainability. The primary audience of our work is the increasingly expanding cohort of health data scientists: researchers from a diverse set of scientific backgrounds (such as clinicians or statisticians) who have not been exposed to formal training in computer science or scientific software development but are increasingly required to create and use sophisticated tools to analyze the large and complex EHR datasets made available for research.
Methods and results
We searched published literature, gray literature and Internet resources for established approaches and methods used in computer science, biomedical informatics, bioinformatics, computational biology, biostatistics, and scientific software engineering. We evaluated and described the manner in which they can be used to facilitate reproducible research using EHR and address the core challenges associated with this process. Reproducibility has been identified as a key challenge and a core value of multiple adjunct scientific disciplines, e.g. computer science [31][32][33], bioinformatics [34], microbiome research [35], biostatistics [36], neuroimaging [37] and computational biology [38]. We identified and evaluated the following methods and approaches (Table 2): 1. Scientific software engineering principles: modular and object oriented programming, test-driven development, unit testing, source code revision control; 2. Scalable analytical approaches: standardized analytical methods, standardized phenotyping algorithms; and 3. Literate programming.
Scientific software engineering
The nature and complexity of EHR data often requires a unique and diverse set of skills spanning medical statistics, computer science and informatics, data visualization, and clinical sciences. Given this diversity, it is fair to assume that not all researchers processing and analysing EHR data have received formal training in scientific software development. For the majority of researchers, errors and poor practices can unconsciously creep into the developed code, which, if never made publicly available, will never be discovered, even though this code underpins most published scientific claims. No researcher wants to be put into the position of retracting their manuscript from a journal or having to contact a scientific consortium to ask that they repeat months of analyses due to an error discovered in their analytical code. While these issues are not unique to EHR research, they are amplified given its multidisciplinary nature. Virtual machines can potentially be used to encapsulate the data, operating system, analytical software and algorithms used to generate a manuscript and, where applicable, can be made available for others to reproduce the analytical pipeline.
Literate programming (Table 2): encapsulate both logic and programming code using literate programming approaches and tools, which ensure that the logic and the underlying processing code coexist.
There is a subtle but prevalent misconception that analytical code does not constitute software as it is written for a statistical package (e.g. R [39] or Stata [40]) and not in a formal programming language (e.g. Python [41] or Java [42]). As a result, the majority of researchers inadvertently fail to acknowledge or adopt best-practice principles around scientific software engineering. This could not be further from the truth as, by definition, code written for transforming raw EHR into research-ready datasets and undertaking statistical analyses is both complex and sophisticated due to the inherent complexity and heterogeneity of the data. While not directly a technical solution, helping scientists obtain up-to-date training in best practices through initiatives such as Software Carpentry [43] can potentially enable them to produce better quality code.
There is no optimal manner in which scientific software can be developed for tackling a particular research question, as this is intrinsically an extremely problem-dependent set of tasks. Adopting scientific software engineering best practices, however, can provide EHR researchers with the essential bedrock for producing, curating and sharing high-quality analytical code for re-use by the scientific community. In general, scientific code development can be divided into several different phases which are usually executed and evaluated in smaller, rapid iterations: planning, coding, testing, debugging, documentation and analysis. In each of these phases, a number of tools and methods exist that enable researchers to manage the provenance, readability and usability of their code. We review several of the most critical ones, modular programming, test-driven development and source code revision control, in the sections below.
Modular and object oriented programming
Adopting a modular programming approach will allow EHR researchers to organize their code more efficiently and subsequently enable its documentation and re-use, both by them and by other researchers. Modular programming is essentially a software design technique that emphasizes separating the functionality of a program into smaller, distinct, independent and interchangeable modules [44]. This translates to splitting code (that is often produced as a single, large, monolithic file) into several smaller modules that contain similar operations or concepts and that in turn can be stored as independent source code files. These modules can contain anything from a collection of functions to process a particular set of data fields (e.g. convert laboratory test measurements from one measurement unit to another) to entire modules dealing with the intricacies of one particular data source (e.g. extract and rank the causes of hospitalization from administrative data).
Common EHR data transformation or analysis operations (Table 3) can be created as functions which can then be shared across modules. Defining functions that can be repeated across different modules significantly reduces the complexity and increases the maintainability of code. The majority of software applications used in EHR research allow the sourcing of both external files and libraries. For example, in R, the source command sources an external R file and the library command loads an external library into the current namespace. Functions can be easily defined using the function command.
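As a rough illustration of such a modular layout, the sketch below splits hypothetical EHR helper functions into separate files that a main analysis script then sources; all file and function names are invented for this example and do not correspond to any code shared by the authors.

```r
# File: R/lab_units.R -- hypothetical module for laboratory unit handling
convert_units <- function(value, factor) {
  stopifnot(is.numeric(value), is.numeric(factor))
  value * factor
}

# File: R/admissions.R -- hypothetical module for administrative hospital data
rank_admission_causes <- function(admissions) {
  # order diagnoses so that the primary cause of admission comes first
  admissions[order(admissions$diagnosis_position), ]
}

# File: analysis/main.R -- the analysis script only wires the modules together
library(survival)            # load an external library into the namespace
source("R/lab_units.R")      # source the unit-conversion module
source("R/admissions.R")     # source the admissions module
```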
Adjacent to modular programming is the concept of object oriented programming (OOP) [45]. OOP is a software programming approach based on the concept of objects, which contain both data (attributes) and procedural code (methods) to work on the data and can interact with other objects. Formal definitions of objects (i.e. available attributes and methods) are provided by classes, and objects themselves are instances of classes. Central to OOP is the concept of encapsulation, which abstracts the data and methods of an object from other objects, which are only allowed to interact with them through a predefined template called an interface. Interfaces in OOP are a paradigm which allows the enforcement of certain predefined properties on a particular class object. The concept of inheritance allows objects to be organized in a hierarchical manner where each level defines a more specific object than the parent level and inherits all the attributes and methods of the parent level. Methods of classes can have a number of preconditions and postconditions defined, i.e. predicates that must always hold true just before or right after the execution of a piece of code, otherwise the execution is invalidated [46]. Finally, formal software design modelling languages, such as the Unified Modelling Language (UML) [47], can assist researchers in designing and visualizing complex software applications and architectures. Furthermore, modelling languages can be used as a common point of reference and communication across multidisciplinary EHR research groups as they provide non-technical, unambiguous graphical representations of complex approaches. An example of a very simple UML class diagram is provided in Fig. 3.
Table 3. Example of an R function for converting lipid measurements between mmol/L and mg/dL units. Function arguments (value and units) are validated prior to performing the calculation and an error is raised if incorrect or missing parameters are supplied.
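The function in Table 3 itself is not reproduced here; the snippet below is a minimal sketch of what such a validated conversion function could look like. The function name is invented, and 38.67 is the usual mmol/L to mg/dL factor for total cholesterol (other lipids use different factors), so this is illustrative only and not the authors' code.

```r
# Sketch of a lipid unit-conversion function with argument validation,
# in the spirit of Table 3 (illustrative only).
convert_lipid <- function(value, units = c("mmol/L", "mg/dL")) {
  units <- match.arg(units)   # rejects unsupported unit strings
  if (!is.numeric(value) || any(is.na(value))) {
    stop("'value' must be a numeric vector with no missing values")
  }
  factor <- 38.67             # assumed factor for total cholesterol
  if (units == "mmol/L") value * factor else value / factor
}

convert_lipid(5.2, "mmol/L")  # ~201 mg/dL
convert_lipid(200, "mg/dL")   # ~5.17 mmol/L
```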
Test-driven development and unit testing
Test-driven development (TDD) is a software development approach where automated tests are created prior to developing functional code [48]. In addition, TDD involves writing automated tests of a program's individual units, a process defined as unit testing [49]. A unit is the smallest logical component of a larger software application that can be tested. The majority of tools and languages used in EHR research have some mechanism to directly facilitate code testing. For example, in Stata code testing can be implemented using the testcase class in Mata, while in SAS [50] unit testing is facilitated through the FUTS [51] or SASUnit [52] libraries. TDD can enable complex EHR preprocessing and analytical code to be adequately and thoroughly tested iteratively over the lifetime of a research project.
Fig. 3. Simple example of a Unified Modelling Language (UML) class diagram. Class diagrams are static representations that describe the structure of a system by showing the system's classes and the relationships between them. Classes are represented as boxes with three sections: the first contains the name of the class, the second contains the attributes of the class and the third contains the methods. Class diagrams also illustrate the relationships (and their multiplicity) between different classes. In this instance, a patient can be assigned to a single ward within a hospital whereas a ward can have multiple patients admitted at any time (depicted as 1..*).
Several R libraries exist (e.g. testthat [53], RUnit [54] and svUnit [55]) that enable researchers to create and execute unit tests. RUnit and svUnit are R implementations of the widely used JUnit testing framework [56] and contain a set of functions that check calculations and error situations (Table 4). Such tests can then be integrated within a continuous integration framework [57], a software development technique that enables the automatic execution of all tests whenever an underlying change in the source code is made and thereby ensures that errors are detected earlier. More advanced methods, such as executing formal software verification methods [58] against a predefined specification, can be useful for larger and more complex projects.
Table 4. Using the RUnit library to perform unit tests for a function converting measurements of lipids from mmol/L to mg/dL.
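Table 4 is not reproduced here; as a rough sketch, equivalent unit tests could be written with the testthat package (rather than RUnit) against the hypothetical convert_lipid() function sketched above.

```r
library(testthat)

test_that("mmol/L values are converted to mg/dL", {
  expect_equal(convert_lipid(1, "mmol/L"), 38.67)
  expect_equal(convert_lipid(0, "mmol/L"), 0)
})

test_that("round-tripping a value returns the original", {
  expect_equal(convert_lipid(convert_lipid(5.2, "mmol/L"), "mg/dL"), 5.2)
})

test_that("invalid inputs raise an error", {
  expect_error(convert_lipid("five", "mmol/L"))   # non-numeric value
  expect_error(convert_lipid(5.2, "stones"))      # unsupported unit
})
```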
Source code revision control
EHR research invariably generates a substantial amount of programming code across the entire EHR pipeline. Even minor changes, accidental or intended (e.g. updating a disease exposure definition), in the code can have large consequences in findings. Given the collaborative and iterative nature of EHR research, it is essential for researchers to have the ability to track changes in disease or study population definitions over time and share the code used in a transparent manner. The standard solution for tracking the evolution of code over time is to use a version control system (Table 5) such as Git [59] or subversion [60]. Version control systems, widely used in software engineering, are applications that enable the structured tracking of changes to individual text-based files both over time and across multiple users [61]. Version control also enables branching: the duplication of the core code for the purpose of parallel development independent of the parent code-base. Branching enables the isolation of code for the purposes of altering or adding new functionality or implementing different approaches. In the case of phenotyping algorithms, several branches of the same project may contain alternate implementations of the algorithm with the same core features but slight variations. If only one approach is needed in the end, the relevant branch can then be merged back into the main working code-base (Fig. 4). The use of version control systems in EHR research can enable researchers to keep versioned implementations of exposure, outcomes and phenotype definitions and document the reasons for any changes over time. Computable versions of phenotyping algorithms [62] can also be stored within a version control system for the same reasons.
Phenotyping algorithms defining disease cases and controls are often developed iteratively and refined when new data become available or changes in the underlying healthcare process model cause the data generation or capture process to change. In the CALIBER EHR research platform for example [3], phenotyping algorithms and their associated metadata are stored and versioned in a private version control system. This includes the actual SQL code for querying the raw data, the implementation details and logic of the algorithm, the diagnostic terms and their relative position used and any other relevant metadata (such as author, date of creation, date of validation) in a bespoke text-based format. This enables researchers to keep track of changes of definitions at the desired time granularity and facilitates the collaborative creation of algorithms. The metadata and implementation details are then made available through the CALIBER Data Portal [63] for other researchers to download and use.
Standardized analytical methods
Scientific software is often at first developed behind closed doors and public release is only considered around or after the time of publication [64]. The standardization of common analytical approaches and data transformation operations in EHR research will potentially enable the reproducibility of scientific findings and fuel a sustainable community around the use of EHR data for research. Adjunct scientific disciplines have adopted this principle through the creation of large software libraries that contain a variety of common analytical approaches [65,66]. For example, Bioconductor [67] was established in 2001 as open source software for performing bioinformatics operations based on R. It serves as a common software platform that enables the development and implementation of open and accessible tools. Bioconductor promotes high quality documentation, and enables standard computing and statistical practices to produce reproducible research. The documentation across sections of each project is clear, accurate and appropriate for users with varying backgrounds in the programming languages and analytic methods used. There is particular emphasis on programming conventions, guidance, and version control, all of which greatly benefit from being performed in a standardized manner.
Table 5. Example of using git to initialize an empty repository and track changes in a versioned file defining a study cohort.
Fig. 4. Example of an algorithm managed by version control software. The master algorithm version is located on the main development line that is not on a branch, often called a trunk or master, and is shown in green. An individual refinement branch, currently being worked on without affecting the main version, is shown in green and is eventually merged with the main development version [95].
Adopting similar approaches in EHR research can arguably allow researchers to use and re-use standardized data cleaning, manipulation and analysis approaches. The reproducibility pipeline may require a more explicit structure where specific analytic workflows are tied to complete processes illustrating the decision tree from data preparation to data analysis. The ability to reproduce biomedical research findings depends on the interconnection of stages such as detailing the data generation processes, phenotyping definitions and justifications, different levels of data access where applicable, specifics on study design (e.g. matching procedures or sensitivity analyses) and the statistical methods used. Building generic and re-usable software libraries for EHR data is challenging due to the complexity and heterogeneity across data sources. While some libraries for manipulating and analysing EHR data exist [68], these are narrowly focused on specific data sources and are challenging to generalize across other sources, countries or healthcare systems. Building and curating software libraries following the best practices outlined in this manuscript and disseminating them with standard scientific output is recommended in order to grow and sustain a community of tools and methods that researchers can use. Examples such as Bioconductor can offer inspiration on how to build an active community around these libraries that will facilitate and accelerate their development, adoption and re-use by the community.
A key aspect of developing software tools for data processing is estimating the expected data growth and designing modules and tools accordingly to accommodate future increases in data. Given the steady increase in size and complexity of EHR data, workflow management systems used in bioinformatics such as Galaxy [69], Taverna [70], and Snakemake [71] can enable the development of scalable approaches and tools in EHR research. Workflow management systems enable researchers to break down larger monolithic tasks or experiments into a series of small, repeatable, well defined tasks, each with rigidly defined inputs, run-time parameters, and outputs. This allows researchers to identify which parts of the workflow are a bottleneck or, in some cases, which parts could benefit from parallelization to increase throughput. They also allow the integration of workload managers and complex queuing mechanisms that can also potentially lead to better management of resources and processing throughput [72]. Pipelines can be built to obtain snapshots of the data, validate them using a predefined set of rules or for consistency (e.g. against controlled clinical terminologies) and then transform them into research-ready datasets for statistical analysis. Such pipelines can then potentially be shared and distributed using container technologies such as Docker [73] or package managers like Conda [74]. Docker is an open-source platform that uses Linux Containers (LXC) to completely encapsulate and make software applications portable. Docker containers require substantially less computational resources than virtual machine-based solutions and allow users to execute applications in a fully virtualized environment using any Linux compatible language (e.g. R, Python, Matlab [75], Octave [76]). Docker libraries can be exported, versioned and archived, thus ensuring that the programmatic environment will be identical across compatible platforms.
Limited backwards compatibility can often hinder the reproducibility of previous scientific findings. For example, newer versions of a statistical analysis tool might not directly support older versions of their proprietary data storage format or an analytical tool might be compiled using a library that is no longer available in newer versions of an operating system. These issues can potentially be mitigated through the use of virtualization [77]. Virtual machines (e.g. Oracle VirtualBox [78], VMware [79]) are essentially containers that can encapsulate a snapshot of the OS, data and analytical pipelines in a single binary file. This can be done irrespective of the "host" operating system that is being used and these binary files are compatible across other operating systems. Other researchers can then use these binary files to replicate the analytical pipeline used for the reported analysis.
Standardized phenotyping algorithms
No widely accepted approach currently exists for storing the implementation and logic behind EHR phenotyping algorithms in a machine-readable and transportable format. The translation from algorithm logic to programming code is performed manually, and as a result, is prone to errors due to the complexity of the data and potential ambiguity of algorithms [25]. In their work, Mo and colleagues describe the ideal characteristics of such a format such as the ability to support logical and temporal rules, relational algebra operations, integrate with external standardized terminologies and provide a mechanism for backwards-compatibility [80]. The creation and adoption of computational representations of phenotyping algorithms will enable researchers to define and share EHR algorithms defining exposures, covariates and clinical outcomes and share them in a standardized manner. Furthermore, machine-readable representations of EHR phenotyping algorithms will enable their integration with analytical pipelines and will benefit from many of the approaches outlined in this manuscript such as version control, workflow systems and standardized analytical libraries [62,81]. Finally, algorithm implementations can also be uploaded in open-access repositories [82] or software journals [83] where they could be assigned a unique Digital Object Identifier (DOI) and become citeable and cross-referenced in scientific output.
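Purely as an illustrative sketch, and not an established standard or the format described by Mo and colleagues, a phenotype definition could be captured as a structured R object and serialized to JSON so that it can be versioned, shared and cited; the codes, field names and rules below are examples only.

```r
# Illustrative only: a machine-readable phenotype definition as an R structure.
mi_phenotype <- list(
  name      = "acute_myocardial_infarction",
  version   = "1.0.0",
  codes     = list(icd10 = c("I21", "I22"), snomedct = "22298006"),
  rules     = "primary diagnosis position only; first event per patient",
  validated = FALSE
)

# Serialize to JSON for sharing alongside the analytical pipeline.
jsonlite::toJSON(mi_phenotype, pretty = TRUE, auto_unbox = TRUE)
```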
Literate programming
Publishing study data online or in secure repositories alongside the code to preprocess and analyse it may be possible in some biomedical research domains, but is typically not an option for EHR research given the strict information governance restrictions and legal frameworks researchers operate under. Additionally, EHR sources typically contain the entire patient history and all their interactions with health care settings but only a subset of the original data is used and it is therefore equally important to document and disseminate the process of data extraction as well as the post-processing and analysis.
Extracting the appropriate dataset for research involves specifying lists of relevant controlled clinical terminology terms, timing windows for study population and patient phenotypes, and eligibility criteria. The work of defining the extraction criteria is usually performed by data managers in conjunction with domain experts such as clinicians. The algorithm parameters and implementation details are subsequently converted to machine instructions (e.g. SQL), executed and resulting data are usually exported from a relational database. Although the rationale behind the extraction process and the machine readable code should bear equivalent information content, it is extremely difficult for a human reader to understand the underlying logic and assumptions by reading the code itself [80]. It is also very challenging to fully reproduce the extraction using only the human readable instructions of the agreed protocol given the ambiguity of algorithms and the complexity of the data. Similar challenges exist for the preprocessing of EHR data, such as the definition of new covariates and clinical outcomes, as well as for the analysis and post-processing (such as plotting) of the extracted dataset and results.
A simple and time-honoured solution to this challenge is the provision of documentation alongside the code used for the extraction/preprocessing/analysis of EHR data. This approach however is often problematic as documentation can often be out-of-date with regards to the code, might be incomplete as it is often written after the analysis is finished and, for large projects, linking the correct pieces of documentation to the specific locations in the code can be cumbersome. A potential approach to solving this challenge is literate programming. The concept of literate programming was introduced by Donald E Knuth [84] and is not limited to a specific analytical tool or programming language. Literate programming is the technique of writing out program logic in a human language with included (separated by a primitive markup) code snippets and macros. In practice, both the rationale behind the data processing pipeline as well as the processing code itself are authored by the user using an appropriate integrated development environment (IDE). The resulting plain text document subsequently undergoes two processes, the first in which the code itself is executed (often referred to as tangling) and one in which the formatted documentation is produced (often referred to as weaving). The result is a well formatted rich text document, for example Hypertext Markup Language (HTML), which can often include the output of the executed code (e.g. plots, summary tables, analysis results) alongside the original code snippets and documentation (Fig. 5).
The most popular modern day literate programming tool in R is roxygen [85], while popular report generation packages are Knitr [86] and Sweave [87]. Another widely used tool is Jupyter [88], which is often used with (but not limited to) Python. For example, Johnson et al. published all code necessary to reproduce the data description of the MIMIC-III [89] database in the form of Jupyter notebooks on a GitHub repository [90]. The use of Jupyter notebooks was encouraged, and a specially developed software platform that integrated with notebooks was provided for a datathon using the MIMIC-III database, rendering all resulting analyses fully reproducible [91]. Most EHR research analytical tools support literate programming (Table 6; for example, SAS is supported through SASWeave [97] and StatRep), and its use can greatly facilitate closing the gap between code and narrative.
Fig. 5. Example of using the Knitr R package to produce a dynamic report with embedded R code and results, including a plot. Documentation and data processing code chunks are written in plain text in a file that is processed as RMarkdown. At the top of the file, a series of key: value pair statements in YAML set document metadata such as the title and the output format. Code chunks are enclosed between ``` characters and executed when the document is compiled. Parameters such as echo and display can be set to specify whether the results of executing the code and/or the actual code itself are displayed. Example taken from http://jupyter.org/.
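A minimal RMarkdown document in the spirit of Fig. 5 might look as follows; the title, chunk name and toy data are invented for illustration, and knitting the file would embed the summary and the plot in the rendered report.

````markdown
---
title: "Cohort description"
output: html_document
---

The code chunk below is executed when the document is knitted and its
output (a summary table and a plot) is embedded in the report.

```{r cohort-summary, echo=TRUE}
cohort <- data.frame(age = c(67, 72, 81), sbp = c(128, 141, 133))
summary(cohort)
plot(cohort$age, cohort$sbp, xlab = "Age (years)", ylab = "Systolic BP (mmHg)")
```
````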
Taking the literate programming paradigm even further, compendia [92] are containers for distributing and disseminating the different elements that comprise a piece of computational research. These elements are also fundamental in the concept of literate programming; however, in the case of compendia, the data are also contained in the output [93].
Conclusion
The challenge of reproducibility in science has been widely recognized and discussed [94]. Scientists using EHR data for biomedical research face a number of significant challenges which are further amplified by the complexity and heterogeneity of the data sources used and the cross-disciplinarity of the field. It is crucial for researchers to adopt best-practices across disciplines in order to enable the reproducibility of research findings using such data. In this manuscript we identify and present a set of principles, methods and tools from adjunct scientific disciplines that can be utilized to enable reproducible and transparent biomedical research using EHR. Enabling reproducible research using EHR is an ongoing process that will greatly benefit the scientific and wider community. | 7,684.6 | 2017-09-11T00:00:00.000 | ["Medicine", "Computer Science"] |
A global land cover map produced through integrating multi-source datasets
ABSTRACT In the past decades, global land cover datasets have been produced but have also been criticized for their low accuracies, which have limited the applications of these datasets. Producing a new global dataset requires a tremendous amount of effort; however, it is also possible to improve the accuracy of global land cover mapping by fusing the existing datasets. A decision-fusion method was developed based on fuzzy logic to quantify the consistencies and uncertainties of the existing datasets, which were then aggregated to provide the most certain estimation. The method was applied to produce a 1-km global land cover map (SYNLCover) by integrating five global land cover datasets and three global datasets of tree cover and croplands. Efforts were carried out to assess the quality: 1) inter-comparison of the datasets revealed that the SYNLCover dataset had higher consistency than the input global land cover datasets, suggesting that the data fusion method reduced the disagreement among the input datasets; 2) quality assessment using the human-interpreted reference dataset reported the highest accuracy for the fused SYNLCover dataset, which had an overall accuracy of 71.1%, in contrast to overall accuracies between 48.6% and 68.9% for the other global land cover datasets.
However, it is always expensive to improve the quality of the global land cover datasets. Another option is to quantify the uncertainty associated with each individual land cover dataset and develop a harmonized land cover product from these existing datasets. Initiatives like Global Observation of Forest and Land Cover Dynamics (GOFC-GOLD), in conjunction with the Food and Agricultural Organization (FAO) and the Global Terrestrial Observing System (GTOS), have fostered harmonization and strategies for interoperability and synergy of all existing and upcoming global land cover maps. See and Steffen (2006) presented a methodology based on fuzzy logic to generate an improved hybrid land cover map for Northern Europe by taking the individual similarities and differences of only two global land cover maps into account. Jung et al. (2006) defined a new land cover classification scheme and used a fuzzy lookup table to integrate existing maps to generate a new global land cover map (SYNMAP). Iwao et al. (2011) created a global land cover map by integrating three land cover maps based on the principle that the majority view prevails, but its accuracy was not significantly higher than that of the original maps. Ran et al. (2010) developed a new land cover map using multi-source information based on the Dempster-Shafer evidence theory, but it was limited to China. Schepaschenko et al. (2015) developed a global forest mask through the synergy of remote sensing, crowdsourcing and FAO statistics.
Here we propose a method of fusing multi-source land cover information for land cover datasets. The approach integrates not only land cover datasets but also datasets representing the quantitative attributes of specific land cover types. A new global land cover map was produced by applying the method to fuse existing land cover datasets and global tree-cover and crop-cover datasets. Quality of the new datasets were assessed by examining the consistency among the land cover datasets and the accuracy evaluated using human-interpreted points in China.
Data
Eight widely used and publicly available global datasets (Table 1) were selected to produce the fused land cover dataset. These datasets were all produced using coarse resolution (250 m~1 km) satellite imagery, e.g. AVHRR, MODIS, SPOT-4 and MERIS.
Global land cover maps
The majority (5 out of 8) of the selected datasets are land cover maps, describing the distribution of cover types over the global land surface. The datasets and their classifications are described below: (1) Global Land Cover Characterization (GLCC), produced by the United States Geological Survey (USGS), provides land cover in several classification schemes; the GLCC layer in the International Geosphere-Biosphere Programme (IGBP) classification (17 classes) was adopted in the analysis to better match the other selected land cover datasets (Loveland et al., 1999).
(2) The University of Maryland land cover product (UMD) is one of the earliest global land cover datasets; it provides global land cover with a simplified version of the IGBP classification system, which has 14 classes. (3) The Moderate Resolution Imaging Spectroradiometer annual land cover product (MODIS LC) provides global maps of land cover at annual time steps and 500-m spatial resolution for 2001-present (Friedl et al., 2010; Sulla-Menashe & Friedl, 2018). It is one of the standard MODIS data products (Justice et al., 2002) and supports multiple land cover classification systems; the dataset with the IGBP classification was selected for the analysis. (4) Global Land Cover 2000 (GLC2000) was produced by the European Commission's Joint Research Centre (EC-JRC) to provide regional land cover maps for each continent with a flexible classification system based on the Land Cover Classification System (LCCS) developed by FAO and UNEP (Di Gregorio & Jansen, 2005). The global map was created by combining those regional land cover maps and converting the LCCS codes to a less thematically detailed LCCS classification (Bartholomé & Belward, 2005). (5) The GLOBCOVER Land Cover Product (GlobCover) is a global land cover dataset produced by the European Space Agency (ESA) (Arino et al., 2007; Bicheron et al., 2011). It was produced using Level 1B data from the ENVISAT satellite mission's MERIS (Medium Resolution Image Spectrometer) sensor with a spatial resolution of 300 m. The GlobCover products include a map of global land cover for 2005-2006 and another map for 2009. The two maps adopted the FAO LCCS classification system, which has 22 classes, allowing change analysis between the two periods represented. Efforts have been carried out to validate these global land cover maps. Most of the datasets (i.e. GLCC, UMD, GLC2000 and GlobCover) were validated using samples collected through a designed sampling method and visually interpreted after examining corresponding higher-resolution satellite images, i.e. Landsat TM (Thematic Mapper), SPOT (Système Probatoire d'Observation de la Terre), MERIS (Medium Resolution Imaging Spectrometer Instrument) and Google Maps (DeFries, Hansen, Townshend, & Sohlberg, 1998; Friedl et al., 2010; Mayaux et al., 2006; Scepan, Menz, & Hansen, 1999). The MODIS LC dataset was validated based on a cross-validation using subsets of the training data that had not been used for training (Friedl et al., 2010). The reported overall area-weighted accuracies were 66.9% for GLCC (Scepan et al., 1999), 69% for UMD (DeFries et al., 1998), 75% for MODIS LC (Friedl et al., 2010), 68.6 ± 5% for GLC2000 (Mayaux et al., 2006) and 67.1% for GlobCover (Mayaux et al., 2006). However, since different approaches and reference databases were used in the evaluations, the reported accuracies are not comparable and should not be considered truly robust quantitative estimates (Jung et al., 2006).
Other global datasets
In addition to the land cover datasets, 3 global datasets were selected for the analysis: (1) MODIS Vegetation Continuous Fields (VCF) (MOD44B), which is a collection of annual estimates of several continuous vegetation measurements at 250 m resolution. It provides a global representation of the Earth's surface as gradations of three components, i.e. tree cover, non-tree vegetation and bare land (Hansen et al., 2003, 2011; Townshend et al., 2011). The percent tree cover estimate is used in the analysis. The MODIS VCF Collection 5 was downloaded from the Global Land Cover Facility (GLCF) at the University of Maryland (http://glcf.umd.edu/data/vcf/). (2) The MODIS cropland extent dataset provides estimates of the probability of cropland at 250 m resolution and includes two layers, i.e. cropland probability and a cropland/non-cropland mask (Pittman, Hansen, Becker-Reshef, Potapov, & Justice, 2010). The MODIS cropland probability layer was derived from a set of multi-year MODIS metrics incorporating 4 MODIS land bands, NDVI and thermal data, as well as training data such as FAO AfriCover and the United States National Land Cover Database (NLCD), using the classification tree method provided by the S-Plus statistical package (Pittman et al., 2010). The probability product was then thresholded to create a discrete cropland/non-cropland indicator map (MODIS Cropland/Non-Cropland), using data from the US Department of Agriculture-Foreign Agricultural Service (USDA-FAS) Production, Supply and Distribution (PSD) database describing per-country acreage of production field crops. MODIS cropland extent probability/mask products over the period 2000-2008 were downloaded from South Dakota State University (SDSU) (http://globalmonitoring.sdstate.edu/projects/croplands/globalindex.html).
(3) The AVHRR CFTC (Continuous Fields of Tree Cover) product, which provides estimates of leaf type/leaf longevity for tree classes at 1 km resolution. It was derived from monthly Advanced Very High Resolution Radiometer (AVHRR) NDVI composites over the 1992-1993 period using a spectral unmixing method. The dataset provides a fractional estimate of leaf attributes for each pixel, including two layers representing leaf type (broadleaf/needleleaf) and leaf longevity (evergreen/deciduous) separately, while each pair sums up to the percentage tree cover. AVHRR CFTC products at 1 km resolution were downloaded from the GLCF (http://glcf.umd.edu/data/treecover/).
Methodology
The global datasets were released in different formats, structures, spatiotemporal reference systems and semantic classifications. To facilitate data fusion and comparison, the selected global datasets were processed to a uniform geospatial reference system (section 3.1) and translated to comparable semantic variables (section 3.2). Further methods are applied for the integration of the multi-source datasets (section 3.3) and the evaluation of the fused dataset (section 3.4).
Geographic reference system
The MODIS Sinusoidal projection (Seong, Mulcahy, & Usery, 2002) was chosen as the base geographic reference system. The spatial extent is between 180°W~180°E and 55°S~90°N, with a spatial resolution of 1 km. Firstly, the datasets were reprojected to the base spatial reference system at a resolution of 250 m using nearest neighbor resampling. The finer 250 m resolution was selected to reduce precision loss during reprojection. The reprojected data were then aggregated to 1 km resolution by selecting the most dominant land cover type within the extent of each 1 km pixel. The GeoTIFF format was adopted for storing the output layers.
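The paper does not state which software performed these steps; purely as an illustration, the equivalent operations could be expressed with the terra R package (the file names, the projection string and the aggregation factor below are assumptions based on the description above, not the authors' actual tooling).

```r
library(terra)

# Illustrative sketch of the preprocessing described above: reproject a
# categorical land cover layer to a sinusoidal grid at 250 m with
# nearest-neighbour resampling, then aggregate to 1 km by taking the most
# frequent (modal) class within each block of pixels.
sinu   <- "+proj=sinu +lon_0=0 +x_0=0 +y_0=0 +R=6371007.181 +units=m"

lc     <- rast("glc2000.tif")                          # hypothetical input file
lc_250 <- project(lc, sinu, method = "near", res = 250)
lc_1km <- aggregate(lc_250, fact = 4, fun = "modal")   # 4 x 250 m = 1 km
writeRaster(lc_1km, "glc2000_sinusoidal_1km.tif", overwrite = TRUE)
```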
Translating semantic definitions
3.2.1. Classification scheme A land cover classification scheme is defined as the target for translating the classes of the input land cover datasets. In order to coordinate the existing land cover types, the target classification scheme was defined based on the classification parameters for Plant Functional Types (PFTs) (Diaz & Cabido, 1997; Milchunas & Lauenroth, 1993), i.e. the occurrence of life forms and leaf attributes (leaf type/leaf longevity), which are common classifiers for land cover classification (Neumann et al., 2007). Twelve major classes were defined in the target classification scheme (Table 2), including 8 life form categories and 9 classes with leaf type and longevity associated with "Trees".
Affinity scores
An affinity score measures the fuzzy relationship between a land cover class in an input dataset and its corresponding classes in the target land cover classification scheme. The scores are assigned to the metrics of life form, leaf type and leaf longevity separately. In addition to the land cover datasets, the MODIS VCF and Cropland Probability layers also contribute additional information on the tree and cropland classes. The affinity scores for "Trees" and "Cropland" were therefore assigned using different rules than those used for the other 6 life forms and the leaf attributes.
Taking "Tree" as an example, the affinity score for a land cover class in the input datasets is assigned a score between 0 and 100 in terms of the percent canopy cover and sematic definition of the class (see Table 3). For example, assuming C is a class in an input land cover dataset: (1) If the class C matches "Trees" semantically, the score is assigned as the median of canopy cover for C, otherwise the score is assigned 0 if C and "Trees" are independent from each other. For example, the percent tree cover for 'evergreen needle leaf forest' in GLCC is >60%, the affinity score of tree cover for this class is set to 80.
(2) If the class C is defined as a mosaic type of forest and other vegetation types, and its percent canopy cover is >15%, then the affinity score between C and "Trees" is assigned a value between the minimum and the median of the canopy cover, chosen flexibly using expert knowledge according to the forest percentage of the mosaic class and its semantic relation with "Trees". Otherwise, if the defined percent canopy cover of C is <10%, then the affinity score between C and "Trees" is set to 0. For instance, the percent canopy cover for "mosaic: cropland/tree cover/other natural vegetation" in GLC2000 is 15-100%, so the affinity score between this class and "Trees" is assigned 35.
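Rules (1) and (2) can be summarised in a short sketch. This is only an illustration: the function name, the fallback mid-point, and the handling of the 10-15% canopy-cover gap are assumptions, since the paper resolves such cases with expert knowledge and Table 3.

```python
def tree_affinity_score(canopy_min, canopy_max, relation, expert_value=None):
    """Assign an affinity score (0-100) between an input class and "Trees".

    relation: 'match'  - the class is semantically "Trees"
              'mosaic' - the class mixes forest with other vegetation
              'none'   - the class is independent of "Trees"
    expert_value: optional expert-chosen score for mosaic classes,
                  constrained between the minimum and the median canopy cover.
    """
    median = (canopy_min + canopy_max) / 2
    if relation == "match":
        return median                        # e.g. 60-100% forest -> 80
    if relation == "mosaic" and canopy_max > 15:
        if expert_value is not None:
            return max(canopy_min, min(expert_value, median))
        return (canopy_min + median) / 2     # fallback mid-point, an assumption
    return 0                                 # independent class or <10% canopy cover

print(tree_affinity_score(60, 100, "match"))                     # 80.0
print(tree_affinity_score(15, 100, "mosaic", expert_value=35))   # 35
```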
According to the five semantic rules shown in Tables 4 and 5, the affinity score for each input class is assigned a value between 0 and 100 to represent its likelihood of belonging to cropland or to the other classes.
All affinity scores between the source input class and target class are shown in Appendix 1.
Data integration
The fused land cover dataset is produced by integrating the input global datasets following four key steps (Figure 1): (1) A fused layer of Trees and Non-Trees classes is created by combining the tree cover layer from MODIS VCF with the tree cover scores produced by applying the "Trees" affinity scores to the five original global land cover datasets.
(2) If a location is identified as highly probable "Trees" at the previous step, the final forest class with leaf type and leaf longevity in SYNLCover is estimated by additionally combining the AVHRR CFTC information.
(3) Otherwise, the location is considered Non-Trees, and its likelihood of "Cropland" is investigated by combining the MODIS Cropland/Non-Cropland layer with the crop scores estimated by applying the "Cropland" affinity scores to the five global land cover datasets. (4) Locations with a low likelihood of "Cropland" are further investigated by examining the affinity scores of the other six life forms calculated from the input global land cover datasets.
The two life form classes "Trees" and "Cropland" in the fused dataset are determined according to Equation (1), which calculates the mean score for each life form (Lf) for the grid cell with coordinates i and j of SYNLCover:

S^Lf_Mean(i, j) = (1/5) Σ_M S^Lf_M(i, j)    (1)

where S^Lf_Mean(i, j) is the mean score for "Trees" or "Cropland" of SYNLCover, and S^Lf_M(i, j) is the affinity score for "Trees" or "Cropland" in the pixel (i, j) of the input global dataset M (Appendix 1.1). Table 5 gives a definition example of affinity scores for input classes and target classes that are neither "Trees" nor "Cropland". The choice among the other six life forms and the leaf attributes is made according to Equations (2) and (3), respectively, which calculate the total score for the other life forms (OLf) and the leaf attributes (LA) for the grid cell with coordinates i and j of SYNLCover:

S^OLf_Total(i, j) = Σ_M S^OLf_M(i, j)    (2)
S^LA_Total(i, j) = Σ_N S^LA_N(i, j)    (3)

where S^OLf_Total(i, j) is the total score for a life form other than "Trees" and "Cropland" (OLf) of SYNLCover; S^LA_Total(i, j) is the total score for the leaf attributes of "Trees" in SYNLCover; S^OLf_M(i, j) is the affinity score for OLf in the pixel (i, j) of the input global land cover dataset M (Appendix 1.1); S^LA_N(i, j) is the affinity score for the leaf attributes in the pixel (i, j) of the input dataset N (Appendix 1.2); OLf is each life form of SYNLCover except "Trees" and "Cropland" (Table 2); LA is the leaf attributes, including leaf type and leaf longevity, of SYNLCover; M denotes the five global land cover datasets, N denotes the five input land cover datasets plus the AVHRR CFTC; and i and j are the current row and column of the pixel, respectively. The life form OLf and the leaf attributes LA with the maximum total scores S^OLf_Total(i, j) and S^LA_Total(i, j) are chosen as the best estimates of the other life form and the leaf attributes in pixel (i, j) of SYNLCover, respectively. A calculation example for the estimation of other life forms is illustrated in Table 6; the life form class with the highest score wins, here "Grassland".
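A hypothetical per-pixel decision cascade combining Equations (1)-(3) with the integration steps above might look as follows. The threshold values, the way the VCF layer is blended into the tree score, and the variable names are illustrative assumptions; the actual combination also involves the MODIS Cropland/Non-Cropland mask and the decision matrix of Table 7.

```python
import numpy as np

def fuse_pixel(tree_scores, crop_scores, other_scores, leaf_scores,
               vcf_tree_cover, tree_threshold=50, crop_threshold=50):
    """Decide the SYNLCover life form for one pixel.

    tree_scores / crop_scores : affinity scores from the five land cover maps
    other_scores              : {life_form: [scores per map]} for the other six life forms
    leaf_scores               : {leaf_attribute: [scores per input]} for tree pixels
    vcf_tree_cover            : percent tree cover from MODIS VCF
    """
    # Equation (1): mean "Trees" score over the five input maps, here blended
    # with the continuous VCF layer as an extra vote (an assumption).
    s_tree = np.mean(list(tree_scores) + [vcf_tree_cover])
    if s_tree >= tree_threshold:
        # Equation (3): leaf attributes of the winning "Trees" pixel
        leaf = max(leaf_scores, key=lambda a: sum(leaf_scores[a]))
        return ("Trees", leaf)

    s_crop = np.mean(crop_scores)
    if s_crop >= crop_threshold:
        return ("Cropland", None)

    # Equation (2): total score of the remaining six life forms; highest wins.
    winner = max(other_scores, key=lambda lf: sum(other_scores[lf]))
    return (winner, None)

print(fuse_pixel([80, 70, 0, 60, 75], [10, 0, 5, 0, 0],
                 {"Grassland": [20, 30], "Shrubland": [10, 5]},
                 {"broadleaf evergreen": [60, 70], "needleleaf deciduous": [20, 10]},
                 vcf_tree_cover=65))
```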
In case two or more life forms other than "Trees" and "Cropland" obtain the same maximum total score, the winning life form class is chosen at random. If more than one leaf attribute receives the same maximum score, a decision matrix shown in Table 7 defines the winning leaf attributes. However, if the maximum score for the leaf attributes is 0, both leaf type and leaf longevity are set to "Mixed". This compromise introduces uncertainty, which is fortunately small, since this case is very rare and applies only to the "Trees" class, so that only part of the leaf attributes of "Trees" is biased.
Quality assessment
The quality of the fused land cover dataset and of the global land cover datasets was assessed using two methods: 1) inter-comparison to evaluate their consistency, and 2) validation of the datasets using human-interpreted points in China.
Consistency analysis
The five global land cover datasets are translated to the life form and target class schemes to allow comparison (Appendix 2). The fused SYNLCover is compared to the global land cover datasets by calculating pixel-based confusion matrices to evaluate its consistency with these global datasets. From the confusion matrices, the mean overall consistency is estimated by averaging the overall accuracies of the pairwise comparisons, providing a general measure of consistency between SYNLCover and the input datasets:

MeanC_a = (C_ab + C_ac + C_ad + C_ae + C_af) / 5

where C_a* denotes the overall consistency between dataset a and another dataset *; indices a-f are SYNLCover, GLCC, UMD, GLC2000, MODIS LC and GlobCover, respectively.
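The sketch below illustrates the pairwise overall consistency and its average MeanC_a; the random maps and the class count are placeholders for the co-registered 1 km layers.

```python
import numpy as np

def overall_consistency(map_a, map_b, n_classes):
    """Overall agreement between two co-registered categorical maps:
    the fraction of pixels assigned the same class (diagonal of the
    pixel-based confusion matrix divided by the total pixel count)."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for a, b in zip(map_a.ravel(), map_b.ravel()):
        cm[a, b] += 1
    return np.trace(cm) / cm.sum()

def mean_consistency(reference, others, n_classes):
    """MeanC_a: average of the pairwise overall consistencies between the
    reference map (e.g. SYNLCover) and each of the other input maps."""
    return np.mean([overall_consistency(reference, m, n_classes) for m in others])

syn = np.random.randint(0, 8, size=(100, 100))
inputs = [np.random.randint(0, 8, size=(100, 100)) for _ in range(5)]
print(mean_consistency(syn, inputs, n_classes=8))
```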
Accuracy assessment
In addition to the inter-comparison between these datasets, an independent reference dataset was collected through human interpretation of land cover types at randomly selected points to provide a comprehensive accuracy evaluation of the fused datasets.
A total of 3000 points were randomly collected in China (Figure 2). Because of the complexity of the land cover spatial distribution in China, the MODIS land cover dataset was aggregated to six classes (trees, grassland, cropland, water, urban and others) and then used as the stratification for collecting the points, to increase the efficiency with which the variety of land covers in China is represented.
The collected points were visually interpreted by experts in a Web-based tool (Figure 3) (Feng et al., 2012). To help the interpreters identify the land cover types, the tool presents maps and charts created from various sources, including: 1) Landsat images from four epochs (1970s, 1990, 2000 and 2005) provided by the Global Land Survey (GLS) (Gutman et al., 2008; Gutman, Huang, Chander, Noojipady, & Masek, 2013); 2) NDVI profiles derived from the 8-day composited MODIS Surface Reflectance products (MOD09A1) after cloud and shadow masking; 3) geo-tagged ground photos provided by Google Maps. Eighteen image analysts with experience in land cover participated in the interpretation task.
A confusion matrix, including overall accuracy (OA), user's accuracy (UA) and producer's accuracy (PA), is then calculated for SYNLCover and for each of the input global land cover datasets against the interpreted samples.
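For the point-based validation, OA, UA and PA can be derived from the confusion matrix as in the sketch below; the randomly generated labels stand in for the interpreted reference points and the map classes.

```python
import numpy as np

def accuracy_metrics(reference, predicted, n_classes):
    """Compute OA, per-class UA and PA from reference (interpreted) and
    predicted (map) labels at the validation points."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for r, p in zip(reference, predicted):
        cm[r, p] += 1                         # rows: reference, columns: map
    oa = np.trace(cm) / cm.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        ua = np.diag(cm) / cm.sum(axis=0)     # correct / total mapped per class
        pa = np.diag(cm) / cm.sum(axis=1)     # correct / total reference per class
    return oa, ua, pa

ref = np.random.randint(0, 6, size=3000)      # interpreted classes at 3000 points
pred = np.random.randint(0, 6, size=3000)     # classes read from a land cover map
oa, ua, pa = accuracy_metrics(ref, pred, n_classes=6)
print(round(oa, 3), np.round(ua, 2), np.round(pa, 2))
```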
SYNLCover fused dataset
The global SYNLCover life form (Figure 4(a)) and SYNLCover target classification scheme (Figure 4(b)) datasets were produced from the multi-source datasets using the proposed data fusion method. They provide the distribution of land cover types globally at 1 km resolution. Most of the input datasets represent the global land cover circa 2000; the GLCC and UMD land cover datasets were produced using satellite data from the early 1990s and GlobCover was produced for circa 2005, both less than a decade away from 2000. Considering their temporal closeness and the insensitivity to temporal changes at this coarse resolution (Fritz et al., 2011), the produced SYNLCover datasets are considered to delineate the global land cover in 2000. Besides the fused global land cover dataset, the affinity scores for each class are output, representing the probability of the class at each pixel. These layers make it possible for applications to explore the mixture of multiple classes within the extent of a pixel.
Consistency comparison between the fused dataset and input datasets
After comparing SYNLCover with the input land cover datasets (Figure 5), SYNLCover had the highest average overall consistency for both the life form (69.16%) and land cover (61.93%) classes, suggesting improved consistency of the fused dataset over the input land cover datasets. The life form datasets had higher consistency than the land cover datasets, likely because of the larger disagreements among the datasets in the more detailed classes of the land cover classification (Jung et al., 2006). Relatively lower consistency was found for MODIS LC, GLCC, UMD and GLC2000. GlobCover had the lowest average overall consistency for both the life form and land cover datasets.
Accuracy validation using interpreted points in China
Comparing the fused SYNLCover and the other land cover datasets with the human-interpreted dataset in China gives an overall accuracy of 71.1% for SYNLCover-Life Form, which is higher than 68.9% for MODIS LC and 65.2% for GLC2000, and significantly higher than the other three global land cover datasets (57.7% for GlobCover, 57.2% for GLCC and 48.6% for UMD). There were also clear differences in both the UA and PA of each life form between the new and the original land cover maps (Figure 6). The UA and PA were between 33.3% and 98.4% for the major land covers except "urban and built-up", which had lower UA and PA for the three datasets with 1-km native resolution (i.e. GLCC, UMD and GLC2000). Preliminary checking suggested that the low accuracy of the "urban and built-up" class was mainly due to the poor capacity of the kilometer-resolution datasets to delineate small and fragmented urban and built-up areas. Compared with the five original land cover maps, the UA of "Trees", "Cropland" and "Urban and built-up", as well as the PA of "Grassland" and "Water", are improved significantly in SYNLCover-Life Form. Figure 6 also presents the general pattern of class accuracy for SYNLCover-Life Form and the five input land cover maps: "Trees", "Grassland", "Cropland" and "Others" are described with higher accuracy, whereas "Water" and "Urban and built-up" are described with lower accuracy. In addition to the accuracy comparisons between individual maps, we also compare the accuracy of SYNLCover with the averaged accuracy of the five input land cover maps (Figure 7). The results show that the OA, UA and PA of the six life forms of SYNLCover-Life Form, especially for "Trees" and "Grassland", are higher than the respective averages for the corresponding classes of the five input maps. SYNLCover thus synthesizes information about the basic appearance of vegetation (forest, shrubland, cropland, herbaceous vegetation), the leaf types (broadleaf and needleleaf), the leaf longevities (evergreen and deciduous), and also the other types from the original five land cover maps and the VCF, Cropland Probability and CFTC datasets.
Conclusions
The existing global land cover datasets provide great value to the land cover user communities, but their low accuracy and the inconsistency among them have hampered applications of these datasets, especially in land surface process modeling research. We proposed an integration method to produce a global land cover dataset with improved accuracy by synthesizing multi-source global land cover data products using a fuzzy logic method. A global 1 km land cover dataset, SYNLCover, was produced with this method using two classification systems, to address the need for land cover data in terms of the delineation of both life forms and land covers. Although the two classification systems overlap, the life form classes are more generalized than the land cover classes, which further subdivide the tree class into forests with different leaf attributes.
The fused SYNLCover was produced by integrating eight global datasets, including five global land cover datasets and three datasets representing quantitative attributes of specific land cover types. To our knowledge, this effort has been the most comprehensive integration of global land cover datasets. The quality of the fused land cover dataset was evaluated by inter-comparison with the input land cover datasets and against a reference dataset produced by human interpretation of 3000 points collected in China. The validation is limited to China, but it is a rigorous assessment of the quality of the datasets because China is considered one of the most difficult areas for land cover mapping due to its vast geographical extent and highly diverse and fragmented geography (Ran et al., 2010). The validation can therefore also be considered representative of the quality of these datasets over a larger extent. Both the inter-comparison and the accuracy assessment suggested that the fused land cover dataset had higher accuracy and consistency than the input global land cover datasets. The life form dataset had higher consistency than the land cover classes, mainly because its classes are more general, which reduces the disagreements between the subclasses of forests in the land cover classification system. Higher consistencies were found for most of the classes, except for cropland, wetland and urban, which are more difficult to delineate at 1 km resolution; capturing the fragmentation of these classes will likely require land cover mapping at finer spatial scales. Although eight datasets were used to produce the SYNLCover dataset, more global land cover or related datasets have become available recently (Tuanmu & Jetz, 2014) or will be produced in the future, and the presented method could be applied to integrate these datasets to further improve the quality of global land cover datasets.
Data availability statement
The data that support the findings of this study are available from the corresponding author upon request.
Disclosure statement
No potential conflict of interest was reported by the authors. | 5,873 | 2019-07-03T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
RuangGuru community as a reflection of future learning in time of COVID-19
The effort to educate the people of a nation is a noble educational goal, in which intelligence is not only intellectual but carries a deeper meaning. The COVID-19 period has genuinely reshaped the meaning of education itself. RuangGuru, an education-based application, provides a solution for Indonesians in the form of free online schools. The researchers examine the shift in meaning for a community that has used this application during COVID-19. The theory of Social Action Media Studies was used to observe this change in meaning. We also used Stuart Hall's reception analysis method to classify the interview results into three categories: pro, contra, or neutral. The results indicate a shift in meaning, arising from the interpretation process of the RuangGuru community and influenced by local culture, in how the transition and the future of education are perceived.
INTRODUCTION
The purpose of education under the Indonesian 1945 Constitution is to educate the people of the nation. Intelligence here is not only intellectual but carries a more profound meaning (Aziizu, 2015). However, Indonesia's national education is far from ideal due to unresolved problems which limit the achievements that should have been realised (Alkhowailed et al., 2020). Some factors that influence education are technology in communication, transportation, the escalation of interstate markets, and the development of science (Suyitno, 2012). Also, the role of government in finding solutions is significant in solving problems in communities, just as the government did in conflict resolution in other sectors (Fernando et al., 2019).
In early 2020, the world was shaken by COVID-19, which spread very quickly. It started in Wuhan, China, in early December and spread throughout the world, including Indonesia. This outbreak forced countries to halt their activities, especially those involving groups or crowds. One of the activities greatly affected by this pandemic is education. Schools had to be temporarily closed, and the government requested that learning be done online (Kasih, 2020). The same happened across Southeast Asia, where most schools stopped their activities (Editor, 2020). Amid these difficulties, RuangGuru, a digital-based education platform, provided full support for Indonesia by offering free online courses during the COVID-19 pandemic. Initially, online education in Indonesia was seen as an alternative in learning activities. However, during the COVID-19 pandemic, online-based learning became the main source of education for students to continue teaching and learning activities, primarily while the social distancing and physical distancing policies were implemented by the government (Simpson et al., 2020). Therefore, it can be said that humans are required to be closer to technology (Marta & Christanto, 2015). At the same time, this has become a challenge for online education practitioners to realise online education properly and correctly (Angdhiri, 2020). The researchers therefore see that there are cultural changes or learning patterns that may persist in the future. This research is essential for observing changes in educational or cultural patterns in the future as a result of the COVID-19 pandemic, in which digital-based learning is no longer considered an alternative but an essential component of the learning system (Brevik et al., 2019). The long-term goal of this research is that the findings can be used as a reference for other researchers or for preparing a new era in education, where the delivery of messages from teacher to student is no longer only conventional but includes collaboration between conventional and digital systems, reflecting future learning activities shaped by the COVID-19 pandemic (Iivari et al., 2020). Previously, online learning with RuangGuru was used as a supplement for students to learn digitally. It now reflects the current situation, in which students study online not just as a supplement but as core material (Shoumi, 2019).
Therefore, the researchers are interested in analysing and observing the assimilation between media discourse, which significantly influences the media's framing (Sya & Marta, 2019), and public discourse and culture from the perspectives of RuangGuru users in terms of online learning during COVID-19. The community is active in this process, so the analysis used is Stuart Hall's encoding/decoding reception analysis, within a qualitative research approach. The theory used by the researchers is Social Action Media Studies, because it is important to see the opinions and perspectives of communities that have experienced online learning using RuangGuru.
METHODOLOGY
In this study, the researchers used a reception analysis method, the encoding/decoding model from Stuart Hall, which postulates that reception analysis observes the assimilation of media discourse with public discourse and culture, a process in which the audience actively carries out the production of meaning (Dwita & Sommaliagustina, 2018). Jiwandono (2015:214) strengthens this view: according to him, the audience plays an active role in the process of meaning production, and reception analysis is a form of reader-response study that treats the audience as an active audience. Furthermore, Jiwandono explains that the audience is not only a reader or an object but also an active player in constructing meaning, in which socio-cultural conditions influence meaning. Thus, the audience's meaning cannot be predicted, so cultural commodification can create different cultural shifts (Fernando & Marta, 2018). In the Dominant-Hegemonic Position, the audience receives the same message as that reported by the media. The Negotiated Position shows a contradiction, where the public holds a broad meaning across many text codes that sometimes contradict or change meaning according to how the audience's experience and interests influence them. The Oppositional Position is when the public develops meaning entirely outside the existing text code; it occurs when the public is in a contradictory and opposing social situation and provides a different alternative to the dominant text code.
Figure 1. Encoding and Decoding by Stuart Hall
Source: Desliana Dwita & Desi Sommaliagustina (2018).
This study's primary data collection method was participant observation, while interviews were used as the secondary method. Participant observation is a semi-ethnographic data collection method in which the researchers directly observe the interaction of the subjects, namely a community that uses RuangGuru. In this study, the researchers used in-depth interview techniques as the data collection technique to capture reactions of acceptance or understanding of the media text. The researchers hoped to get spontaneous, open, and honest answers from the informants. We then analysed the data through the narratives from the in-depth interviews. The informants were selected based on the study's objectives and background, as the researchers felt it important to hear directly from users who immediately felt the change in teaching and learning activities during COVID-19. Other sources used to support the analysis were the literature reviews.
The researchers used the Social Action Media Studies theory to see the perspective of the community. This theory was first coined by James Anderson and Gerard T. Schoening and consists of 6 main principles, namely: (1) audiences produce meaning, not from the messages but through interpretation; (2) the meaning produced from the media is not determined passively but actively by audiences; (3) meaning is formed not individually but communally, and meaning shifts frequently where the media are interpreted in more than one way; (4) the media will end in the interaction within the group, which is influenced by social action; (5) the researchers are bound to enter the community temporarily in seeking meaning; (6) the researchers have the responsibility to report the consequences of the construction of meaning constructed by the community.
From those principles, this theory produces three outputs classified into content, interpretation, and social action. Content is the general shared meaning of the material viewed. Interpretation is the meaning as interpreted in a common way. Social action raises how the content will be absorbed or interpreted (Schoening & Anderson, 1995). To validate and analyse the data, the researchers interviewed a group of people who use the RuangGuru application. The results of the interviews are considered the interpretations of the community.
Stakeholders are all parties, internal and external, who can influence or be influenced by the company, either directly or indirectly. The existence of a company is greatly influenced by the support provided by its stakeholders. Stakeholders are divided into primary and secondary stakeholders. A primary stakeholder is a party without whose ongoing participation the organisation cannot survive. Meanwhile, secondary stakeholders are parties that influence or are influenced by the company but are not involved in transactions with the company and are not essential to the continuity of corporate life. The success of a company's business is determined by management that succeeds in building relationships between the company and its stakeholders. Stakeholders do not only consist of investors and creditors (shareholders), but also suppliers, customers, government, local communities, employees, regulatory bodies and trade associations, including the environment as part of social life. Financial and non-financial disclosures in the company's annual report can be regarded as a means of communication between management and stakeholders (Lindawati & Puspita, 2015).
A stakeholder is a community, group, or individual with a relationship to and interest in an organisation or company. A community, group, or individual can be said to be a stakeholder if they have characteristics such as power over and interest in the organisation or company. Alternatively, stakeholders are people who have an interest or stake in a company. This can involve financial or other interests: if a person is affected by what happens to the company, be it a negative or a positive impact, that person can be said to be a stakeholder (Widodo et al., 2018).
RESULTS AND DISCUSSION
RuangGuru suddenly became a national spotlight during the COVID-19 pandemic in March 2020 because RuangGuru provides free online classes to every Indonesian student on their mobile phone or laptop (Lanny Latifah, 2020). Online learning was an alternative model in Indonesia. However, in the time of the COVID-19 pandemic, online learning became widely used because onsite schooling was temporarily eliminated. The pandemic, which started in March and continued up to the time the researchers wrote this journal in June 2020, requires schools to run online (Saputra, 2020). The researchers are interested in the changes that happened during those three months, which forced adjustments to teaching and learning techniques. The concepts of learning are currently being adjusted to the conditions of the COVID-19 pandemic worldwide (Muhaimin et al., 2020).
The decoders of the COVID-19 pandemic are teachers and students, who have different perspectives in responding to the adjustment of teaching and learning activities during the pandemic. Shaped by the local mindset and local culture, the writers expect differing opinions, because Indonesia is a pluralistic nation with many different ethnicities and cultures. Specifically, the decoders for the adjustment of current teaching and learning activities are the teachers who work at RuangGuru and their students. The researchers conducted interviews with seven informants, namely two teachers who work at RuangGuru, four students who use RuangGuru, and an online learning expert. The researchers associated the interviews with the three positions from Stuart Hall: the dominant-hegemonic position, the negotiated position, and the oppositional position. From the interviews with the five informants and one key informant, the researchers classified the responses into these three outputs. The informants are listed in Table 1. According to YI, a teacher at RuangGuru, in relation to the COVID-19 pandemic that affects teaching and learning activities in schools, RuangGuru is an educational platform that helps the community by providing free online schools. This helps students carry out their learning activities at home, especially given the Indonesian education system, which is not yet evenly distributed, and the condition of Indonesia, which was not ready to undergo an online education system. In this situation, RuangGuru is very helpful in the learning process.
In addition, based on YI's explanation, RuangGuru was actually well prepared for use even before COVID-19, because starting at the end of 2019, RuangGuru focused on developing technology-based onsite education, outlined in a product called BrainAcademy. When video conferencing was not yet commonly used in 2019, RuangGuru was already developing a digital Bootcamp to accommodate video conferencing for students using the RuangGuru application.
For YI, the transition is still felt. YI sees that when teaching face-to-face, students absorb knowledge better, or in other words, find it easier to understand what is being taught. On the other hand, in online classes, there are still several factors that make students feel tired during teaching and learning activities, for example looking at computers for 4-5 hours. Using a computer for 4-5 hours strains the eyes' refraction, which causes the eyes to become tired (Putri, 2018).
On the other hand, online systems have the advantage of flexibility. Knowledge spreads quickly, so learning is more efficient. Therefore, if we talk about future teaching techniques when the COVID-19 pandemic ends, there is a possibility that learning techniques will be 50% onsite and 50% online. Indeed, the effectiveness and efficiency of a mix of onsite and online classrooms may result in better curricula or learning techniques in the future.
The second informant is NY, a teacher who also teaches at a school in Medan. The researchers want to see how teachers who teach online outside Metropolitan Jakarta respond. We hope to find different obstacles or experiences compared to the big cities.
According to NY, there were several difficulties in carrying out online learning during the COVID-19 pandemic. Not all of the students she taught online had good network connectivity. Furthermore, NY felt that she could not fully see what the students were doing, resulting in difficulties in evaluation.
Compared to onsite learning, online learning requires a more in-depth understanding. NY also feels that students need more guidance because they do not absorb knowledge as well. In a real classroom, NY felt that she could convey more. However, on the other hand, NY cannot deny that in the pandemic era, the method offered by RuangGuru is the most appropriate option, given that onsite learning is prohibited by the government.
In the future, NY feels that learning techniques may change. However, the changes may not be significant, depending on whether the respective region supports hybrid learning. It cannot be denied that online learning activities require adequate network connection facilities, criteria which certain areas may not meet.
The researchers interviewed four students at different education levels, ranging from elementary to secondary and tertiary education, who are users of the RuangGuru application and also take onsite classes, in order to see the overall situation. AA, MU, MN, and BN are the RuangGuru users who share their opinions and experiences.
The third informant is BN, a primary school student who likes the RuangGuru application because of the stages in the application that stimulate enthusiasm for learning. However, going to school face-to-face is more fun because BN can play together with friends at school. From the interview with BN, it can be concluded that meeting friends is more fun than learning and doing activities virtually.
The fourth informant is MU, a lower secondary school student who has experience of online learning in RuangGuru. MU feels that online classes cannot replace offline classes; online learning cannot provide a school-like experience, for example hanging out with friends, playing sports, and interacting with other students. Still, MU realised that online classes were more effective during the COVID-19 period. Moreover, according to MU, RuangGuru simplifies the material so that it can be understood easily. Apart from that, there are also exercises and live teaching features where MU can ask questions about the material being explained. MU hopes that education in the future will return to normal, and that students will be able to play and interact more closely than online.
The fifth informant is named AA, who is in grade 11 of upper secondary school. AA feels that the virtual classes provided by RuangGuru are very interesting because the templates they provide are visual materials that are easy to understand. During the interview, AA admitted that several subjects are more effective when done onsite than virtual. Furthermore, AA believes that face-to-face and online learning have their respective advantages and disadvantages. In the future, face-to-face and online classes may support each other, creating a better education system.
The sixth informant is MN, an undergraduate student at a leading state university in Yogyakarta. MN is also a user of the RuangGuru application. From MN's point of view, RuangGuru provides a different learning experience during the COVID-19 pandemic. According to MN, online learning techniques are more attractive and comfortable to apply because they are more flexible and efficient. In terms of effectiveness, according to MN, online and onsite learning are the same because the knowledge gained can be understood comprehensively, so this is not a problem. That is what makes MN feel that online techniques have merit. Even so, MN expects that in the future more classes will be conducted onsite than online, although MN personally prefers online learning techniques such as those in the RuangGuru application used during the pandemic. MN is still hesitant to give a firm answer, but in general expects the education system to shift, though not significantly.
The last informant comes from a university that implements fully online learning. SH is a senior lecturer at Universitas Terbuka. She has taught for several years at several universities and has experienced both fully face-to-face and fully online teaching methods. Apart from that, SH also works as a news anchor on a well-known television program. SH, who is familiarly called Bu Dian, therefore has much experience, especially regarding online learning.
The researchers asked about the difference between online and onsite learning. Bu Dian believes that in online learning there are synchronous and asynchronous meetings, which result in different feedback between online and onsite learning. Bu Dian also sees a lack of drama or emotion in the online learning system.
In her dissertation, which discusses social contact, she explains that a remote system eliminates what happens around the other person. The feeling of empathy is not the same because we cannot feel the atmosphere or what the other person is feeling. Therefore, online learning users must find ways to present themselves as in the onsite learning process. On the other hand, online activities are part of Bu Dian's daily life as a senior lecturer at Universitas Terbuka. According to her, the online system has the advantage of being faster and more flexible in implementing teaching and learning activities. In addition, knowledge can be provided more completely and broadly to a more massive audience. The process will run well, provided that the teacher prepares the material well, combined with a high level of literacy and the good intention not to be manipulated by technology. In line with SH, Christian (2019:142) states that information and communication technology bring change to society.
She further said that there is no difference between distance learning and onsite learning in terms of effectiveness. Three important elements, namely knowledge, affection, and motor skills, can still be maximised using either online or onsite learning methods. Regarding the affective and motoric elements, the other informants generally felt that online learning could not address these two elements optimally. Bu Dian, on the other hand, argues that students and instructors can do so optimally even in online learning.
For example, affection can be obtained online by increasing solidarity in cyberspace. Changes in attitudes will increase affection between students and teachers who carry out learning activities online. From a motor point of view, online learning users can carry out activities that involve movements, such as game activities or gestures from students and instructors.
Furthermore, according to SH, online learning can create a more independent and responsible person. According to her, self-discipline will be formed in online learning because students have to set a schedule for when they have to study and when to stop to start other activities. SH added that during the COVID-19 pandemic, online teaching and learning activities certainly had a significant impact in providing access to students and teachers to carry out their activities amid limitations or restrictions from the government to carry out face-to-face teaching and learning activities. Likewise, a new culture seems to have formed; for example, people are more likely to wash their hands and wear masks. Another unseen new culture will emerge, especially in education, which might be very interesting to study.
SH also believes that it is our duty as a society, together with the government, to create a good, orderly and equitable learning culture. Indeed, we cannot ignore that the current online system is not running well due to various obstacles in several places, especially in rural areas with limited network connectivity. Besides, generational differences can hinder the equal distribution of online learning activities. The pre-millennial generation is literate in using the online system, but the baby boomer generation may have difficulty adapting to the online learning techniques taking place during the COVID-19 era. Continuing to the last question regarding the pattern of education going forward: will there be a change in education patterns, or will everything return to normal after the COVID-19 pandemic has passed? Bu Dian said that anything could happen, but with several caveats. One of them is that an online learning technique is most appropriate at the university or college level. Bu Dian views that online learning can make students independent, mature, multi-talented and creative, and can foster an entrepreneurial spirit.
For children, a mix of roughly 50%-50% face-to-face and online learning is required because, at the school stage, students still need figures or role models; moreover, the parents' role is also much needed in shaping growth and development. So, in this phase, Bu Dian feels the most suitable approach is the hybrid technique, and in the future there will be a pattern of education as described above. In this way, we will find socially responsible, independent, multi-talented, entrepreneurial, and creative students.
In this millennial era, it is certainly not difficult to realise a shift in learning patterns from face-to-face to online, because the millennial generation holds passion values, utilises technology for help, and is communally oriented. With the values they have and our nation's diverse cultures, it is very easy to make adjustments as long as teachers can present conflict-free learning materials and prioritise their rights and obligations as social and respectful human beings. Marta (2018:25) explains that respect gets the highest place as a primary obligation: to treat someone as one would treat oneself. In addition, the role of students is important to create better education and value for themselves and others (Laksono, 2017).
The researchers now present the results of the interviews that were conducted. Table 2 shows how the informants reacted to and interpreted the shift in the education system during the COVID-19 pandemic. The researchers found many findings and can see how education will be carried out in the future. The researchers also provide a model to trace people's mindsets regarding the education system that is shifting in the current situation. People who use the RuangGuru application have different interpretations and opinions. They are divided into three categories, namely the pro, contra, and neutral sides. The difference between these three categories is whether they feel a change or shift in the current situation. Besides, there is also speculation about, and visions of, how education will be shaped after the COVID-19 pandemic. The speculation on future learning can be seen in the model provided by the researchers below. It can be seen from the coding of the interviews that all the interviewees interpret meaning in their own ways, apart from the message that the media wants to send (Table 3). The researchers see from the interviewees' statements that they actively produce meaning regarding online learning in the time of COVID-19, and the meanings they produce differ from one another (Table 4). All the informants shifted the meaning in different ways. The researchers can see from the interviews what the informants think about the shift of education during COVID-19 and how they act or think differently (Table 5). Through the different opinions and feelings, all the interaction eventually ends in the community, and the system keeps running on the policy that the local government has made (Table 6). The researchers entered the community to understand what the community feels and experiences with the online system of RuangGuru (Table 7); at the same time, the researchers also experience an online system as a result of the COVID-19 pandemic.
Six Premises of Social Action Media Studies
The researchers share the consequences of the interpretations produced by the informants, informed by the experience of the online learning expert, SH. From the informants' responses, the researchers obtain the outputs for content, interpretation, and social action, as in Table 8: Content - the content is not changed by the interpretations of the community; Interpretation - the interpretation changed the pattern of how they act towards the change in the education system; Social Action - there are three types of social action: pro, neutral, and contra (Source: compilation of the six informants' interpretations related to the six premises of Social Action Media Studies). From the results of the interviews related to the 6 premises of the Social Action Media Studies theory, the researchers saw that during the COVID-19 pandemic, several informants who used the RuangGuru application produced interpreted meanings, not taken directly from the messages, so that the public actively carried out the production of meaning. With the media interpreted in more than one way, the researchers see a shift in meaning that occurs communally rather than individually and is influenced by today's social actions. Therefore, the resulting output is content divided by age, namely the pros and cons, while the interpretations made by the informants are interpreted in similar ways. In the end, the social action happening now raises the question of what kind of educational pattern will be applied in the future. It can be said that the shift in meaning occurs consciously or unconsciously because the audience has shifted and made new meanings out of onsite and online learning. In following this shift in meaning, the researchers found that individuals or groups with different backgrounds and ages differ in how they respond to changes in current teaching patterns. This is reinforced by what SH, as a distance learning expert, said: online learning is most appropriate once students have entered university, where they will be increasingly trained for independence by an online system that teaches them to learn independently. On the other hand, from elementary to high school, there is still assistance from teachers and parents; students still need someone to be a guide or role model. This applies to BN, the primary school student who prefers to go to school onsite rather than virtually. In other words, the informants interviewed spontaneously echoed Ibu Dian's explanation of the future of online or hybrid education in Indonesia. In the future, online or hybrid education can be more advanced if it is supported by teaching resources with high literacy in using technology well and by using online education-based applications at the right age. In this case, RuangGuru can be a partner that supports the advancement of technology-based education, because the application presents learning in levels as stimulation so that students are more enthusiastic about learning. Even so, BN still feels that meeting friends in person is more fun than just learning and doing things virtually.
The model in Figure 2 shows the shift in the education system from onsite to online during the COVID-19 pandemic. The flow can be seen in the model, where the end of the journey is the future development of education, laid out in a form that may become the future of learning in this country.
Through the Figure 2 model, the researchers can see future educational developments. The researchers predict that online education will no longer be considered an optional method but a component that will change the way of learning in the future (Ivanov et al., 2020). In other words, the COVID-19 outbreak provides an overview of how education will be carried out in the future. This can be interpreted as a view in which online learning is considered part of the core system of future education (Carpenter, 2019), although there are still doubts. Nonetheless, we can see that online learning is the primary system applied during the COVID-19 pandemic.
Moreover, it appears that there are interest groups who take advantage of this momentum to gain benefits during the COVID-19 period. RuangGuru is able to draw stakeholders towards this momentum. In line with COVID-19, technology has power, and RuangGuru sees opportunities to create communities that benefit the company and, at the same time, government and society. Source: Developed by the researchers (2020).
CONCLUSION
Of the six informants interviewed, the researchers found three different outputs based on Stuart Hall's encoding and decoding method. Some informants are in the dominant-hegemonic position: the informant MN accepts the media's message and embraces it; in this case, MN welcomes an online learning system. Second, in the negotiated position, are the informants Yosep and AA. They generally accept the dominant message, but there is some resistance in certain cases; as AA pointed out, some subjects are not optimal when run online. Third, in the oppositional position, the informant NY feels that the transfer of learning techniques from face-to-face to online makes it difficult for her to adapt, with all the obstacles around her, such as connectivity in certain areas. NY finds it difficult to feel what students feel when they learn, and whether students pay attention to or understand the material; this is not available online, so NY feels face-to-face is the right way to go. Similarly, BN, the elementary student, felt that face-to-face schooling was more fun because he could play with friends, even though BN likes RuangGuru.
ACKNOWLEDGEMENT
We are grateful that we managed to complete this journal on time. This journal could not have been completed without the effort and co-operation of our group members. Moreover, we would like to thank NewIndianExpress.com and JakartaGlobe.id for the news which we read. This journal gives encouragement for this country to perform maximally, even in the time of the COVID-19 pandemic. We also sincerely thank the Ministry of Research and Technology/National Agency for Research and Innovation of the Republic of Indonesia, grant number 054/LL3/AM/2020, March 23, 2020, which fully supported us through the grant program so that this research paper could be completed.
"Education",
"Sociology",
"Computer Science"
] |
Quantifying synergy and redundancy between networks
Summary Understanding how different networks relate to each other is key for understanding complex systems. We introduce an intuitive yet powerful framework to disentangle different ways in which networks can be (dis)similar and complementary to each other. We decompose the shortest paths between nodes as uniquely contributed by one source network, or redundantly by either, or synergistically by both together. Our approach considers the networks’ full topology, providing insights at multiple levels of resolution: from global statistics to individual paths. Our framework is widely applicable across scientific domains, from public transport to brain networks. In humans and 124 other species, we demonstrate the prevalence of unique contributions by long-range white-matter fibers in structural brain networks. Across species, efficient communication also relies on significantly greater synergy between long-range and short-range fibers than expected by chance. Our framework could find applications for designing network systems or evaluating existing ones.
Supplemental Experimental Procedures
Alternative possible formulation of network redundancy for shortest paths
Throughout the paper, we have seen that the PND framework applied to global efficiency can quantify redundant, unique or synergistic contributions from path lengths alone. As natural as this seems, there are certain cases where this choice might clash with our intuitions about what each of these contributions should be. As an example, consider the shortest path between nodes 1 and 5 in the following two scenarios: 1. In network A1 the shortest path is 1 − 2 − 3 − 5. In network B1 the shortest path is 1 − 2 − 3 − 4 − 5.
In both cases there is a unique contribution from A1 with a path of length 4, so our current formulation considers these two cases as equivalent. Intuitively, however, some might argue that there is a sense in which A2 is "more unique" (with respect to B2) than A1, because in A2 the existence of edges 2 − 6 and 6 − 5 is unique and is what enables the shortest path, while in A1 only the existence of 3 − 5 is unique.
As another example where intuitions may differ, consider the shortest path between nodes 1 and 4 in the following two scenarios: and network B1 has edges 1 − 2 − 3, but not 3 − 4. Suppose that in both networks there are other paths from 1 to 4 of length 5.
In this case, some might argue that A2 and B2 are "more synergistic" than A1 and B1 are, because in the second scenario, 2 − 3 is common to both networks. But this is not reflected in our approach based on the efficiency gain provided by A ∪ B, because in both cases l_{A∪B} = 4 and l_A = l_B = 5.
Note, however, that these examples do not disagree that there is a unique path from node 1 to node 5 in network A (for the first example), nor that there is synergy between networks A and B in the second example. The disagreement is not about the presence of uniqueness or synergy, but only about how to quantify their extent. In other words, in terms of the three-question summary of our approach ("(1) Is there synergy? (2) If not, is there uniqueness? (3) If there is synergy or uniqueness, how much of it is there?"), these examples agree on the answers to questions (1) and (2), and the divergence of intuitions applies only to question (3).
A natural alternative to the operationalisation of path redundancy that may address this concern would be one whereby shortest paths in networks A and B are redundant if and only if they involve traversing exactly the same edges (note that the two criteria coincide, for every shortest path, when network B is equal to network A). This "redundancy as identity" means that the same path would literally be available in both networks. However, we can easily see that this alternative definition of path redundancy suffers from a number of drawbacks. Mathematically, this option would be ill-defined in cases where multiple shortest paths of equal length exist but do not coincide, with no natural way of "breaking ties". From a practical standpoint, identifying the length of the shortest paths between two nodes in networks A and B, and checking whether they are the same, is much simpler than (and a subset of) enumerating all possible shortest paths and determining whether any of them match. Finally, at a more conceptual level, in many real-world settings (e.g., travellers looking for synergies between two transport networks in going between two specific locations) the ease of reaching a destination node (in terms of time, money, or number of steps that can be saved) seems, for most purposes, to be of far greater importance than the specific identity of the intermediate steps or their contribution to these savings.
For these reasons, while we agree that there can certainly be cases where the identity and contribution of the specific sub-paths are of interest, we also believe that PND as presented is the more appealing and widely applicable alternative for the question of path efficiency (which, we emphasise, is only one possible application of PND). Nevertheless, we do not discard future variations of PND that take these considerations into account.
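To make the path-length comparison concrete, the following sketch (an illustration using networkx, not the authors' code) classifies the contribution for a single node pair by comparing shortest-path lengths in A, B and their union, following the three-question logic described above.

```python
import networkx as nx

def classify_pair(G_A, G_B, source, target):
    """Classify the contribution to the shortest path between two nodes:
    synergistic if the union of the networks yields a strictly shorter path
    than either network alone; unique if one network provides a strictly
    shorter path than the other; redundant if both are equally short."""
    def spl(G):
        try:
            return nx.shortest_path_length(G, source, target)
        except nx.NetworkXNoPath:
            return float("inf")

    G_union = nx.compose(G_A, G_B)             # network containing the edges of A and B
    l_a, l_b, l_union = spl(G_A), spl(G_B), spl(G_union)

    if l_union < min(l_a, l_b):
        return "synergistic", l_union
    if l_a < l_b:
        return "unique to A", l_a
    if l_b < l_a:
        return "unique to B", l_b
    return "redundant", l_a

# First scenario from the text: A has path 1-2-3-5, B has path 1-2-3-4-5
A = nx.Graph([(1, 2), (2, 3), (3, 5)])
B = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 5)])
print(classify_pair(A, B, 1, 5))   # ('unique to A', 3) -- the 4-node path 1-2-3-5
```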
Supplemental proofs
This section contains supporting proofs for the properties of the efficiency-based Partial Network Decomposition (PND) presented in the Materials and Methods section of the main text. Our main goal is to prove that the proposed network redundancy function (defined in the main text) provides a non-negative PND of the network's average efficiency, such that F^α_∂ ≥ 0 for all α. We do so by closely following the proofs for PID in Appendix D of Williams and Beer¹, bearing in mind the differences between our network redundancy function F^α_∩ and their information redundancy function I_min(S; α). Note that some proofs in Ref. 1 do not depend on the properties of I_min(S; α) and rely only on the structure of the antichain lattice, which is shared between PID and PND. For further detail, please see the insightful discussion surrounding the non-negativity of certain PID measures in Appendix C of Ref. 2.
We begin by formally defining the efficiency f of a pair of distinct nodes ω as the inverse of the minimum length over all paths between them in the set of edges E, where P(ω; E) is the set of all paths between v_1 and v_2 in E. For convenience, we also recall that the redundancy function presented in the main text is defined as a minimum over the sources' efficiencies. In the following, we may omit the dependence on ω for simplicity of notation. The following theorems (up to Theorem 6) hold for all ω.
Proof. Follows from the fact that path lengths are nonzero positive integers. Following standard convention, the shortest path length between disconnected nodes is taken to be positive infinity.
Proof. This proof depends only on the structure of the redundancy lattice and the Moebius inversion defining f^α_∂ in terms of f^α_∩, both of which are shared between Ref. 1 and this work. The proof is exactly as in the original work, replacing I_min(S; α) by f^α_∩ and Π_R(S; α) by f^α_∂.
Theorem 4. f^α_∂ can be written in closed form.
Proof. First we note that, for the redundancy lattice, the following statement holds, where X is the set of minimal elements of a poset X (see Refs. 1, 3 for a proof). Combining Eqs. (S2) and (S3), together with Lemma 1 and Eq. (S5), and then applying the maximum-minimums identity¹ (or, equivalently, applying Eq. (S2) again), yields the closed form.
Theorem 5. f^α_∂ ≥ 0.
Proof. Follows directly from Theorems 2 and 4.
Theorem 6. F^α_∂ is non-negative.
Proof. Follows directly from Theorem 5 and the fact that F^α_∂ is a weighted sum of non-negative quantities with non-negative weights.
As noted by Chicharro and Panzeri², the crucial aspect of this proof (in addition to f being non-negative and monotonically increasing) is that f^α_∩ is defined as a minimum of a set of values, with each value associated with a set of sources. Although the overall PND framework is applicable to other utility functions f and other redundancy functions f^α_∩, these may not lead to a non-negative decomposition.
Lemma 1. f(ω; E_a) increases monotonically under subset inclusion.
Proof. Consider a, b ⊆ {1, ..., N}, with a ⊂ b. Recall that, by definition, E_a = ∪_{i=1}^{k} E_{n_i} for any a = {n_1, ..., n_k} ⊆ {1, ..., N}. If a ⊂ b, then E_a ⊆ E_b. Since the set of paths P(ω; E) grows with the inclusion of more edges in E, the minimum of any function on P must decrease with said inclusion, and therefore its inverse must increase. Thus, if a ⊂ b then f(ω; E_a) ≤ f(ω; E_b).
Theorem 2. f^α_∩ increases monotonically in the redundancy lattice.
Proof. This proof depends only on the structure of the redundancy lattice and Lemma 1, both of which are shared between Ref. 1 and this work. The proof is exactly as in the original work, replacing I_min(S; α) by f^α_∩ and I(S = s; a) by f(ω; E_a).
Theorem 3. f^α_∂ can be written in closed form.
Figure S1. Consistent effects of progressive rewiring, for networks of different density. Synergistic, unique, and redundant contributions to global efficiency, as well as the network small-world propensity, are shown as a function of the percentage of rewired edges, for 5%, 10%, and 20% density.
Figure S2. Replication with alternative reconstruction of the human structural connectome. Left: connectomes reconstructed from an independent dataset of Diffusion Spectrum Imaging data (N=70 subjects), using the Lausanne anatomical parcellation with 234 cortical and subcortical regions. Right: connectome reconstructed from an alternative sub-parcellation of the Schaefer functional atlas with 1000 cortical nodes (N=100 subjects). Y-axis: proportion of shortest paths accounted for by each PID term. Box-plots indicate the median and inter-quartile range of the distribution. Each data-point is one subject.
Figure S3. Comparison against degree-preserving randomised networks. Grey distributions indicate the corresponding values for degree-preserving randomised networks. ***: p < 0.001 against null distribution of values obtained from rewired null networks. Y-axis: proportion of shortest paths accounted for by each PND term. Box-plots indicate the median and inter-quartile range of the distribution. Each data-point is one subject (N=100).
Figure S4. Prevalence of synergistic, unique, and redundant efficiency contributions, comparing mammalian structural connectomes against degree-preserving randomised nulls. Grey distributions indicate the corresponding values for degree-preserving randomised networks. ***: p < 0.001 against null distribution of values obtained from rewired null networks. Y-axis: proportion of shortest paths accounted for by each PND term. Box-plots indicate the median and inter-quartile range of the distribution. Each data-point is one animal (N=220, colored dots), or its corresponding null network (grey dots).
TABLE S1. Results for the statistical comparison between human structural connectivity networks, and corresponding geometry-preserving rewired nulls.
TABLE S3. Results for the statistical comparison between human functional connectivity networks, and corresponding geometry-preserving rewired nulls.
TABLE S4. Results for the statistical comparison between human structural connectivity networks, and corresponding functional connectivity networks, thresholded to have the same network density.
TABLE S5. Results for the statistical comparison between mammalian structural connectivity networks, and corresponding geometry-preserving rewired nulls. | 2,660 | 2024-03-01T00:00:00.000 | [
"Biology",
"Physics"
] |
Genome-Wide Analysis of Sheep Artificially or Naturally Infected with Gastrointestinal Nematodes
The anthelmintic resistance of gastrointestinal nematodes (GINs) poses a significant threat to sheep worldwide, but genomic selection can serve as an alternative to the use of chemical treatment as a solution for parasitic infection. The objective of this study is to conduct genome-wide association studies (GWASs) to identify single nucleotide polymorphisms (SNPs) in Rambouillet (RA) and Dorper × White Dorper (DWD) lambs associated with the biological response to a GIN infection. All lambs were genotyped with a medium-density genomic panel with 40,598 markers used for analysis. Separate GWASs were conducted using fecal egg counts (FECs) from lambs (<1 year of age) that acquired their artificial infections via an oral inoculation of 10,000 Haemonchus contortus larvae (n = 145) or naturally while grazing on pasture (n = 184). A GWAS was also performed for packed cell volume (PCV) in artificially GIN-challenged lambs. A total of 26 SNPs exceeded significance and 21 SNPs were in or within 20 kb of genes such as SCUBE1, GALNT6, IGF1R, CAPZB and PTK2B. The ontology analysis of candidate genes signifies the importance of immune cell development, mucin production and cellular signaling for coagulation and wound healing following epithelial damage in the abomasal gastric pits via H. contortus during GIN infection in lambs. These results add to a growing body of the literature that promotes the use of genomic selection for increased sheep resistance to GINs.
Introduction
Gastrointestinal nematodes (GINs) are a critical threat to global sheep production, particularly from the highly pathogenic H. contortus. In the United States (USA), sheep are produced in a variety of systems and climates [1]. However, the overuse of the limited number of commercially available anthelmintics, even in more temperate and arid regions, has contributed to the resistance of GINs to dewormers nationwide [2][3][4]. Similar findings were reported in global parasite populations for nearly two decades [5]. The employment of rapidly developing genomic technology can be a key resource for elucidating the biological mechanisms behind the response to GINs and improving the natural resistance of sheep.
The Dorper, White Dorper and Rambouillet breeds are common in the USA, but multiple reports indicate that they are more susceptible to GINs than breeds of Caribbean descent [6][7][8][9]. A further understanding of the physiological mechanisms responsible for the host resistance to parasites in these breeds is needed for future directional selection to occur. With H. contortus regarded as the GIN of greatest concern, artificial parasite challenges were conducted in Dorper, Rambouillet and other breeds of sheep as a means to measure and improve the understanding of the biological response of lambs to this singular parasite [7,10]. Under natural grazing conditions, sheep are exposed to multiple GIN species that can be difficult to differentiate using standard fecal egg counting practices.
Artificial Parasite Challenge Data
In 2020, Rambouillet lambs (n = 81) were placed in two adjoining feedlot pens (30 m × 30 m), where they were provided grain ad libitum. The dirt feedlot pens did not contain grass, and thus, were considered a 'GIN-free' environment. After a 60-day adjustment period, all lambs were orally inoculated with 10,000 L3 H. contortus, and FECs and packed-cell volume (PCV) were recorded at 21 days post inoculation (dpi) and 35 dpi. Fecal samples were collected directly from the rectum and stored at 4 °C until analysis. All FECs were determined via a modified McMaster technique [18] that utilized a 2 g fecal sample homogenized in 28 mL of sodium nitrate solution (specific gravity = 1.25). Following mixing and removal of solids by straining through double-layered gauze, the solution was placed on a McMaster slide and strongyle eggs were counted under 100× magnification to a sensitivity of 50 epg. To determine PCV, whole blood was collected via jugular venipuncture in 16 × 100 mm purple-top tubes containing EDTA and also stored at 4 °C. All samples were centrifuged at 4000 rpm for seven minutes, and hematocrit percentage was subsequently measured. For a comprehensive description of this study and the results, see [10].
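For readers unfamiliar with the arithmetic behind a modified McMaster count, the short sketch below converts a raw egg count into eggs per gram. It is only an illustration: the counting-chamber volume (2 × 0.15 mL) and the approximation of the suspension volume as 30 mL are assumptions chosen so that each counted egg corresponds to the stated 50 epg sensitivity, not values taken from the protocol above.

```python
# Hypothetical FEC calculation for a modified McMaster count (chamber volume assumed).
def fecal_egg_count(eggs_counted, sample_g=2.0, solution_ml=28.0, chamber_ml=0.3):
    suspension_ml = sample_g + solution_ml                   # ~30 mL, treating 2 g feces as ~2 mL
    multiplier = suspension_ml / (chamber_ml * sample_g)     # = 50 epg per egg counted here
    return eggs_counted * multiplier                         # eggs per gram (epg)

print(fecal_egg_count(17))   # 17 eggs counted -> 850 eggs per gram
```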
In 2022, following the protocol previously employed with the RA lambs, DWD lambs (n = 64) were also subjected to an artificial H. contortus challenge in a feedlot setting. In contrast to the RA lambs, which were born in the spring and were reared by their dams on pastures during a season conducive to GIN survival, the DWD lambs were born in the late fall, and reared with their dams on pasture during the winter, which is a time of hypobiosis for H. contortus. Fecal samples from RA lambs at weaning indicated they had previous exposure (a primary challenge) to GINs. Given that not all DWD lambs may have had a primary exposure to H. contortus as was expected with RA lambs, all DWD lambs were inoculated with a dose of 2000 H. contortus L3 larvae and then were orally drenched with Prohibit (8 mg/kg; Agri Laboratories Ltd., St. Joseph, MO, USA) 10 d. later. After a two-week recovery period, lambs were orally inoculated with 10,000 H. contortus L3 larvae to initiate the artificial challenge trial. In line with sampling timepoints from RA lambs, FECs and PCV were recorded at 21 d and 35 d post infection for DWD lambs.
Natural Parasite Challenge Data
Post-weaning FECs from multiple contemporary groups of RA (n = 90) and DWD (n = 94) lambs from 2019 to 2021 were compiled for analysis (Figure 1). All groups were managed under the same protocol: at weaning, lambs were orally drenched with Cydectin and Valbazen and placed on pasture previously grazed by GIN-infected sheep. To allow for substantial time for lambs to become reinfected with GINs, fecal collections were not conducted for at least 60 d following deworming. Lambs were likely exposed to multiple GIN species on pasture, as was observed in similar studies [8], and the amount of parasites consumed by each individual could not be determined. In 2019, a coproculture analysis was performed for GIN speciation in RA grazing at the Texas AgriLife Research Station, and the results revealed 89% H. contortus, 10% Trichostrongylus and 1% Strongyloides. Further coproculture analyses were not performed in the following years.
The level of GIN contamination on pastures was not determined, but fecal sampling occurred during the warmer season (summer/fall, temperatures above 27 °C) when environmental conditions were favorable for H. contortus. Entire contemporary groups were also only individually fecal sampled once a subset of samples confirmed that the mean FEC of the group exceeded 500 eggs per gram (epg). Previous research indicated that a 500 epg FEC average indicates the occurrence of a moderate parasite challenge [19][20][21][22]. All FECs were performed using the McMaster method described previously.
Figure 1. Visual portrayal of the design utilized in this project. Fecal egg counts (FECs) from different contemporary groups of Rambouillet (RA) and Dorper × White Dorper (DWD) lambs naturally challenged with gastrointestinal nematodes were compiled for genome-wide association analyses. Fecal egg counts and packed-cell volume (PCV) were compiled from two separate artificial GIN challenges, either with RA lambs or DWD lambs.
Genotyping
All artificially and naturally infected lambs were genotyped with either the Axiom™ Ovine Genotyping 50 K Array (Thermo Fisher Scientific, Waltham, MA, USA) or the AgResearch Sheep Genomics 60 K SNP chip (GenomNZ, AgResearch, Mosgiel, New Zealand). Using SNP and Variation Suite (Golden Helix, Bozeman, MT, USA), a combined working dataset of 41,431 SNPs was generated by retaining genomic markers that overlapped between the two panels, with the remaining SNPs being discarded. Using PLINK v1.90, quality control filtering was performed for call rate (>90%), minor allele frequency (>99%), Hardy-Weinberg equilibrium (1.0 × 10 −6 ) and the removal of duplicates, resulting in 40,598 SNPs remaining for analysis.
Statistical Analyses
To meet normality, FEC data from both the artificial and natural challenges were Box-Cox transformed (TFEC) in R v 4.0.3, and GWASs were performed using PLINK v1.90 [23,24]. In both the artificial and natural analyses, phenotypic data from the two different breeds of sheep were combined to increase the power of the GWAS. Recognizing the need to account for across- and within-breed population structures, a principal component (PC) analysis was conducted via PLINK, and the top 20 PCs were fitted in the model as covariates. To verify that population stratification was properly accounted for, the genomic inflation factor (λ) was calculated via PLINK, and all GWAS models reported had λ = 1.
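A rough sketch of this preprocessing step is shown below, in Python rather than the R/PLINK pipeline described above; the toy phenotype and genotype values and the use of scikit-learn PCA in place of PLINK's PCA are illustrative assumptions only.

```python
# Illustrative phenotype transform and population-structure covariates (not the authors' pipeline).
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
fec = rng.gamma(shape=1.5, scale=500.0, size=50) + 1.0    # toy right-skewed raw FECs (epg), strictly positive
tfec, lam = stats.boxcox(fec)                             # Box-Cox transform toward normality

genotypes = rng.integers(0, 3, size=(50, 500)).astype(float)   # toy 0/1/2 allele dosages
pcs = PCA(n_components=20).fit_transform(genotypes)            # top 20 PCs fitted as covariates
print(tfec.shape, round(lam, 3), pcs.shape)
```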
In the artificial challenge analysis, additive and non-additive models were tested against FEC and PCV at 21 dpi, 35 dpi and the rate of change between these two timepoints. Reported are phenotypes for which significant results were obtained, which included GWAS with FEC and PCV at 35 dpi (recessive model) and a rate of change for both FEC (additive model) and PCV (recessive model) between 21 dpi and 35 dpi. The rate of change was determined by first scaling the phenotypes to a range of all positive values to accommodate the regression analysis, and then using the slope of the line fit between the two timepoints.
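As a concrete reading of this "rate of change" phenotype, the sketch below shifts the 21 dpi and 35 dpi values to a positive range and takes the slope of the line between the two timepoints; the exact scaling used by the authors is not specified, so the shift shown here is an assumption.

```python
# Sketch of the rate-of-change (slope) phenotype between 21 and 35 days post inoculation.
import numpy as np

def phenotype_slope(p21, p35, t21=21.0, t35=35.0):
    p21, p35 = np.asarray(p21, float), np.asarray(p35, float)
    shift = min(p21.min(), p35.min())
    if shift <= 0:                               # shift to all-positive values (assumed scaling)
        p21, p35 = p21 - shift + 1.0, p35 - shift + 1.0
    return (p35 - p21) / (t35 - t21)             # slope of the line between the two timepoints

print(phenotype_slope([100, 400], [300, 200]))   # one rising and one falling FEC trajectory
```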
For the natural challenge analyses, contemporary groups each consisted of lambs of one breed type and one sex ( Figure 1). Individual sex, breed and year effects in the model could not be differentiated from one another, but to account for variable parasite levels that challenged the contemporary groups in different pasture environments, the '--family' flag was used first to cluster the samples by group (which had six unique breed, sex and year combinations). Using a recessive model, GWAS was performed for TFEC with body weight and PC included as covariates. Manhattan plots displaying the GWAS results were developed using the 'qqman' [25] and 'dplyr' [26] packages in R. Significance was determined through permutation testing of GWAS models (50,000 replications) using PLINK and set at 2.0 × 10 −5 for the artificial challenge analyses and 6.0 × 10 −5 for the natural challenge analysis.
Linkage disequilibrium between pairs of significant SNPs identified was calculated using the '--ld' flag in PLINK v1.90, which computes a haplotype-based r² statistic. Haplotype block identification was performed using the '--blocks' flag, also in PLINK, with the default setting of identifying SNPs in strong LD (defined by [27]) within 200 kb of one another.
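As a rough illustration of the r² statistic, the snippet below computes the squared genotype correlation between two SNPs coded as 0/1/2 dosages. Note this is only an approximation: PLINK's '--ld' reports a haplotype-based r² (estimated via EM phasing), which can differ slightly from the genotype correlation shown here.

```python
# Approximate LD as the squared correlation of 0/1/2 genotype dosages (illustrative only).
import numpy as np

def genotype_r2(snp_a, snp_b):
    a, b = np.asarray(snp_a, float), np.asarray(snp_b, float)
    keep = ~np.isnan(a) & ~np.isnan(b)            # drop individuals missing either genotype
    r = np.corrcoef(a[keep], b[keep])[0, 1]
    return r ** 2

print(genotype_r2([0, 1, 2, 2, 0, 1], [0, 1, 2, 2, 0, 1]))  # identical genotypes -> 1.0
```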
Gene Identification and Annotation
Reported SNP locations are from the ARS-UI_Ramb_v2.0 genome assembly [28]. Proximity of an SNP to a gene was explored using Genome Data Viewer from the National Center for Biotechnology Information (https://www.ncbi.nlm.nih.gov/genome/gdv/, accessed on 1 February 2023). Further analysis of candidate genes occurred if a significant SNP was located within 20 kb of the gene. Gene functional annotation and corresponding ontology (GO) terms for Ovis aries were sourced from the UniProt database [29]. In the instance that annotation in sheep was not available, gene function was subsequently sourced from the Bos taurus database. A heatmap visually depicting enriched biological processes was developed using 'ggplot' from the 'tidyverse' package in R [30].
Artificial Challenge GWAS Results
Genome-wide significant SNPs associated with TFEC during a H. contortus infection were identified (Figure 2). The descriptive statistics of the FEC and PCV phenotypes by breed for which the associations were tested in artificially challenged lambs are displayed in Table 1, and the significant SNP marker information is described in Table 2. An SNP on chromosome 3 in exon 18 of SCUBE1 was associated with increased TFEC (Figure 3). This variant allele at rs159935395 was predominantly present in the RA lambs (freq = 0.179) versus the DWD lambs (freq = 0.008), where it was only reported in one heterozygous Ref/Alt lamb. Furthermore, four additional SNPs were identified within 9 kb of the genes DERL2, TULP1, PXDC1 and LOC114114021.
In contrast to the natural parasite challenge, the artificial parasite challenge protocol included repeated data collections over multiple timepoints, allowing for the analysis of phenotype change over the period of time when the GIN infection was expected to develop. When performing the GWAS for rate of change of the FECs from 21 dpi of the artificial challenge to 35 dpi, concisely described as FEC slope, two more significant SNPs were identified when an additive model was employed. One of these markers, rs415241061, is located in an intron region of CAPZB. In both breeds of sheep, the lambs homozygous for the alternate SNP had a reduced FEC slope (Figure 4).
In addition to the FEC phenotypes, the PCV was also captured during the artificial challenge trials as a quantification of the lamb resilience to H. contortus infection. No SNPs met significance for the PCV at 21 dpi with the GWAS; however, at 35 dpi, three significant SNPs were identified, including two intronic SNPs in SLC49A4 and GLCE. When analyzing the PCV slope, two significant SNPs were identified on chromosome 2, with rs422296454 being located in exon 6 of TRIM14.
Natural Challenge GWAS Results
Table 3. Dorper × White Dorper (DWD) and Rambouillet (RA) lambs' contemporary group descriptive statistics.
The descriptive statistics of the phenotypic information for each of the six groups included in the natural challenge dataset are described in Table 3. The mean FEC for each group ranged from 835 epg to 1919 epg, indicating that the lambs were exposed to moderate parasite challenges. Using a recessive model, six significant SNPs were identified. As described in Table 4, two of these SNPs were associated with a decrease in TFEC, and three were associated with an increase in TFEC. Three of the identified SNPs, on chromosome 12, were located in intronic regions of the genes EXO1, BRINP3 and DNM3. The frequencies of significant SNPs within each breed type are provided in Table 4, with frequencies ranging from 0.022 to 0.489 for the RA lambs, and 0.229 to 0.484 for the DWD lambs. Two SNPs on chromosome 12 associated with an increase in TFEC, rs429291496 and rs417624219, in DWD lambs, have the same frequency. Linkage disequilibrium (LD) analysis, which shows that they have an r 2 value of 1, indicates that they are in full LD (Supplementary Table S1). In the RA lambs, these two SNPs had an r 2 value of 0.49. The follow-up haplotype analysis revealed that these two SNPs are included in a five-SNP haplotype block spanning 147 kb in the DWD lambs (Supplementary Figure S1). The genes located within this block include MAP1LC3C, EXO1 and WDR64.
Gene Ontology
When searched in the UniProt database, the genes identified via the GWASs returned 110 unique GO terms (Supplementary Table S2). The associated biological processes (BPs) for the GO terms associated with positionally significant genes identified for each phenotype are reported in Figure 5. Across all the phenotypes, 'signaling' and 'anatomical structural development' were the BPs with the greatest number of subprocesses associated with the candidate genes revealed in this study.
Discussion
With the data captured from two separate experimental procedures, this study utilizes GWASs to identify the markers associated with the biological response of lambs from two sheep breeds to GIN infection. Both the DWD and RA lambs in this study are from breeds that are common for lamb production in the USA, but were previously described as more susceptible to GINs than other breeds with more known resistance [7,[31][32][33]. While we do not assume that the response of the RA and DWD lambs to GINs are exactly the same, the data from the two breed types were combined for both the artificial and natural challenge analyses to increase the sample population. When each breed was analyzed individually, few significant results were returned; however, this could have been due to a limited sample size for each breed. Furthermore, due to the restricted scope of this study, it is important to consider the limited number of breeding rams used (eight RA sires and seven DWD sires) in this project. The SNP and gene variant frequencies that exist in the RA and Dorper or White Dorper breeds may not be comprehensive in this study.
In total, the GWASs identified 20 significant SNPs associated with the FECs and PCV collected from the lambs under an artificial H. contortus challenge, and 6 significant SNPs associated with lamb FECs when collected following a natural parasite challenge. Of the 26 significant SNPs identified in this study, 21 were located in exons, introns or within 20 kb of genes, suggesting that SCUBE1, TRIM14, EXO1, BRINP3, DNM3, GALNT6, CEP350, IGF1R, SYNGR1, RHOA, ZBTB44, AHNAK, CTIF, CAPZB, PTK2B, DERL2, TULP1, PXDC1, SLC49A4, GLCE and LOC114114021 all potentially influence the lamb response to GINs in the populations we evaluated.
Multiple markers within the genes were identified in the GWASs for the FECs of lambs artificially challenged with H. contortus, including one SNP in the exon of SCUBE1. Signal peptide CUB domain and EGF-like domain containing 1, encoded by SCUBE1, is a member of the epithelial growth factor superfamily and is highly expressed in platelets [34] and vascular endothelial cells [35]. More specifically, it is a cell surface glycoprotein that is thought to assist in platelet aggregation, as observed in mice [36]. In our artificial parasite challenge, lambs received a large dose of H. contortus larvae in a single inoculation, which all likely reached maturity and began feeding on blood simultaneously. Previous abomasal transcriptome research revealed the increased expression of genes involved in the complement and coagulation pathways in merino lambs artificially infected with H. contortus larvae [37].
The SNP with the highest significance identified in our study when tested with FEC at 35 dpi, rs424235017 (p = 1.80 × 10 −9 ), is located within the intron of GALNT6. Al Kalaldeh et al., 2019 [38] also identified SNPs within GALNT6 associated with FECs in a large GWAS that included Dorper and Merino sheep, in addition to other breeds. This result is also in line with that of Benavides et al., 2015 [16] whose GWAS with FECs in Dorper and Red Maasai sheep identified an SNP marker within ~100 kb of a gene within the same family as GALNT4. A KEGG analysis indicates that the GALNT family of genes are paramount in the mucin-type O-glycan biosynthesis pathway, which is important for modifying the serine or threonine residues of proteins. Mucins are highly glycosylated proteins and are a primary component of the mucosa layer that serves as an initial barrier against helminths attempting to burrow into the gastric pits of the abomasum [39]. GIN-susceptible sheep parasitized with H. contortus larvae were shown to have reduced and altered types of mucins present in the abomasal mucosa layer, but not in more GIN-resistant animals [40]. The mutation of GALNT6 observed in our study may be associated with a decrease in the mucin production or glycosylation, resulting in a greater establishment of H. contortus.
Another SNP with high significance in our study was located in IGF1R, which encodes the insulin-like growth factor 1 receptor. IGF1R binds IGF with a high affinity and is critical for cell growth and survival as it is an upstream activator of the PI3K-AKT/PKB and Ras-MAPK pathways. Previous research has identified associations between variants of IGF1R and increased growth in sheep [41][42][43]. Berton et al., 2017 [44] identified an SNP in IGF1R as being associated with hematocrit in naturally parasitized Santa Inês sheep. Chen et al., 2012 [45] identified increased levels of Igf-1 expression in mice artificially infected with helminths. In addition, Chen et al., 2012 [45] found increased levels of IL-4 and IL-13, which are hallmark cytokines for a Th2-type immune response and promoters of localized wound healing [46].
Multiple quantitative trait loci (QTL) associated with FEC in Merino sheep were identified upstream of our identified SNP from 107.3 to 119.9 Mbp on chromosome 2 [38]. In addition, another intronic SNP on chromosome 2, rs415241061, exceeded the significance threshold in our study when testing for FEC changes. This marker is located within CAPZB, which encodes an F-actin capping protein that is important in muscle development. Hong et al., 2017 [47] also revealed that in mice, CAPZB binds to gp96, which is a member of the heat shock protein 90 chaperones, providing a potential link between CAPZB and innate immune function.
The remaining genes with an intronic SNP identified in the GWASs with artificial challenge FEC phenotypes include CEP350, SYNGR1, RHOA, ZBTB44, AHNAK and CTIF. To our knowledge, these genes have not been previously reported to be linked to parasite resistance in sheep. CEP350 plays a role in stabilizing microtubules in the Golgi apparatus of animal cells [48]. SYNGR1 is critical in the presynaptic vesicle formation in neurons and is notably associated with brain disorders in humans [49]. RHOA is a member of the Rho family that plays a role in cellular signal transduction. Mutations in RHOA were also associated with T follicular helper cell specification [50], and one of the leading GO terms associated with RHOA is GO:0044319, which is also associated with wound healing and the spreading of cells. ZBTB44 is a member of the zinc finger and BTB domain-containing family, whose functions include wide ranging B-and T-cell development [51]. AHNAK encodes a large nuclear phosphoprotein that also impacts TGFβ signaling [52], and which can ultimately have downstream immune function effects. CTIF is critical for the pioneer round of mRNA translation and gene expression [53].
In the artificial challenge GWAS analyses, two significant SNPs in the intron regions of SLC49A4 and GLCE were associated with a PCV at 35 dpi, almost exclusively in the RA lambs. Disrupted in renal cancer protein 2 (DIRC2) is an alias of SLC49A4 and encodes a metabolite transporter; previous associations to renal tumor formation were described in humans with mutant DIRC2 [54]. GLCE encodes the glucuronic acid epimerase enzyme, which plays an active role in the glycosaminoglycan biosynthesis-heparan sulfate/heparin metabolic pathway, which was shown to be enriched in a previous GWAS exploring parasite resistance in Morada Nova sheep [14]. During the synthesis of heparan sulfate proteoglycans (HSPGs), GLCE converts glucuronic acid to iduronic acid [55], which, in turn, promotes the binding ability of HSPGs [56]. While HSPGs have a multitude of biological functions, it was previously reported that HSPG mutant mice have reduced mast cell and platelet aggregation [57].
In addition, two significant SNPs associated with PCV change exceeded significance in the GWAS, including a marker in exon 6 of TRIM14, which encodes the tripartite motif 14 protein. TRIM14 is believed to have a wide range of biological roles, including affecting the innate immune response to viral infection [58]. An additional SNP in an intronic region of PTK2B was also identified in our study. PTK2B, also known as PYK2B, is a tyrosine kinase that is commonly expressed in hematopoietic cells and plays an essential role in platelet aggregation [59]. A mutation in PTK2B could limit the ability of lambs in this study with abomasal epithelial hemorrhage to coagulate at the wound-site, though further research would be needed to confirm this theory.
Three genes, EXO1, BRINP3 and DNM3, identified with the natural challenge data, were all located on chromosome 12, which harbors multiple previously identified QTLs associated with FECs in several breeds of sheep [60][61][62]. Exonuclease 1 (EXO1) was described as important for genome maintenance, playing a central role in Mre11-Rad50-Xrs2 recruitment and cellular regulation during DNA double-stranded break repair [63,64]. Mice with double-knockout Exo1 were shown to have a significantly higher cancer predisposition and 50% lower survival rate at 16 months [65]. In our results, we observed an incremental increase in the TFEC per allele of the rs429291496 SNP in EXO1, suggesting that this SNP may be associated with gene function. EXO1 is more highly expressed in mesenteric lymph nodes, lymph node prescapular and Peyer's patch compared to other tissues in sheep, insinuating that it has a role in immune function [66]. A previous study identified a QTL for FECs in French breeds of sheep that is located within 1 Mb of rs429291496, when mapped to the OAR v3.1 assembly [61].
Curiously, in DWD lambs only, rs429291496 was in full LD with four other SNPs covering a 147 kb span from position 34,187,202 to 34,334,816 of chromosome 12. Upstream of EXO1 in this haplotype block includes MAP1LC3C, which plays an important role in autophagy and cellular maintenance [67]. Downstream of EXO1, but still within the same block, includes WDR64; however, it is almost exclusively expressed in the testes [65]. The complete haplotype block identified in the DWD lambs was not present in the RA lambs in this study.
Furthermore, BRINP3 is predominantly a regulator of neuron differentiation, and knockout studies in mice have shown that Brinp3−/− mice have altered sociability [68]. Interestingly, the under-expression of BRINP3 in humans was also associated with Ulcerative Colitis [69]. Also highly expressed in the central nervous system is DNM3, which is involved in microtubule formation and vesicular transport and was reported to be downregulated in human cases of colon cancer [70].
The functions of the positional candidate genes identified using either the artificial or natural parasite challenge GWASs were further explored in this study. The biological processes associated with the three fully annotated genes revealed in the natural challenge analyses included 'anatomical structural development', 'immune system processes', 'vesicle mediated transport' and 'cell differentiation'. Despite no candidate genes identified in common between the natural and artificial GWAS, all four of these biological processes were also enriched by GO terms associated with genes revealed exclusively in the artificial analyses. While 'anatomical structural development' is a broadly defined term, this BP was also highly enriched in the analyses, suggesting that tissue regeneration is a critical component of withstanding a parasite infection.
Including both natural and artificial parasite infections in this study, as well as focusing on two parasite-susceptible breeds with distinct characteristics, provided a multiplicative approach to identifying SNPs, candidate genes and physiological differences that may differentiate the ability of sheep to withstand GINs. It is important to reiterate that in the natural parasite challenge, the lambs were potentially exposed to multiple GIN species, but likely at a lower and more consistent rate than the lambs in the artificial challenge, which were inoculated with a single large dose of H. contortus larvae at a given time point. The difference in the design of the natural and artificial trials ('trickle' infection with potentially mixed species vs. one-time dose of H. contortus only), in addition to the fact that the same lambs were not subjected to both protocols, may contribute to why the variation in the genomic regions and positional candidate genes did not overlap. Even with some incongruencies between the infection scenarios and the fact that the candidate genes revealed in the analyses differed, there is evidence that there is an overlap in the physiological response to GINs regardless of how the parasites are consumed ( Figure 5). It is also evident from the annotation analyses that the ability of lambs to mobilize cellular resources to reconstruct tissue when withstanding a one-time inoculation with H. contortus is important.
Given the multiple gene regions identified in this study, as well as in other studies, it is unlikely that significant progress for improved resistance in sheep will be achieved by selecting for individuals with a single preferred gene variant or SNP genotype. Genetic selection for multiple preferred haplotypes with a larger effect on GIN resistance phenotypes or even genomically enhanced breeding values that fit the effects of numerous SNPs will be necessary for rapid progress. Additional consideration may be given to crossbreeding strategies, which could combine beneficial gene variants for parasite resistance, as this study identified novel candidate genes that were not previously discovered in other breeds with a similar research design. Further research with a larger sample size that is more robust against the founder effect may be able to more clearly delineate the individual breed differences that may exist between the RA and DWD lambs.
Conclusions
These analyses revealed 26 significant genomic markers for parasite susceptibility in hair and wool breeds of sheep when challenged either naturally or artificially with GINs. H. contortus remains a significant health challenge in Dorper, White Dorper and Rambouillet sheep, and our results support the consensus that the susceptibility to this GIN is polygenic and variable across and within breed type.
Importantly, for future research, significant SNPs identified in this study also provide insight into the physiological mechanisms that are responsible for the resistance or susceptibility to GINs. Our results further reiterate the importance of effective cellular signaling to aid not only in a timely immune response for the host defense against GINs, but in the efficient regeneration of tissue following parasite infection. Furthermore, we identified markers that can serve as a foundation and resource for future parasitology research within these breeds and be used in concert with other markers for directional selection towards animals that are better equipped to withstand parasite challenges. | 7,821.8 | 2022-09-21T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |
One-step and Two-step Classification for Abusive Language Detection on Twitter
Automatic abusive language detection is a difficult but important task for online social media. Our research explores a two-step approach of performing classification on abusive language and then classifying it into specific types, and compares it with a one-step approach of doing one multi-class classification for detecting sexist and racist language. With a public English Twitter corpus of 20 thousand tweets annotated for sexism and racism, our approach shows a promising performance of 0.827 F-measure by using HybridCNN in one step and 0.824 F-measure by using logistic regression in two steps.
Introduction
Fighting abusive language online is becoming more and more important in a world where online social media plays a significant role in shaping people's minds (Perse and Lambe, 2016). Nevertheless, major social media companies like Twitter find it difficult to tackle this problem (Meyer, 2016), as the huge number of posts cannot be moderated with human resources alone. Warner and Hirschberg (2012) and Burnap and Williams (2015) are among the early studies to use machine learning based classifiers for detecting abusive language. Djuric et al. (2015) incorporated distributed word representations (word embeddings) (Mikolov et al., 2013). Nobata et al. (2016) combined pre-defined language elements and word embeddings to train a regression model. Waseem (2016) used logistic regression with n-grams and user-specific features such as gender and location. Davidson et al. (2017) conducted a deeper investigation on different types of abusive language. Badjatiya et al. (2017) experimented with deep learning-based models using ensemble gradient boost classifiers to perform multi-class classification on sexist and racist language. All of these approaches perform classification in a single step.
Many have addressed the difficulty of defining abusive language while annotating data, because such judgments are often subjective to individuals (Ross et al., 2016) and the posts often lack context (Waseem and Hovy, 2016; Schmidt & Wiegand, 2017). This makes it harder for non-experts to annotate without having a certain amount of domain knowledge (Waseem, 2016).
In this research, we aim to experiment with a two-step approach of detecting abusive language first and then classifying it into specific types, and to compare it with a one-step approach of doing one multi-class classification on sexist and racist language.
Moreover, we explore applying a convolutional neural network (CNN) to tackle the task of abusive language detection. We use three kinds of CNN models that use both character-level and word-level inputs to perform classification on different dataset segmentations. We measure the performance and ability of each model to capture characteristics of abusive language.
Methodology
We propose to implement three CNN-based models to classify sexist and racist abusive language: CharCNN, WordCNN, and HybridCNN. The major difference among these models is whether the input features are characters, words, or both.
The key components are the convolutional layers that each compute a one-dimensional convolution over the previous input with multiple filter sizes and large feature map sizes. Having different filter sizes is the same as looking at a sentence with different windows simultaneously. Max-pooling is performed after the convolution to capture the feature that is most significant to the output.
CharCNN
CharCNN is a modification of the character-level convolutional network in (Zhang et al., 2015). Each character in the input sentence is first transformed into a one-hot encoding of 70 characters, including 26 English letters, 10 digits, 33 other characters (punctuations and special characters), and a newline character. All other non-standard characters are removed. Zhang et al. (2015) use 7 convolutional and max-pooling layers, 2 fully-connected layers, and 1 softmax layer, but we also designed a shallow version with 2 convolutional and max-pooling layers, 1 fully-connected layer, and 1 softmax layer with dropout, due to the relatively small size of our dataset, to prevent overfitting.
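A minimal sketch of this character encoding is shown below; the exact alphabet and the 140-character maximum length are illustrative assumptions, not the precise values used in the paper.

```python
# Hypothetical character-level one-hot encoder in the style of Zhang et al. (2015).
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+=<>()[]{}\n"
CHAR_TO_IDX = {ch: i for i, ch in enumerate(ALPHABET)}

def one_hot_encode(text, max_len=140):
    x = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for i, ch in enumerate(text.lower()[:max_len]):
        j = CHAR_TO_IDX.get(ch)        # characters outside the alphabet leave an all-zero row
        if j is not None:
            x[i, j] = 1.0
    return x

print(one_hot_encode("#Feminazi!?").shape)   # (140, alphabet size)
```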
WordCNN
WordCNN is the CNN-static version proposed by Kim (2014). The input sentence is first segmented into words and converted into 300-dimensional word2vec embeddings trained on 100 billion words from Google News (Mikolov et al., 2013). Incorporating pre-trained vectors is a widely-used method to improve performance, especially when using a relatively small dataset. We set the embedding to be non-trainable since our dataset is small.
We propose to segment some out-of-vocabulary phrases as well. Since Twitter tweets often contain hashtags such as #womenagainstfeminism and #feminismisawful, we use the wordsegment library (Segaran and Hammerbacher, 2009) to capture more words.
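For illustration, hashtag splitting with the Python `wordsegment` package might look like the following; the exact library version and preprocessing used by the authors may differ.

```python
# Possible hashtag segmentation step (illustrative).
from wordsegment import load, segment

load()  # load the unigram/bigram frequency data used by the segmenter
for tag in ["womenagainstfeminism", "feminismisawful"]:
    print(tag, "->", segment(tag))   # e.g. 'womenagainstfeminism' -> ['women', 'against', 'feminism']
```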
HybridCNN
We design HybridCNN, a variation of WordCNN, since WordCNN has the limitation of only taking word features as input. Abusive language often contains either purposely or mistakenly misspelled words and made-up vocabularies such as #feminazi.
Therefore, since CharCNN and WordCNN do not use character and word inputs at the same time, we design the HybridCNN to experiment whether the model can capture features from both levels of inputs.
HybridCNN has two input channels. Each channel is fed into convolutional layers with three filter windows of different sizes. The outputs of the convolutions are concatenated into one vector after 1-max-pooling. The vector is then fed into the final softmax layer to perform classification (see Figure 1).
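A compact sketch of such a two-channel architecture is given below in Keras. The embedding dimensions, filter widths, filter counts and dropout rate are illustrative guesses, not the paper's reported hyperparameters.

```python
# Hypothetical HybridCNN-style model: word channel + character channel, 1-max pooling, softmax.
from tensorflow.keras import layers, Model

MAX_WORDS, MAX_CHARS = 30, 140
VOCAB_SIZE, CHAR_VOCAB, WORD_DIM, CHAR_DIM = 20000, 70, 300, 16
NUM_CLASSES = 3  # none / sexism / racism

def conv_branch(x, filter_sizes=(3, 4, 5), n_filters=100):
    pooled = [layers.GlobalMaxPooling1D()(layers.Conv1D(n_filters, fs, activation="relu")(x))
              for fs in filter_sizes]          # three filter windows, 1-max pooling each
    return layers.Concatenate()(pooled)

word_in = layers.Input((MAX_WORDS,), name="words")
char_in = layers.Input((MAX_CHARS,), name="chars")
w = layers.Embedding(VOCAB_SIZE, WORD_DIM, trainable=False)(word_in)  # e.g. pre-trained word2vec
c = layers.Embedding(CHAR_VOCAB, CHAR_DIM)(char_in)
features = layers.Dropout(0.5)(layers.Concatenate()([conv_branch(w), conv_branch(c)]))
out = layers.Dense(NUM_CLASSES, activation="softmax")(features)

model = Model([word_in, char_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```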
Datasets
We used the two English Twitter datasets (Waseem and Hovy, 2016; Waseem, 2016) published as unshared tasks for the 1st Workshop on Abusive Language Online (ALW1). They contain tweets with sexist and racist comments. Waseem and Hovy (2016) created a list of criteria based on a critical race theory and let an expert annotate the corpus. First, we concatenated the two datasets into one and then divided that into three datasets for one-step and two-step classification (Table 1). The one-step dataset is a segmentation for multi-class classification. For two-step classification, we merged the sexism and racism labels into one abusive label. Finally, we created another dataset with abusive language to experiment with a second classifier to distinguish "sexism" and "racism", given that the instance is classified as "abusive".
Training and Evaluation
We performed two classification experiments:
1. Detecting "none", "sexist", and "racist" language (one-step)
2. Detecting "abusive language", then further classifying into "sexist" or "racist" (two-step)
Figure 1. Architecture of HybridCNN.
The purpose of these experiments was to see whether dividing the problem space into two steps makes the detection more effective.
We trained the models using mini-batch stochastic gradient descent with AdamOptimizer (Kingma and Ba, 2014). For more efficient training in an unbalanced dataset, the mini-batch with a size of 32 had been sampled with equal distribution for all labels. The training continued until the evaluation set loss did not decrease any longer. All the results are average results of 10-fold cross validation.
As evaluation metric, we used the F1 score along with precision and recall, and weighted-averaged the scores to account for the imbalance of the labels. For this reason, the total average F1 might not lie between the average precision and recall.
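The weighted averaging referred to here is the standard label-frequency-weighted average; a quick sketch with scikit-learn (not necessarily the authors' evaluation code) is:

```python
# Weighted-average precision/recall/F1 over imbalanced labels (illustrative).
from sklearn.metrics import precision_recall_fscore_support

y_true = ["none", "none", "none", "sexism", "racism", "sexism"]
y_pred = ["none", "none", "sexism", "sexism", "racism", "none"]
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="weighted", zero_division=0)
print(round(p, 3), round(r, 3), round(f1, 3))  # the weighted F1 need not lie between P and R
```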
As baseline, we used the character n-gram logistic regression classifier (indicated as LR on Table 2-4) from Waseem and Hovy (2016), Support Vector Machines (SVM) classifier, and FastText (Joulin et al., 2016) that uses average bag-of-words representations to classify sentences. It was the second best single model on the same dataset after CNN (Badjatiya et al., 2017).
Hyperparameters
For hyperparameter tuning, we evaluated on the validation set. These are the hyperparameters used for evaluation.
One-step Classification
The results of the one-step multi-class classification are shown in the top part of Table 2.
Our newly proposed HybridCNN performs the best, giving an improvement over the result from WordCNN. We expected that the additional character input channel would improve the performance. We assume that the reason CharCNN performs worse than WordCNN is that the dataset is too small for the character-based model to capture word-level features by itself.
Baseline methods tend to have high averaged F1 but low scores on racism and sexism labels due to low recall scores.
Two-step Classification
The two-step approach that combines two binary classifiers shows comparable results with one-step approach. The results of combining the two are shown in the bottom part of Table 3.
Combining two logistic regression classifiers in the two-step approach performs about as well as one-step HybridCNN and outperforms the one-step logistic regression classifier by more than 10 F1 points. This is surprising since logistic regression uses fewer features than HybridCNN.
Furthermore, using HybridCNN on the first step to detect abusive language and logistic regression on the second step to classify racism and sexism worked better than just using HybridCNN. Table 4 shows the results of abusive language classification. HybridCNN also performs best for abusive language detection, followed by WordCNN and logistic regression. Table 5 shows the results of classifying into sexism and racism given that the language is abusive. The second classifier performs well in predicting the specific type (in this case, sexism or racism). Since the precision and recall scores of the "abusive" label are higher than those of "racism" and "sexism" in the one-step approach, the two-step approach can perform as well as the one-step approach.
Conclusion and Future work
We explored a two-step approach of combining two classifiers: one to classify abusive language and another to classify a specific type of sexist and racist comments given that the language is abusive. With many different machine learning classifiers including our proposed HybridCNN, which takes both character and word features as input, we showed the potential of the two-step approach compared to the one-step approach, which is simply a multi-class classification. In this way, we can boost the performance of simpler models like logistic regression, which is faster and easier to train, and combine different types of classifiers like a convolutional neural network and logistic regression together depending on each model's performance on different datasets.
We believe that two-step approach has potential in that large abusive language datasets with specific label such as profanity, sexist, racist, homophobic, etc. is more difficult to acquire than those simply flagged as abusive.
For this reason, in the future we would like to explore training the two-step classifiers on separate datasets (for example, a large dataset with abusive language for the first-step classifier and a smaller specifically-labelled dataset for the second-step classifier) to build a more robust and detailed abusive language detector.
"Computer Science",
"Linguistics",
"Sociology"
] |
Indoor Positioning System Infrastructure Based on Triangulation Method through Visible Light Communication
— Autonomous mobile robots are widely used in industry to assist human work, but the concept has a weakness: the robot still does not know its position in a room, so it cannot detect whether it has reached the destination point. The technology commonly used to determine the position of objects is the Global Positioning System (GPS). However, GPS cannot detect objects that are indoors. Previous research used Wi-Fi as a reference for designing an indoor positioning system, but that system could not determine the position in detail because Wi-Fi could only detect object zones. Based on these problems, this research proposes an infrastructure prototype design for an indoor positioning system based on Visible Light Communication (VLC). The main focus of this research is designing a VLC transmitter and receiver system, estimating the distance between the receiver and transmitter based on the received signal strength, and estimating the receiver's position using the triangulation method from a minimum of 3 distance estimates. The distance estimation system achieves an average accuracy of 76.47%. The best accuracy of the x-coordinate position estimate is 77.05% and the best accuracy of the y-coordinate estimate is 86.54%.
I. INTRODUCTION
A Remotely Operated Underwater Vehicle (ROV) is a submersible robotic system, used to examine various underwater characteristics and controlled by operators from shore [1]. With complex, dangerous and confined areas urgently needing to be explored, there is a pressing need for an underwater machine that can replace humans in completing underwater detection. ROVs were developed to perform resource exploration tasks in the ocean [2]. The applications of ROVs are widely diverse, such as the oil and gas industry, discovery, aquaculture, marine biology, and military purposes [3][4][5][6].
Numerous ROV designs are assembled around or inside a cubic structured frame with a buoyant top body. The robotic vehicle uses an umbilical data cable as its electrical link and is controlled by an operator positioned on a surface vessel.
The heavier tools are placed in the lowest possible position to keep the center of buoyancy higher than the center of gravity, so that adequate stability is obtained. Sufficient stability means the ROV can withstand disturbances such as rolling and pitching moments about the longitudinal and lateral axes. Steady hydraulic rods suited to lifting and carrying particular equipment for specific purposes are placed in the fore, along with cameras and lights.
Underwater ROVs are generally divided into categories based on size, weight, capacity, or performance. Two common and widely used designs are the Micro-Class ROV, weighing no more than 3 kg and used especially in narrow areas where a diver might not be able to enter, and the Mini-Class ROV, weighing about 15 kg and also used as a diver alternative. Inspection-Class ROVs are typically rugged ROVs for commercial or industrial use, data acquisition and observation. The lightweight duty category generally has no more than 50 hp of propulsion, while the heavy-duty type has less than 220 hp with the ability to lift at least two handlers and work at depths up to 3500 m, and the Trenching & Burial work class is the largest with more than 200 hp of propulsion, with the ability to lay cable, put down sleds and work at depths up to 6000 m in several cases [7][8][9][10]. The ROV design and construction is a robust solution to meet the various requirements of an ROV used in wide-ranging implementations; it is compact, handy to use and inexpensive, and allows for confined space exploration. The main form of the body is designed to meet the particular utilization needs. Commonly, most ROVs adopt a torpedo form, and a hydrodynamic main body is used at high speeds [11]. Another type is the torpedo-less form, used mainly in remotely operated vehicles (ROVs) that are often operated for shorter periods of time or for the assessment of other large subsea areas such as huge ice floes or water dams [12]. To emphasize performance against water resistance, Computational Fluid Dynamics (CFD) plays a valuable part in unmanned vehicle design. Nowadays, the CFD approach has been widely applied in underwater vehicle design, as it can simulate the fluid flow field around the body and help comprehend the physics of fluid flow phenomena. CFD captures flow-field features such as vortices from very small to major scales, which are hard to obtain through field experiments.
CFD methods enable prediction and serve as a diagnostic technique for identifying the cause of particular problems in physical phenomena. The latest CFD approaches provide a wealth of generated data, although visualizing the implications of the results requires considerable skill. Because of this, CFD has become an integral part of the design process.
From the standpoint of economic cost, numerical simulation can deliver results within a sufficiently short time span in the design process, so that the design optimization can be detailed while addressing issues during the design process. Numerical simulation is considered a low-cost analysis and hence serves as an early stage of understanding, alongside experimental tests, which can be costly and time-intensive.
In this project, the CFD analysis assists in investigating and demonstrating variable models for the hull body form based on an initial prototype [13]. In this research, a design of a Mini-ROV is developed, especially the principal body arrangement in which each element and electric part is embedded. Furthermore, the hydrodynamic parameters and analysis are provided based on CFD simulation.
II. METHODS
The experimental method was used in this study, where a series of experiments were conducted directly alongside theoretical studies. The overall block diagram of the system built is shown in Fig. 1. The explanation of Fig. 1 is as follows: first, the ID data is initialized on the Arduino, then converted into ASCII data to generate the binary code that will be sent via UART (Universal Asynchronous Receiver-Transmitter) serial communication. The modulation process is then carried out, namely the process of converting the information into the carrier signal in the form of LED light. The LED driver is used as a switching circuit that converts the UART signal into the LED's light. On the receiver side, the TSL252 sensor receives the light intensity and then performs a demodulation process so that the ID data is retrieved; next, the distance is estimated based on the received signal strength and the position is estimated based on the triangulation method.
The prototype that will be made consists of 4 lamps at a height of 80 cm from the floor, with a distance of 40 cm between lamps, and the height of the photodetector sensor to the lamp is 65 cm. Based on the existing block diagram, the first step is to initialize the ID data, which is a letter of the alphabet: a, b, c, or d. This ID serves as the identity and coordinate (x, y) information of each lamp [11]. Next, the Arduino Uno, which acts as the transmitter microcontroller, converts the ID data into UART serial format and forwards it to the IRF520 MOSFET LED driver module. Data will be sent via visible light communication using on-off keying modulation [12].
The receiver device uses the TSL252 sensor to receive the light intensity and convert it into electrical pulses [13]. The sensor is a combination of a photodiode and a transimpedance amplifier, which together convert light intensity into voltage. The receiver circuit wiring is done by connecting the sensor ground pin to the Arduino ground pin, the sensor VCC pin to the Arduino 5V pin, and the sensor OUT pin to the Arduino RX pin. The system uses a grid separator and 4 TSL252 sensors to avoid incoming data collisions, which would cause the data bits to interfere and prevent conversion to ID data. The incoming signal is identified as coming from a particular lamp based on the ID data received by the receiver system. The power of each received signal is calculated, and the log-normal shadowing equation is used to estimate the distance from receiver to transmitter. In this research, the process of measuring the distance between the transmitter and the receiver is based on the Received Signal Strength (RSS) method.
Several methods are conceptually capable of solving this problem. The most common examples are Time Difference of Arrival (TDOA) and RSS. With TDOA, the distance is calculated from the time required to send data from transmitter to receiver, which is proportional to the traveled distance. The RSS method instead uses the signal received at the receiver: if the distance between transmitter and receiver is short, the received signal has more power. Because the speed of light is very high, applying the TDOA method in a prototype makes it difficult to measure how much time the transmitter needs to send data to the receiver at each test point. Therefore, the easier method to apply in this research is the RSS method, which uses the signal strength together with partitions between the sensors so that each sensor receives only one signal from a specific transmitter and interference from other transmitters is avoided.
Based on the RSS method, when the received power is greater, the distance between transmitter and receiver is closer, and vice versa [14]. The log-normal shadowing model is used to calculate the estimated distance, given the power value at the reference point, the power at the test point, and the path-loss exponent value. The power at the reference point is the power measured when the receiver is directly under a lamp (transmitter). The path-loss exponent is a coefficient that describes the reduction in transmit power as the data travels from the transmitter until it is successfully received at the receiver. In the next step, the receiver position is estimated using the distance estimates and the triangulation method [15].
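As a rough illustration of this step, the sketch below inverts the standard log-distance (log-normal shadowing) relation to estimate the direct distance. It assumes the powers are expressed in dB; the function name and numeric values are hypothetical and only mirror the description above, not the paper's exact equation (1).

```python
import math

def estimate_distance(p0_db, prx_db, n, d0):
    """Standard log-distance path-loss inversion:
    Prx = P0 - 10*n*log10(d/d0)  =>  d = d0 * 10**((P0 - Prx) / (10*n))."""
    return d0 * 10 ** ((p0_db - prx_db) / (10.0 * n))

# Hypothetical example: reference power measured at d0 = 65 cm (receiver
# directly under the lamp), a weaker power at the test point, and n = 2.0.
d = estimate_distance(p0_db=4.9, prx_db=1.5, n=2.0, d0=65.0)
print(f"estimated direct distance: {d:.1f} cm")
```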
The triangulation method can be used to determine the position of an object by referring to the distances to three known points. Classically, a straight line is drawn connecting two points whose separation can be measured directly, and this baseline is measured carefully. The angle between the baseline and the line from the first point to the object is then measured, as is the angle between the baseline and the line from the second point to the object. From these two angles and the baseline length, the distance to the object can be calculated. The design of the system is divided into 3 parts: the transmitter device, the receiver device, and the design of the distance and position estimation. Fig. 2 illustrates the overall positioning system to be built: the prototype uses 4 LED lights as the transmitter devices that send the ID data; the brown box is the object whose position will be estimated, on which a receiver sensor is installed to receive the data; from the received data and light intensity, the distance and position of the object relative to the transmitters are estimated; and the circles in the figure show the range of data reception.
A. Transmitter Device
The transmitter hardware design consists of a 20 W 12 VDC LED lamp, an IRF520 LED driver, a 12 VDC power supply, and an Arduino Uno microcontroller. Fig. 4 shows the process of sending data by the transmitter. First, the data is initialized on the Arduino and the baud rate is set for UART serial communication. The data, in the form of alphabet letters, is converted into ASCII codes represented as binary, and the resulting bit stream is passed to the LED driver, which makes the lamp blink in accordance with the transmitted binary data.
B. Receiver Device
The receiver hardware design consists of 4 sets of TSL252 sensors with Arduino Nano boards, an Arduino Mega, and a 16x2 LCD display. The TSL252 sensor consists of a photodiode and a transimpedance amplifier that convert the light intensity into a voltage, i.e., electrical pulses in UART serial format that can be translated into ID data. Next, the Arduino calculates the received signal strength for each signal.
The software design of the receiver is shown in the flowchart of Fig. 6. Because two types of data must be read from the received signal, the first stage of the receiver flow diagram is the initialization of the analog pins for reading the voltage data and of the baud rate, which must be the same as the transmitter baud rate.
Next, the system checks whether ID data is already present in the serial buffer; if so, it proceeds to the data-reading process, otherwise it waits for data to become available again. After the ID data is obtained, the ADC value is converted to a voltage value. The last process on the VLC receiver device is the packaging of the ID and voltage data, which will later be used as input for the distance estimation process. Fig. 7 shows the process that occurs in the receiver: first, the TSL252 sensor output pin is initialized, then the baud rate is initialized (it must be the same as the baud rate on the transmitter), followed by asynchronous serial communication; if there is data in the serial buffer, the system detects the ID data and the ADC value and then proceeds to estimate the distance from the receiver to the transmitter based on the ID data and the received signal strength.
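A minimal sketch of this ADC-to-voltage and packaging step is shown below. It assumes a 10-bit ADC with a 5 V reference (typical of the Arduino boards mentioned), and the packing format is purely illustrative rather than the paper's actual code.

```python
def adc_to_voltage(adc_value, vref=5.0, resolution=1023):
    """Convert a raw 10-bit ADC reading to volts (assumed 5 V reference)."""
    return adc_value * vref / resolution

def package_reading(lamp_id, adc_value):
    """Bundle the received lamp ID with its measured voltage for the
    distance-estimation stage; the dictionary layout is hypothetical."""
    return {"id": lamp_id, "voltage": adc_to_voltage(adc_value)}

print(package_reading("a", 512))  # e.g. {'id': 'a', 'voltage': 2.50...}
```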
Figs. 8 and 9 illustrate the object whose position is to be estimated: the TSL252 sensors are mounted to read the data, an Arduino processes the data, and a 16x2 LCD displays the estimated coordinates of the object. The separator ensures that each sensor focuses on receiving a single ID data stream and avoids the influence of signals carrying other IDs. When ID data is received, the system simultaneously reads the signal voltage and calculates the power. The receiving power of each signal is then calculated, and the system uses the log-normal shadowing equation to estimate the direct distance (d) from the receiver to the lamp acting as transmitter.
C. Design of Estimated Distance and Position Algorithm
After the direct distance is estimated, the radius distance is obtained using the Pythagorean theorem. The coordinates of the object's position are then found from the estimated distance data with the triangulation method (equation (2)); this requires distance estimates from at least 3 lamps, as sketched after this paragraph. The prototype of the designed system is shown in the accompanying figure.
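The following sketch illustrates these two steps under the geometry described earlier (lamp-to-sensor height of 65 cm): the direct distance d is first converted to a horizontal radius r with the Pythagorean theorem, and the (x, y) position is then estimated from at least three (lamp position, radius) pairs by linearized trilateration. The paper's exact equation (2) is not reproduced here, so this is only a generic least-squares formulation with hypothetical lamp coordinates and distances.

```python
import math
import numpy as np

H = 65.0  # cm, vertical lamp-to-sensor distance

def radius_from_direct(d):
    """Horizontal radius from the direct (slant) distance: r = sqrt(d^2 - H^2)."""
    return math.sqrt(max(d**2 - H**2, 0.0))

def trilaterate(lamps, radii):
    """Least-squares (x, y) from >= 3 lamp positions and horizontal radii.
    Subtracting the first circle equation from the others linearizes the system."""
    (x0, y0), r0 = lamps[0], radii[0]
    A, b = [], []
    for (xi, yi), ri in zip(lamps[1:], radii[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y)

# Hypothetical lamp coordinates (cm) and direct-distance estimates.
lamps = [(0, 0), (-40, 0), (-40, 40), (0, 40)]
radii = [radius_from_direct(d) for d in (70.0, 72.0, 78.0, 76.0)]
print(trilaterate(lamps, radii))
```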
III. RESULT AND DISCUSSION
The testing in this research is divided into 3 parts: testing of sending and receiving data through Visible Light Communication (VLC), estimating the distance between receiver and transmitter based on Received Signal Strength (RSS), and estimating the position of the object based on the triangulation method.
A. Analysis of Sending Data
The analysis is done by observing the transmitted signal on the oscilloscope screen, starting by connecting the oscilloscope signal cable to the Arduino TX pin on the transmitter and the oscilloscope ground cable to the Arduino ground pin.
a) Signal of ID "a"
The transmitted ID "a" is converted into the binary data 01100001. The oscilloscope reads the signal Least Significant Bit (LSB) first, i.e., starting from the lowest-value bit of the binary data, so the displayed signal is 0100001101. The 0 at the beginning is the start bit and the final 1 indicates the stop bit.
b) Signal of ID "b"
The transmitted ID "b" is converted to the binary data 01100010; on the oscilloscope the signal that appears is 0010001101.
c) Signal of ID "c"
The transmitted ID "c" is converted to the binary data 01100011; on the oscilloscope the signal that appears is 0110001101.
d) Signal of ID "d"
The transmitted ID "d" is converted to the binary data 01100100; on the oscilloscope the signal that appears is 0001001101.
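The oscilloscope patterns above can be reproduced programmatically. The sketch below builds the 8N1 UART frame (start bit 0, eight data bits LSB-first, stop bit 1) for each ID character; it is only a consistency check on the bit sequences quoted above.

```python
def uart_frame(ch):
    """8N1 frame as displayed LSB-first: start bit, data bits LSB-first, stop bit."""
    data_lsb_first = format(ord(ch), "08b")[::-1]
    return "0" + data_lsb_first + "1"

for ch in "abcd":
    print(ch, format(ord(ch), "08b"), uart_frame(ch))
# a 01100001 0100001101
# b 01100010 0010001101
# c 01100011 0110001101
# d 01100100 0001001101
```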
B. Analysis of Light Intensity to Radius Distance
This analysis determines the light intensity of the transmitter lamp at which the TSL252 sensor can still receive ID data at the farthest radius distance from the transmitter. The measurements use an HS1010 lux meter. Based on Table 1, the signal can be received up to a maximum radius distance of 25 cm, and the required light intensity lies in the range of 150-370 lux.
C. Analysis of Received Signal
This analysis is carried out at several points representing central and extreme positions. Central positions were tested at (-10, 15); (-30, 15); (-30, 25); (-10, 25), while extreme positions were tested at (-10, 30) and (-10, 10). Extreme positions are (x, y) coordinates that are very close to one lamp (transmitter) but far from the other lamps. The purpose of this test is to determine which ID data is received by the receiver device; this data is then processed as input to estimate the distance between receiver and transmitter.
D. Analysis of Estimated Distance between Receiver and Transmitter
a) Calculating Pathloss Exponent Value
The distance estimation system is carried out by calculating the path-loss exponent value of each lamp based on the received signal strength and the log-normal shadowing equation. The suitability of the path-loss exponent value is tested by comparing the radius distance estimated by the system with the real radius distance. The path-loss exponent values in Table III are based on calculations using the log-normal shadowing equation.
It requires the following parameters: P0, the received signal strength at the reference point, where the radius distance is 0 cm (the receiver is directly below the lamp acting as transmitter); Prx, the received signal strength at the test position; n, the path-loss exponent value; d, the direct distance between receiver and transmitter; and d0, the direct distance between receiver and transmitter when the radius distance is 0 cm.
After measurement and calculation, the results obtained are as follows: d0a = d0b = d0c = d0d = 65 cm, P0a = 3.066 mW, P0b = 3.1665 mW, P0c = 3.049 mW, P0d = 3.094 mW. The values of d and n are listed in Table III. The accompanying graph shows that the signal strength decreases as the radius distance increases.
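As an illustration of how n can be obtained from such calibration data, the sketch below inverts the log-normal shadowing relation for a single test point. The powers are converted to dB first; the test-point power and distance used here are hypothetical, and this is only the standard form of the model, not the paper's exact calculation.

```python
import math

def pathloss_exponent(p0_mw, prx_mw, d, d0=65.0):
    """Solve Prx[dB] = P0[dB] - 10*n*log10(d/d0) for n."""
    p0_db = 10 * math.log10(p0_mw)
    prx_db = 10 * math.log10(prx_mw)
    return (p0_db - prx_db) / (10 * math.log10(d / d0))

# Hypothetical test point for lamp "a": reference power 3.066 mW at d0 = 65 cm,
# 1.8 mW measured at a direct distance of 69.6 cm (25 cm radius).
print(round(pathloss_exponent(3.066, 1.8, d=69.6), 2))
```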
b) Analysis of Estimated Distance Radius with Pathloss Exponent Value
The path-loss exponent value is tested by estimating the direct distance (d) using the log-normal shadowing equation and comparing it with the real direct distance. The distance d is then used to calculate the radius distance (r), which is in turn compared with the actual radius distance. Table V shows the radius distances based on the estimates and on the real conditions; the calculated average accuracy reaches 76.47%.
E. Analysis of Estimated Position Based on Triangulation Method
This test uses several coordinate positions representing central and extreme positions, such as near the ends of the lamps. a) Central position area: Table VI shows the test results when the object is positioned in the central area; the IDs and received signal strengths are recorded on the receiver side, and equation (1) in the Methods section is used to obtain the value of d, the distance between the transmitter and the receiver. b) Extreme position area: Table VIII shows the test results when the object is positioned in an extreme area; the IDs and received signal strengths are again recorded on the receiver side, and equation (1) is used to obtain d.
F. Analysis of Environment Condition
In this research, the lamp used is limited to 20 W, because testing showed that a transmitter with this lamp specification is good enough to send the ID data signal. If brighter or dimmer lights are used, future work could add an Automatic Gain Control (AGC) circuit, which regulates the gain in a system and controls it automatically. This prototype is expected to be able to detect the robot when it reaches the destination point, albeit with all its limitations. The focus of this work is only on building the infrastructure of the system, which includes the sensors, the microcontrollers, entering the formulas into the program code, and the sequence of processes from receiving data with the sensors to processing it with the equations. For a real-world deployment, the prototype would need additional parts such as a filter circuit and an optical lens on the receiver device.
A. Research Results
The Visible Light Communication (VLC) transceiver device succeeded in sending and receiving data in the form of the alphabet letters "a" to "d", with a maximum radius distance of 25 cm and a maximum angle for data transmission of 21.056°.
The design of the positioning estimation system was completed in several stages: first, the VLC receiver can receive data from the three closest transmitters when tested in the central area and in the extreme area; second, the distance between receiver and transmitter can be estimated with an average accuracy of 76.47%. Furthermore, based on the ID and distance-estimate data, position estimation can be carried out, and the system achieves a best accuracy of 77.05% for the x-coordinate estimate and 86.54% for the y-coordinate estimate.
Overall, the average accuracy is 30.27% for the x-coordinate and 64.67% for the y-coordinate. The average accuracy is not very good because the data received at certain test points does not reach maximum accuracy, as there are still deficiencies in the positioning of the sensors.
B. Future Work
1) Designing device interfaces that can display the estimated position data.
2) Implementing two-way communication by combining the system with infrared (IR) communication.
3) Designing filter circuits and optical lenses on the receiver so that noise can be minimized and the received signal becomes more focused.
"Computer Science"
] |
2-Bromoethyl 2-chloro-6-methylquinoline-3-carboxylate
In the title compound, C13H11BrClNO2, the two rings of the quinoline group are fused in an axial fashion at a dihedral angle of 1.28 (9)°. In the crystal, molecules are arranged in zigzag layers along the c axis. The crystal packing is stabilized by weak C—H⋯O hydrogen bonds and intermolecular interactions between Br and O atoms [Br⋯O= 3.076 (2) Å], resulting in the formation of a three-dimensional network.
We are grateful to all personnel at the PHYSYNOR Laboratory, Université Mentouri-Constantine, for their assistance. Thanks are due to the MESRS (Ministère de l'Enseignement Supérieur et de la Recherche Scientifique, Algérie) for financial support.
Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: BQ2201).
Comment
Benzylic bromination can be carried out using N-bromosuccinimide (NBS) under photocatalytic conditions (Djerassi, 1948; Newman et al., 1972). It is also known that NBS reacts with benzaldehyde diethylacetal to give the corresponding ester (Marvell et al., 1951; Markees et al., 1958). Although extensive studies have been carried out in the past, selectivity clearly remains a common problem in radical bromination (Kikichi et al., 1998; Xu et al., 2003). In previous work, we reported the structure determination of some new quinoline derivatives (Benzerka et al., 2008; Ladraa et al., 2009; Ladraa et al., 2010). In this paper, we report the synthesis and structure determination of a new compound resulting from the radical bromination of 2-chloro-3-(1,3-dioxolan-2-yl)-6-methylquinoline under photocatalytic conditions. Our attempt to brominate the methyl group at the C-6 position of the quinoline ring, which carries an acetal function at C-3, failed and instead led to 2-bromoethyl 2-chloro-6-methylquinoline-3-carboxylate (I). This compound results from the unwanted conversion of the acetal into the corresponding ester.
The molecular geometry and the atom-numbering scheme of (I) are shown in Figure 1. The asymmetric unit of the title molecule contains a 2-bromoethyl carboxylate group linked to the quinolyl moiety. The two rings of the quinolyl moiety are fused in an axial fashion and form a dihedral angle of 1.28 (9)°. The crystal structure can be described as zigzag layers along the c axis, in which the quinoline rings are parallel to the (110) plane. The crystal packing is stabilized by the weak hydrogen bonds listed in Table 1.
Experimental
The title compound (I) was synthesized by treating 1 mmol of 2-chloro-3-(1,3-dioxolan-2-yl)-6-methylquinoline with 1 mmol of N-bromosuccinimide in the presence of 0.5 mmol of dibenzoyl peroxide in CCl4 under photocatalytic conditions. The mixture was then cooled and filtered, and the filtrate was concentrated under reduced pressure. The residue was purified by column chromatography (silica gel, eluent: CH2Cl2) to afford the pure product. Crystals suitable for X-ray analysis were obtained by slow evaporation of a dichloromethane solution of (I).
Refinement
All H atoms were located on Fourier maps but introduced in calculated positions and treated as riding on their parent C atoms, with C-H = 0.93, 0.96 and 0.97 Å and Uiso(H) = 1.2 or 1.5 times Ueq of the carrier atom.
Figures
Fig. 1. The structure of the title compound with the atomic labelling scheme (Farrugia, 1997). Displacement ellipsoids are drawn at the 50% probability level.
Special details
Geometry. All esds (except the esd in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell esds are taken into account individually in the estimation of esds in distances, angles and torsion angles; correlations between esds in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell esds is used for estimating esds involving l.s. planes.
Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
"Chemistry"
] |
Lipid droplets contribute to myogenic differentiation in C2C12 cells by promoting the remodeling of the actin filament
The lipid droplet (LD), a multi-functional organelle, is found in most eukaryotic cells. LDs participate in the regulation of many cellular processes including proliferation, stress, and apoptosis. Previous studies have described the athlete's paradox, in which trained athletes accumulate LDs in their skeletal muscle. However, the impact of LDs on skeletal muscle and myogenesis is not clear. We discovered that C2C12 myoblast cells containing more LDs formed more multinucleated muscle fibers. We also discovered that LDs promoted cell migration and fusion by promoting actin-filament remodeling. Mechanistically, two LD proteins, acyl-CoA synthetase long chain family member 3 (ACSL3) and lysophosphatidylcholine acyltransferase 1 (LPCAT1), mediated the recruitment of actinin proteins, which contributed to actin-filament formation on the surface of LDs. During remodeling, the actinin proteins on the LD surface translocated to actin filaments via ARF1/COPI vesicles. Our study demonstrates that LDs contribute to cell differentiation, providing new insight into LD function.
INTRODUCTION
Lipid droplets (LDs) are multi-functional organelles in the cell [1], consisting of a neutral lipid core and a monolayer of phospholipid membranes. Previous studies suggest that LDs are deeply involved in biological processes such as cellular resistance to oxidative stress [2][3][4], inflammation and immunity [5], and lipid synthesis and metabolism [6,7]. LDs provide necessary lipid components to cell proliferation including synthesis of membranes and production of metabolic energy [8][9][10]. LDs perform multiple functions by interacting with other organelles, such as endoplasmic reticulum, mitochondria, peroxisome, and nucleus. For example, LDs interact with the endoplasmic reticulum to regulate cellular lipid synthesis [6]. The contact between LDs and the endoplasmic reticulum and the formation of "lipidic bridges" allows the phospholipid membrane of LDs to link to the outer membrane of the endoplasmic reticulum [11]. Many proteins, such as triglyceride synthases (DGAT2 and GPAT4), are transferred from the endoplasmic reticulum to the LD surface due to the fluidity of the membrane, which accelerates the growth of LDs and allows them to store more lipids [6]. In addition, during LDmitochondrial interactions, LDs come into contact with mitochondria and fatty acids from LDs are rapidly transferred to the mitochondria for oxidative degradation, providing large amounts of energy [12].
LDs have also been found to interact with the microfilament and microtubules [13]. Recent studies have shown that many LDs are distributed along microtubules [14] and can move unidirectionally or bidirectionally along microtubules [15]. This interaction between LDs and microtubules is important for the LD motility, which allows LDs to interact with other organelles and for the distribution of LD and peroxisomes in the cell [14,16]. LDs are also found to interact with microfilaments. For example, NMIIa and FMNL1 mediate the assembly of F-actin on the surface of LDs, which in turn affects LD volume and regulates lipid storage in the organism [17]. In addition, proteins of another class of cytoskeletal system (Septin), septin9 [18] and septin11 [19], can regulate LD morphology and growth by regulating the assembly of microfilament microtubules, thereby controlling triglyceride storage and influencing body lipid metabolism [20,21].
In the present study, we sought to determine the role of LDs and clarify the potential regulatory mechanism of LDs in myoblast differentiation. We show that LDs promote the migration and fusion of myoblast in myogenesis. We provide evidence that LDs regulate actin-cytoskeleton remodeling via buffering the actinin proteins. Our results also suggest the potential link between LDs and muscle development and injury-regeneration.
MATERIALS AND METHODS Cell lines
The C2C12 cell line was purchased from CoWin Biosciences (#CW2915F, Beijing, China). The STR profiling and test of mycoplasma contamination can be provided if required.
Antibodies
The details are provided in SI Materials and Methods.
Reagents
The details are provided in SI Materials and Methods.
Cell culture and transfection
The C2C12 cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; HyClone, Logan, UT, USA) with 10% fetal bovine serum (FBS; #SH30396.03, Hyclone, Canada), 100 unit/mL penicillin, and 100 μg/mL streptomycin in dishes at 37°C, in a humidified atmosphere with 5% CO 2 . For transfection, a total of 2 μL of Lipofectamine® 2000 transfection reagent (11668-019, Thermo Fisher) diluted in 25 μL of Opti-MEM (51985-034, Thermo Fisher) was prepared. In addition, 1 μg of plasmid was diluted with 25 μL of Opti-MEM and incubated for 5 min at room temperature. The 50 μL mixture of Lipofectamine2000 and plasmids was added into one well (24-well plate). After 5-6 h, the medium was renewed and the cells were incubated for 24-48 h for further use in the following experiments. All analyses were done with three biological replications (three wells of cells per replication).
C2C12 differentiation
To induce cell differentiation, the C2C12 cells were transferred to DMEM containing 2% horse serum (Gibco) (differentiation medium, DM). All cells were grown to near confluence (90%) before the induction of differentiation.
Protein mass spectrometry
The protein mass spectrometry detection for this study was commissioned by Novogene Technology Co., Ltd., Beijing, China. In brief, protein alkylation reduction and enzymatic hydrolysis were first performed, followed by peptide desalting, and mass spectrometric detection. The obtained mass spectrometry data were subjected to further quality analysis and protein function annotation for further cluster analysis. For the testing methods and specific parameter settings, we can provide detailed instructions if necessary. The analyses were done with three independent samples replications.
Lipid droplets isolation
In our previous research, we performed nuclear separation of cells. In this study, we focused on the function of cytoplasmic lipid droplets. Therefore, we first used a cytoplasmic separation kit to isolate the cytoplasmic fraction, and then isolated and purified the lipid droplets in the cytoplasm according to the method of Ding et al. [22]. Briefly, the cytoplasmic fraction was added to buffer A and ultracentrifuged (185,000×g) for 2 h, and the upper lipid-droplet layer was collected. The lipid-droplet fraction obtained after ultracentrifugation was washed three times with buffer B, each wash followed by centrifugation at 15,000×g, and finally the supernatant was discarded. The obtained lipid-droplet fraction was used to extract proteins with a protein lysate.
Western blot
The details are provided in SI Materials and Methods.
Microfilament marking
In this study, TRITC-phalloidin (#40734ES75, 300T, YEASEN, Shanghai, China) was used to label the microfilament cytoskeleton in cells. Briefly, cells were fixed with 4% paraformaldehyde for 30 min. The fixed cells were then incubated with TRITC-phalloidin for 10 min at room temperature. The staining solution was discarded, the cells were washed three times with PBS, and the slides were mounted and observed.
Lipid droplet marking
Lipid droplet marking was performed as reported previously [23]. The cell slides were fixed with 4% paraformaldehyde for 15 min at room temperature. The slides were stained with BODIPY 493/503 (#D3922, Invitrogen, Carlsbad, CA, USA) for 10 min at 37°C and were then stained with DAPI for 10 min at 37°C. After washing three times with PBS for 10 min each, the slides were sealed with an anti-fluorescent quenching solution (#P36961, ProLong™ Diamond Antifade Mountant, Invitrogen, Thermo Fisher, USA) for confocal microscopic observation (63× oil lens, BODIPY FL and DAPI channels, Zeiss LSM 800, Germany).
Cytochalasin treatment
In this study, cytochalasin D was used to depolymerize the microfilaments and destroy the intracellular microfilament cytoskeleton. Briefly, we discarded the original cell culture medium, washed the cells twice with PBS, added 10 μmol/L cytochalasin D (#C102396, CAS: 22144-77-0, Aladdin, Shanghai, China), and then incubated at 37°C for 2 h.
Oleic acid medium treatment
Oleic acid treatment was carried out as described in a previous study [23]. A 20 mM oleic acid-phosphate-buffered saline (PBS) mixture and a 20% fatty-acid-free bovine serum albumin (BSA) medium were prepared, both were heated in a 70°C water bath for 30 min, and the two were then mixed. The 10 mM oleic acid-BSA mixture was added to the cell culture medium at a ratio of 1:49 (v:v). The cells, seeded on slides or plates, were washed three times with PBS; then 1 mL of the oleic acid medium was added per well and the cells were cultured for 12 h.
Subcellular components isolation
The protein of subcellular organelle components was isolated with the subcellular Structure Protein Extraction Kit (#C500073, Sangon, Shanghai, China). The details are provided in SI Materials and Methods.
Plasmid construction
Plasmid construction was performed as reported previously [23]. The details are provided in SI Materials and Methods.
qPCR assay
Real-time PCR was performed as reported previously [23]. Primer sequences are shown in SI Table S1. The details are provided in SI Materials and Methods.
Silver staining
Silver staining was performed according to the manual (#P0017S, Beyotime, Nanjing, China). The details are provided in SI Materials and Methods.
Immunoprecipitation and co-immunoprecipitation
Immunoprecipitation and nuclear co-precipitation were performed according to the manufacturer's instructions (Protein A/G beads, #B23201, bimake, Shanghai, China). The details are provided in SI Materials and Methods.
Live cell workstation
Cells were seeded on a 3.5 cm glass-bottom cell culture dish (Ibidi, 80466). Cells were treated with OA or BSA (control) for 12 h. Real-time visualization was made by differential interference contrast (DIC) microscopy, utilizing the Zeiss inverted fluorescence microscope within an incubated chamber (37°C, 5% CO 2 ). Images were captured at 15 min intervals for 2 days.
Fluorescence images analysis
ImageJ software was used to analyze the co-localization. Briefly, the image was split into the red/green/blue channels. The fluorescence intensity was then measured along a marked line, and the values of the different channels were plotted in one graph.
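A minimal scripted equivalent of this line-profile analysis is sketched below, assuming the image is loaded as an RGB array; the file name and the line coordinate are placeholders, and this is not the analysis pipeline used in the study.

```python
import numpy as np
import matplotlib.pyplot as plt

img = plt.imread("colocalization.png")   # placeholder file name (RGB image)
row = 200                                # placeholder: horizontal line to profile
profile = img[row, :, :3]                # red, green, blue values along the line

x = np.arange(profile.shape[0])
for channel, name in zip(range(3), ("red", "green", "blue")):
    plt.plot(x, profile[:, channel], label=name)
plt.xlabel("position along line (pixels)")
plt.ylabel("fluorescence intensity (a.u.)")
plt.legend()
plt.show()
```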
Statistical analysis
All quantitative experiments were evaluated for statistical significance using GraphPad Prism v.5.0 (GraphPad Software, Inc., 7825 Fay Avenue, Suite 230, La Jolla, CA 92037, USA). Because the sample sizes were small, Wilcoxon-Mann-Whitney nonparametric tests were employed. Statistical significance (p-value) is indicated (*p < 0.05; **p < 0.01). All analyses were done with three biological replications (three samples per replication).
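For readers reproducing this outside Prism, an equivalent test can be run with SciPy; the numbers below are placeholders, not data from this study.

```python
from scipy.stats import mannwhitneyu

# Placeholder measurements for two groups (e.g. loaded vs. control cells).
loaded = [12.1, 13.4, 11.8]
control = [9.2, 8.7, 10.1]

stat, p = mannwhitneyu(loaded, control, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```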
Statements of approval
We confirm that all methods were performed in accordance with the relevant guidelines and regulations of Ethics Committee of Huazhong Agricultural University.
RESULTS
LDs promote myoblasts to form multinucleated myotubes
To determine whether LDs play a role in myogenesis, we examined the formation of multinucleated myotubes in C2C12 myoblasts containing more or fewer LDs. The high-LD-content myoblasts (termed loaded cells) were generated by incubation with 200 μM oleic acid for 12 h before differentiation (the control group received 3% BSA). Fresh differentiation medium was then applied to induce myoblast differentiation. Four days later, the shape and number of myotubes were assessed by immunofluorescence. The results showed that the number of myotubes and the fusion rate were significantly higher in the loaded cells (Fig. 1A-C). Additionally, myosin heavy chain (MyHC) expression was significantly upregulated (~1.6-fold) in loaded cells (Fig. 1D, E). In contrast, the expression of myogenic differentiation 1 (MyoD) and myogenin (MyoG), important transcription factors of myogenesis, was unchanged (Fig. 1F).
LDs promote migration and fusion of myoblast
Since LDs did not affect the myogenic regulatory factors, we next examined the migration activity of loaded and control myoblasts by a scratch assay (Fig. 2A). The loaded cells formed more filopodia and lamellipodia than unloaded cells at 360 min. Subsequently, more loaded cells migrated compared to the unloaded cells at 1605 and 2655 min (Fig. 2B and Movie S1). The number of migrating cells was counted, showing that the migration capacity of loaded cells was enhanced (Fig. 2C, D). Furthermore, F-actin was stained with rhodamine-phalloidin, which confirmed that the loaded cells formed more filopodia and lamellipodia (Fig. 2E, F). Moreover, the expression levels of the myoblast fusion-associated genes myomaker and caveolin were significantly increased in the loaded cells (p < 0.05, Fig. 1E).
LDs accelerate microfilament remodeling
Since LDs affected myoblast migration and fusion, we next examined the impact of LDs on the remodeling rate. Cytochalasin D, a specific inhibitor of microfilament polymerization, can inhibit cytoskeleton remodeling. C2C12 cells were treated with cytochalasin D for 2 h and microfilaments were then labeled with rhodamine-phalloidin. The microfilament structures were completely destroyed, and the rhodamine signals were diffuse (SI Appendix Fig. S1A). Subsequently, fresh medium was supplied and the cells were fixed at 0.5, 1, 1.5, 2, 4, and 6 h, respectively. Microfilaments (peripheral stress fibers) formed at the edges of cells at 0.5, 1.5, and 2 h (SI Appendix Fig. S1A), and the number of newly polymerized microfilaments returned to a normal level after 2 h (SI Appendix Fig. S1B). Interestingly, the number of cellular LDs increased during microfilament polymerization (SI Appendix Fig. S1A, C). We then compared the remodeling rate in loaded and control cells. More actin filaments were formed in loaded cells at 10, 30, and 60 min than in control cells (Fig. 3A, B).
To confirm the effect of LDs on the remodeling rate, the C2C12 cells were treated with DGAT1 and DGAT2 inhibitors before OA treatment, which abrogates LD formation. The rate of remodeling was not accelerated in cells treated with DGATi (Fig. 3A, B).
LD proteomic data show many actinin proteins on LDs
Pingsheng Liu's lab previously reported the proteome of C2C12 LDs [25]. Our lab isolated and purified cytoplasmic LDs from HepG2 cells and analyzed them by mass spectrometry. We found a total of 132 cytoskeleton-related proteins in the LD proteome, including 63 microtubule-cytoskeleton and 46 actin-cytoskeleton proteins (SI Appendix Fig. S2A). Interestingly, the C2C12 LD proteome also contains many actin-cytoskeleton proteins on the LD surface [25] (see the Appendix file of that reference). Some of them were selected for Western blot verification in both HepG2 and C2C12 cells, including the actinin proteins ACTN1, ACTN2, ACTN3, ACTA1, and the tubulin TUBA4A (SI Appendix Fig. S2B).
LD proteins ACSL3 and LPCAT1 recruit actinin by binding SR protein domains
Actinin proteins contain three protein domains, namely the actin-binding domain, the spectrin repeat (SR) domains, and the EF domain. They contain no LD-targeting sequences, such as a giant hydrophobic helix. Therefore, actinin is not a natural LD-targeting protein.
To determine how actinin is recruited to the LD surface, we performed an immunoprecipitation assay to find the LD proteins that bind to actinin in C2C12 cells (SI Appendix Fig. S3A). The protein silver staining image showed large differences in bands between the LD fraction and the whole-cell lysate (SI Appendix Fig. S3B). The mass spectrometry results showed that ACTN3 binds to many microfilament, microtubule, and intermediate filament proteins (SI Appendix Fig. S3C), and the molecular functions and protein domains were also consistent with cytoskeleton-related proteins (SI Appendix Fig. S3D). Furthermore, two LD proteins, ACSL3 and LPCAT1, were found in the data (SI Appendix Fig. S3C). The two proteins are well-known LD-targeting proteins because they contain LD-targeting structures [26-28]. We then verified that ACTN3 binds to ACSL3 and LPCAT1 by a co-immunoprecipitation assay in C2C12 cells (Fig. 4A), and we confirmed the binding using tag vectors in C2C12 cells (SI Appendix Fig. S3E, F). To investigate the binding domain of ACTN3, we constructed six different FLAG-tagged fragments, A to F, according to the domain structure of ACTN3 (Fig. 4B). The A, C, E, and F fragments bound to ACSL3 and LPCAT1, indicating that the SR domain is the key domain for ACTN3 binding to ACSL3 and LPCAT1 (Fig. 4B). To further determine whether ACSL3 and LPCAT1 are important for ACTN3 localization on LDs, we knocked down ACSL3 and LPCAT1 by RNAi and then measured the LD-ACTN3 level by Western blotting. When ACSL3 and LPCAT1 levels were suppressed, the LD-ACTN3 level decreased significantly (p < 0.05) (SI Appendix Fig. S3G). We therefore drew a model of the binding between ACTN3 and ACSL3/LPCAT1 (Fig. 4C). We further investigated whether ACTN3 recruitment on LDs affects its stability in a half-life assay; ACTN3 degradation was slower in loaded cells than in unloaded cells (SI Appendix Fig. S3H). We then examined the effect of ACSL3/LPCAT1 knockdown on actin-filament remodeling. The rate of remodeling did not change in loaded C2C12 cells with ACSL3/LPCAT1 knockdown (Fig. 4D, E).
Transfer of actinin from the LD surface to microfilaments during microfilament remodeling
Actinin proteins are involved in crosslinking actin-containing thin filaments and are important for the stability and activity of actin filaments. We further investigated the role of the actinin proteins on the LD surface. First, we confirmed the localization of ACTN3 on the LD surface by laser confocal microscopy and Western blot. ACTN3 co-localized with all microfilament classes (ventral stress fibers, dorsal stress fibers, peripheral stress fibers, and actin-cap stress fibers) in C2C12 cells (Fig. 5A). We then measured the level of LD-ACTN3 during actin-cytoskeleton remodeling. After treating the cells with cytochalasin D for 2 h, the cells were washed twice with phosphate-buffered saline (PBS) and given fresh medium, and cells were collected for LD isolation at 0, 0.5, 1, and 2 h, respectively. LD-ACTN3 decreased at 0.5 and 1 h and returned at 2 h (Fig. 5B). To elucidate why LD-ACTN3 decreased during remodeling, we converted ACTN3 into an LD-targeting protein by inserting a PAT protein domain (residues 1-193 of PLIN1, a giant amphipathic helix) at the N-terminus of ACTN3-EGFP (Fig. 5C). Initially, all EGFP signals localized to the LD surface, and no EGFP signal was detected on the microfilaments in C2C12 cells (Fig. 5C). After the C2C12 cells were treated with cytochalasin D for 2 h, the microfilament structure was destroyed, while the co-localization of the LDs and the EGFP protein remained unchanged (Fig. 5C). One hour after the medium was replaced with fresh medium, new microfilaments had formed and the EGFP signal was positive on the microfilaments, although EGFP signals were still present around the LDs (Fig. 5C). To further confirm the translocation of LD-ACTN3 to actin filaments, the proteins of subcellular fractions (such as the cytoskeleton and cytomembrane) were isolated, and the levels of EGFP protein in the cell membrane and cytoskeleton components at 0 and 1 h were measured. The level of EGFP in the cytoskeleton fraction increased at 1 h (p < 0.05, Fig. 5D), and the level of EGFP in the cytomembrane fraction also increased slightly (Fig. 5D).
ARF1-dependent vesicles mediate the transfer of actinin from LDs to microfilaments
To investigate how LD-ACTN3 is transferred from LDs to microfilaments during remodeling, we examined whether the ARF1/COPI membrane vesicle transport system is involved in this process, as previous studies have indicated that the ARF1/COPI system is important for LD morphology and LD-related protein transport [6, 29]. Co-localization analysis of ARF1 vesicles and LD-ACTN3 showed that these two factors contact each other during actin-cytoskeleton remodeling but not in the normal state (Fig. 6A). Furthermore, LDs (marked by LipidTOX) also contacted ARF1 vesicles (ARF1-EGFP) during actin-cytoskeleton remodeling in C2C12 cells (Fig. 6B). We then used RNAi to knock down ARF1 expression (SI Appendix Fig. S4A-C), and simultaneously used brefeldin A (to disrupt the Golgi apparatus) to inhibit the vesicle system. Under these conditions, the ARF1-DsRed signal disappeared (SI Appendix Fig. S4D, E). Subsequently, LD-ACTN3 could not translocate to actin filaments during remodeling when ARF1 vesicles were inhibited (SI Appendix Fig. S4F). We then tested whether brefeldin A treatment abrogates the accelerated microfilament remodeling in OA-loaded cells. The rate of actin-filament remodeling in loaded C2C12 cells was no longer accelerated after brefeldin A treatment (Fig. 6C, D), indicating that ARF1 vesicles are necessary for LD-ACTN3 translocation.
LDs modulate microfilament remodeling during C2C12 differentiation
The cytoskeleton is the main source of power for migration and fusion during myoblast differentiation. Cell migration and fusion occur during myoblast differentiation, finally forming multinucleated myotubes (SI Appendix Fig. S5A). In this process, the number of microfilaments in the cell increases and their morphology changes (SI Appendix Fig. S5A). By labeling the microfilament proteins in the cells before and after differentiation, we found that the cell morphology and the number of microfilaments changed significantly (SI Appendix Fig. S5B, C). A Western blot assay also confirmed that the microfilament-related proteins ACTN1, ACTN2, ACTN3, and ACTA1 increased significantly during C2C12 differentiation (p < 0.05, SI Appendix Fig. S5D). We further investigated whether LD-ACTN3 is transferred during myoblast differentiation. PAT-ACTN3-EGFP was expressed in myoblasts, and co-localization analysis indicated that the EGFP signals were entirely localized around the LDs (Fig. 7A). The differentiation medium was then applied to induce C2C12 differentiation for 4 days, after which EGFP signals were detected in the microfilaments and LDs of myotubes (Fig. 7A). Moreover, the subcellular components were isolated and the EGFP protein levels in the cytoplasm, cytoskeleton, and cell membrane components of proliferating and differentiated C2C12 cells were measured. The EGFP levels increased in the cytoskeleton component of differentiated C2C12 cells (Fig. 7B), suggesting that LD-ACTN3 transferred to the cytoskeletal microfilaments during differentiation. For further verification, we marked myotubes by MyHC immunofluorescence. EGFP signals did localize in the microfilaments of myotubes, whereas the control group (expressing PAT-EGFP) showed no signals on microfilaments (Fig. 7C). We further investigated whether PAT-ACTN3-EGFP overexpression affected C2C12 differentiation. The qPCR and WB results show that MyHC was upregulated in cells overexpressing PAT-ACTN3-EGFP compared to control cells (Fig. 7D, E). Moreover, we counted the multinucleated myotubes by MyHC immunofluorescence.
Fig. 5 Transfer of actinin from the LD surface to microfilaments during microfilament remodeling. A Co-localization of ACTN3, microfilaments, and LDs in C2C12 cells. B The C2C12 cells were treated with cytochalasin D for 2 h; the medium was then replaced with fresh medium, LDs were isolated at 0, 0.5, 1, and 2 h, and ACTN3 levels were detected by Western blotting. C The LD-targeting ACTN3 plasmid was constructed by inserting a PAT domain (PLIN1 residues 1-193) at the N-terminus of ACTN3. The co-localization of PAT-ACTN3-EGFP, LDs, and microfilaments in C2C12 cells was examined by confocal microscopy; the fluorescence intensity along the arrowed line is plotted in the graphs below. D The C2C12 cells (expressing PAT-ACTN3-EGFP) were treated with cytochalasin D for 2 h, the medium was then replaced with fresh medium, subcellular components were isolated at 0 and 1 h, and EGFP levels were detected. ACTB was used as the microfilament reference protein and ATP1A as the cytomembrane reference protein. *p < 0.05. Results are from three technical repeats (N = 3) for a representative of three biological repeats (N = 3).
The results show that more myotubes were formed in cells overexpressing PAT-ACTN3-EGFP, and the number of nuclei per myotube was also higher in these cells (Fig. 7F, G). In summary, during the differentiation of C2C12 cells, LD-ACTN3 can be transferred to microfilaments and promotes cell fusion into multinucleated myotubes.
DISCUSSION
LDs play an important role in the regulation of cellular lipid metabolism and stress resistance [30]. LDs themselves are generally not directly involved in the regulation of these biological processes; rather, they regulate them through contact and interaction with other organelles [31]. LDs have been reported to interact with microfilaments and microtubules. It is well known that LDs are attached to microfilaments and microtubules in cells and can move along the cytoskeleton [13]. Motor proteins provide energy for LD movement, making the LD a highly dynamic organelle, which is the basis for its interaction with other organelles. Previous studies have shown that microfilament- and microtubule-related proteins, including NMIIa/MYH9, FMNL1 [17], SEPT9 [18], SEPT11 [19], and SPASTIN [32], can regulate LD morphology. For example, FMNL1 mediates the assembly of NMIIa and actin microfilaments on LDs; NMIIa and microfilaments surround LDs and form transient foci between two separate lipid droplets, hindering the fusion and expansion of LDs, thereby reducing the accumulation of triglycerides and enhancing the contact between lipases and LDs, which promotes LD breakdown [17]. Septins are a 13-member family of GTP-binding proteins highly conserved from yeast to humans [33]; they are closely associated with the cytoskeleton and are also considered the fourth cytoskeletal structure [34]. Among them, septin-9 can bind PtdIns5P on the LD surface and simultaneously bind microtubules, which promotes contact between LDs and microtubules and the growth and aggregation of LDs [18]. Our study has validated the presence of multiple microfilament- and microtubule-associated proteins on LDs. In addition, the actinins on the LD surface can transfer to microfilaments during microfilament remodeling.
We used ACTN3 as the research object here to analyze the mechanism by which LDs regulate the cytoskeleton. ACTN3, a member of the ACTN family (ACTN1, ACTN2, ACTN3, and ACTN4), can anchor microfilaments and maintain the stability of the microfilament structure [35-38]. ACTN3 is structurally conserved compared with the other proteins of the ACTN family; it contains an actin-binding domain, four spectrin repeat (SR) domains, and an EF-hand domain [39]. Previous studies have reported that natural LD-targeting proteins contain lipid-affinity sequences, such as a giant hydrophobic helix structure [28], but ACTNs do not contain such hydrophobic helix structures. Therefore, ACTN3 cannot bind to lipid droplets directly through its own protein domains. We accordingly revealed, by co-immunoprecipitation, that two LD-related proteins, ACSL3 and LPCAT1, bind to ACTN3. LPCAT1 and ACSL3 have hydrophobic helical structures compatible with the neutral lipid core of LDs [26, 27] and are recruited to the LD surface. Further domain-interaction assays showed that the SR protein domain of ACTN3 interacts with ACSL3 and LPCAT1. Therefore, ACTN3 can be indirectly recruited to LDs through its SR domain binding to ACSL3 and LPCAT1. This binding mode is also consistent with previous studies, in which the SR protein domain of ACTNs is usually the platform for interactions with other proteins [40-43]. We found that PAT-ACTN3 was completely localized on LDs in normal cells and even in cells with depolymerized microfilaments, with no fluorescent signal detected on the microfilaments. However, during microfilament re-polymerization, fluorescent signals were found on the re-polymerized microfilaments, which suggests that LD-ACTN3 is involved in the microfilament remodeling process. This phenomenon was also observed during C2C12 differentiation.
We consider the transfer of ACTN3 from LDs to microfilaments to be a form of protein transport, and we found that the ARF1-COPI membrane vesicle transport system plays an important role in this transfer. After interfering with ARF1 expression in cells and using brefeldin A (which disassembles the Golgi apparatus and thus prevents COPI vesicle formation) to inhibit membrane vesicle formation, this transfer of LD-ACTN3 disappeared. Therefore, we conclude that ARF1-COPI mediates the transfer of ACTN3 from LDs to microfilaments during microfilament remodeling. Previous studies have shown that the ARF1-COPI membrane vesicle transport system is very important for LD surface tension and activity. COPI vesicles are generated at the Golgi apparatus, where ARF1 recruits coatomer to the bilayer in a GTP-dependent manner [44-46]. In vitro tests have shown that ARF1 and COPI can bind directly to the phospholipid monolayer of artificial LDs. This interaction causes the parent LDs to bud off 60-nm nano-LDs [29]. This budding process increases the surface tension of LDs, which makes the parent LDs more likely to react with the surrounding environment (e.g., soluble enzymes or membranes). Other studies have shown that the ARF1-COPI system is an important factor regulating LD contact with the endoplasmic reticulum and is very important for the transfer of lipid-synthesis-related enzymes to LDs [6]. Although our study does not reveal how ARF1-COPI regulates the transfer of ACTN3 from LDs to microfilaments, we speculate that this process is related to the ARF1-COPI-mediated release of nano-LDs. Because ACTN3 is recruited to LDs indirectly, it is likely that the phospholipid clusters released by budding leave the LD surface and then participate in microfilament polymerization. There is no doubt that this process still requires further research and experimental data for support.
A number of studies have reported that LDs are in contact with mitochondria. Skeletal muscle is an oxidatively active tissue with a large number of mitochondria distributed, and electron microscopy results show that LDs are in close contact with mitochondria [47,48]. Several molecules have been identified that potentially regulate LD-mitochondrial contact, including PLIN5 [49], SNAP23 [50], Mfn2 [51], MIGA2 [52], and VPS13D [53], and deletion of each of these proteins leads to a reduction in LD-mitochondrion contact sites. These molecules may serve to tether LDs to mitochondria, allowing stable connections to form between LDs and mitochondria. A study reported that dynamic mitochondrialcytoskeletal interactions can facilitate network function and remodeling [54]. The transfer of actinin from LDs to microfilaments may also be a mitochondrial-dependent mechanism. It has been suggested that LDs can follow the movement of other organelles, including mitochondria [15]. The possibility that organelles in contact with each other may be jointly involved in a biological process deserves further investigation.
In summary, we propose a model in which LDs regulate microfilament remodeling (Fig. 8). Cellular LDs can recruit actinins through the LD-related proteins ACSL3 and LPCAT1. When the cellular microfilaments are broken down, the cells shrink, promoting contact between the LDs and ARF1-COPI vesicles. This contact induces the production of 60-nm nano-LDs. Meanwhile, the recruited actinins are released from the maternal LDs along with these nano-LDs. Thereafter, the actinins released from the LDs participate in the polymerization and remodeling of microfilaments. This function of LDs enhances the migration capacity of cells and promotes myogenic cell differentiation. In this study, we identified the effect of LDs buffering actinin on myogenic cell differentiation, which provides new ideas about the role of LDs in muscle development as well as in the injury-repair process.
Fig. 8 Schematic diagram of LDs modulating microfilament remodeling. Cellular LDs in myoblasts can recruit actinins through the LD-related proteins ACSL3 and LPCAT1. Microfilament remodeling (e.g., during migration and fusion) promotes contact between LDs and ARF1-COPI vesicles. Meanwhile, the recruited actinins are released from the maternal LDs. Thereafter, the actinins released from the LDs are involved in the polymerization and remodeling of microfilaments. In this way, LDs contribute to the process of cellular microfilament remodeling.
DATA AVAILABILITY
The data used to support the findings of this study are available from the corresponding author upon request.
"Biology"
] |
Low Threshold, High Efficiency Passively Mode-Locked Picosecond Tm,Ho:LiLuF4 Laser
We experimentally demonstrate a passively mode-locked picosecond Tm,Ho:LiLuF4 laser with low threshold and high efficiency. Stable continuous-wave (CW) mode-locked operation with a 12 ps pulse width is obtained by using a five-mirror cavity structure and semiconductor saturable absorber mirrors (SESAMs). The results indicate that the laser has a mode-locking threshold power of 1.03 W and a maximum mode-locked output power of 350 mW. The repetition rate of the mode-locked pulse train is 98.04 MHz, corresponding to a maximum single-pulse energy of 3.51 nJ.
INTRODUCTION
In recent years, 2 µm ultrashort-pulse lasers based on Tm3+-doped or Tm3+,Ho3+-co-doped gain media have been one of the frontier topics of ultrafast laser technology. The emission peak of a Tm3+-doped laser lies near the strong absorption peak of water at 1.93 µm, so it features a low penetration depth and has important applications in ophthalmic laser surgery and tumor resection [1,2]. Moreover, such lasers have excellent atmospheric transmission and will play an important role in military and space-communication fields. The laser spectrum also lies in the "fingerprint" region of molecular absorption, so it has important application value in accurate time-resolved molecular spectroscopy [3]. As a pump source, it has important applications in 3-5 µm mid-infrared generation, mid-infrared supercontinuum generation, and THz pulse generation [4]. However, realizing mode-locked operation is one of the technical difficulties in this band, owing to Q-switching instabilities caused by water-molecule absorption and spectral modulation of the gain medium.
Compared with other host materials, Tm,Ho:LiLuF4 (Tm,Ho:LLF) is an excellent laser crystal with relatively low phonon energy, a lower laser threshold, and small up-conversion losses [5]. In 2010, Peng et al. reported a CW Tm,Ho:LLF laser with a central wavelength of 2.05 µm and an output power of 50 mW [6]. In 2013, Zhang et al. used Cr2+:ZnS as a saturable absorber to realize Q-switched and Q-switched mode-locked operation of a Tm,Ho:LLF laser [7,8]. It was not until 2018 that Ling et al. realized Q-switched mode-locked operation of a Tm:LLF laser [9].
It is well known that passively mode-locked operation in the 2 µm band can be obtained by periodically controlling the loss of the resonator with a saturable absorber (SA). Owing to their tunable nonlinear absorption coefficients, short relaxation and recovery times, and low optical losses, more and more optical materials have been applied in the laser field, such as SESAMs, graphene, graphene oxide, topological insulators (TI), and single-walled carbon nanotubes (SWCNT) [10-12]. The SESAM is the major mode-locking element, with mature commercial performance and stable mode-locking characteristics. Picosecond or femtosecond mode-locked operation has been realized in the crystals Tm:CLNGG [13], Tm,Ho:NaY(WO4)2 [14], Tm:Sc2O3 [15], Tm:CaGdAlO4 [16], Tm:CaYAlO4 [17], Tm:Lu2O3 ceramics [18], Tm,Ho:CNGG [19], and Tm,Ho:CLNGG [20]. Ma et al. comprehensively analyzed the mode-locking characteristics of this class of lasers [21].
In this paper, we demonstrate a low-threshold, high-efficiency mode-locked Tm,Ho:LLF laser based on a five-mirror cavity structure. The crystal achieves the currently highest efficiency of 53.6% under CW operation; the maximum mode-locked output power is 350 mW at 1,895 nm, and the typical pulse width is 12 ps. Moreover, the minimum CW threshold power is 59 mW, and the mode-locking threshold power is 1.03 W. Because the output power of a Ti:sapphire pump laser is positively correlated with its cost, the low-threshold design of the laser can not only effectively reduce the cost but also provide new ideas for laser applications.
EXPERIMENTAL SETUP
The experimental setup of the CW mode-locked Tm,Ho:LLF laser is shown in Figure 1. The five-mirror cavity consists of a typical X-type four-mirror folded cavity plus an additional focusing concave mirror. The oscillating spot in the resonator has a radius of tens of microns, which greatly reduces the laser threshold, and the laser efficiency can easily be improved by optimizing the overlap of the pump and oscillating spots. The pump source is a home-built Ti:sapphire laser. Figure 2 shows the absorption spectrum of Tm,Ho:LLF; the strongest absorption peak of the crystal is centered at 780.5 nm with a half-width of 1 nm. The Tm,Ho:LLF crystal is cut at the Brewster angle, and its two end faces are polished. The doping concentrations of Tm3+ and Ho3+ are 5 and 0.5%, respectively, and the crystal size is 3 × 3 × 8 mm. The crystal is wrapped in indium foil and clamped in a copper heat sink cooled by an 8 °C constant-temperature circulating water-cooling system. The focal length of the focusing lens L2 is 120 mm. The focal length of the pumping concave mirror M9 is 100 mm, and that of M10 is 75 mm; these mirrors have a transmittance of more than 95% at 770-1,050 nm and a reflectance of more than 99.9% at 1,800-2,075 nm. The transmittance of the output coupling (OC) mirror M11 is 3%. The radius of the concave mirror M12 is 100 mm. M13 is a plane high-reflectivity mirror with a reflectivity of more than 99.9% at 1,800-2,075 nm. The SESAM is a GaAs SESAM with a modulation depth of 1.2% and a relaxation time of 10 ps (BATOP, Germany). The beam waist radius on the SESAM is about 180 µm, and the energy fluence there is 117 µJ/cm2, larger than the SESAM saturation fluence of 70 µJ/cm2. P1 and P2 form a CaF2 prism pair separated by 35 cm, which mainly compensates the second-order dispersion produced by the intracavity crystals and self-phase modulation.
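As a rough consistency check of the quoted fluence, the following minimal sketch (not from the original paper) assumes that the 117 µJ/cm2 refers to the intracavity pulse fluence at the maximum mode-locked output power with the 3% output coupler; all other numbers are taken from the text.

```python
import math

# Reported operating point (3% OC, maximum mode-locked output); assumption:
# the quoted fluence refers to the intracavity pulse at this operating point.
p_out = 0.350        # average output power, W
t_oc = 0.03          # output-coupler transmittance
f_rep = 98.04e6      # pulse repetition rate, Hz
w_sesam = 180e-4     # beam waist radius on the SESAM, cm (180 um)

e_intracavity = p_out / t_oc / f_rep                # intracavity pulse energy, J
fluence = e_intracavity / (math.pi * w_sesam ** 2)  # J/cm^2

print(f"intracavity pulse energy ~ {e_intracavity * 1e9:.0f} nJ")
print(f"fluence on the SESAM     ~ {fluence * 1e6:.0f} uJ/cm^2")  # ~117 uJ/cm^2
```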
EXPERIMENTAL RESULTS AND DISCUSSION
The folded section is formed by mirrors M10 and M9. The radius of curvature of M9 is 100 mm. To achieve high-efficiency, low-threshold operation through mode matching of the pump and oscillating beams, folding mirrors M10 with radii of curvature of 100, 75, and 50 mm were tested; the corresponding cavities are labeled (100, 100), (100, 75), and (100, 50). The absorption efficiency of the crystal is shown in Figure 3A. Without laser oscillation, the crystal absorbs 35.3% of the pump. Once the laser oscillates, the absorption efficiency increases because a large number of upper-level ions are consumed, and it varies with the output-coupler transmittance; output mirrors with transmittances of 1.5 and 3% were used. When M10 is 100 or 50 mm, the absorption efficiency of the crystal is the highest; when M10 is 75 mm, it lies between 57.9 and 60.8%. When the SESAM and CaF2 prisms are added to the cavity with M10 = 75 mm, the absorption efficiency of the crystal is about 59%. Thus, the beam waist radius of the oscillating light in the crystal is determined by the choice of folding mirror, which in turn affects the absorption efficiency. At the same pump power, CW and mode-locked operation do not affect the absorption efficiency of the crystal. As can be seen from Table 1, the 1.5% OC and the (100, 75) cavity achieve low-threshold, high-efficiency operation: the CW threshold power is as low as 59 mW, the slope efficiency reaches 53.6%, the maximum output power is 1.04 W, and the corresponding optical-to-optical conversion efficiency is 30.09%. The (100, 75) cavity was therefore chosen as the mode-locked cavity. When the plane reflector M13 is replaced by the SESAM and the CaF2 prism pair is inserted to compensate dispersion, the laser threshold increases to 67 mW. When the absorbed pump power exceeds 1.03 W, stable CW mode-locked operation is realized; the maximum output power is 203 mW, and the slope efficiency is 12.9%. With the 3% output mirror, the CW threshold power is 88 mW; as the pump power is gradually increased, stable CW mode-locked operation is obtained when the absorbed pump power exceeds 1.3 W, with a maximum output power of 350 mW and a slope efficiency of 21.8%.
The mode-locking threshold powers for the 1.5% and 3% output couplers differ only slightly: 1.03 W for the 1.5% OC and 1.3 W for the 3% OC. The main reason is that the passive mode locking here is soliton mode locking [22][23][24], and the mode-locking threshold is mainly determined by the energy fluence on the SESAM surface. When mode locking starts, the fluence on the SESAM should therefore be the same for the 3% OC and the 1.5% OC, i.e., the intracavity power should be the same, so in theory the output power of the 3% OC should be twice that of the 1.5% OC. From Figure 3B, it can be seen that when mode locking starts, the output power with the 3% OC is 260 mW, approximately twice the 128 mW obtained with the 1.5% OC, consistent with this prediction. Because the central wavelength of the mode-locked spectrum is 1,895 nm, close to an absorption band of water vapor, a dehumidifier was used to keep the indoor relative humidity at about 30% for stable mode-locked operation; the lower humidity also allows the crystal to be cooled to a lower temperature without condensation. In this experiment, the crystal was cooled to 8 °C, which greatly reduces its thermal lens effect. Thermal noise generated by heat accumulation on the SESAM also affects the mode-locking stability, so we designed a water-cooling device that keeps the SESAM temperature stable at about 8 °C. This cooling device improves the stability of mode locking and prolongs the mode-locked operation time.
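The argument can be restated numerically; a minimal sketch, assuming the intracavity power at the mode-locking onset is the same for both output couplers (differences in residual cavity losses are neglected):

```python
# Output powers measured at the onset of mode locking (from Figure 3B).
p_onset = {0.015: 0.128, 0.030: 0.260}   # OC transmittance -> output power, W

for t_oc, p_out in p_onset.items():
    # Same SESAM fluence at onset implies the same intracavity power.
    print(f"T = {t_oc:.1%}: inferred intracavity power ~ {p_out / t_oc:.1f} W")

print(f"output-power ratio at onset: {0.260 / 0.128:.2f} (expected ~2)")
```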
The CW mode-locked pulse train was detected with a fast photodiode (EOT, ET-5000) connected to a 200 MHz digital oscilloscope (RIGOL, DS4024). Figure 4 shows the CW mode-locked pulse train, recorded on 1 ms and 10 ns time scales, with a repetition rate of 98.04 MHz. As shown in the inset of Figure 5, the spectrum of the mode-locked pulses was measured with a spectrometer (AvaSpec-NIR256-2.5TEC); the central wavelength of the output CW mode-locked pulses is 1,895 nm, and the spectral half-width is 14 nm. The autocorrelation trace measured with an autocorrelator (APE, pulse check 50) has a width of 17 ps, corresponding to an actual pulse width of 12 ps from a sech2 fit, as shown in Figure 5.
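For reference, a minimal sketch of the two conversions used here, assuming an ideal sech2 pulse shape (autocorrelation deconvolution factor of about 1.543); small deviations from the reported 12 ps and 3.51 nJ presumably come from the fitting procedure and rounding of the measured power.

```python
ac_fwhm_ps = 17.0        # measured autocorrelation FWHM, ps
deconv_sech2 = 1.543     # autocorrelation/pulse FWHM ratio for a sech^2 pulse
pulse_fwhm_ps = ac_fwhm_ps / deconv_sech2      # ~11 ps (reported fit: 12 ps)

p_avg = 0.350            # maximum average output power, W
f_rep = 98.04e6          # repetition rate, Hz
pulse_energy_nj = p_avg / f_rep * 1e9          # ~3.6 nJ (reported: 3.51 nJ)

print(f"deconvolved pulse width ~ {pulse_fwhm_ps:.1f} ps")
print(f"single pulse energy     ~ {pulse_energy_nj:.2f} nJ")
```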
CONCLUSION
In summary, folding mirrors with different curvatures were compared under CW operation, and the (100, 75) cavity with a 1.5% OC mirror gave the best overall output performance: a threshold power as low as 59 mW, a maximum power of 1.04 W, a slope efficiency of 53.6%, and an optical-to-optical conversion efficiency of 30.09%. The (100, 75) cavity was therefore chosen for the mode-locking study. Using a GaAs SESAM as the mode-locking starter and a home-built water-cooling system to suppress thermal noise on the SESAM, high-efficiency CW mode-locked operation of the Tm,Ho:LLF laser was realized. With the 3% OC mirror, a maximum power of 350 mW, a shortest pulse width of 12 ps, and a repetition rate of 98.04 MHz were obtained, corresponding to a maximum single pulse energy of 3.51 nJ. In the next step, we will use graphene, SWCNT, or other low-loss SAs to reduce the total cavity loss and, with appropriate dispersion compensation, achieve shorter mode-locked pulses.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/supplementary material.
AUTHOR CONTRIBUTIONS
WL was the author of the experimental scheme and the general director of the project. TX, RS, and CC were graduate students of the research group who implemented the experimental scheme. QX and YZ provided specific guidance to the postgraduates during the experiment. | 2,874.2 | 2020-01-10T00:00:00.000 | [
"Physics"
] |
Using spectral and temporal filters with EEG signal to predict the temporal lobe epilepsy outcome after antiseizure medication via machine learning
Epilepsy is a neurological disorder in which the brain is transiently altered. Predicting outcomes in epilepsy is essential for providing feedback that can foster improved outcomes in the future. This study aimed to investigate whether applying spectral and temporal filters to resting-state electroencephalography (EEG) signals could improve the prediction of outcomes for patients taking antiseizure medication to treat temporal lobe epilepsy (TLE). We collected EEG data from a total of 46 patients (divided into a seizure-free group (SF, n = 22) and a non-seizure-free group (NSF, n = 24)) with TLE and retrospectively reviewed their clinical data. We segmented spectral and temporal ranges with various time-domain features (Hjorth parameters, statistical parameters, energy, zero-crossing rate, inter-channel correlation, inter-channel phase locking value and spectral information derived from Fourier transform, Stockwell transform, and wavelet transform) and compared their performance by applying an optimal frequency strategy, an optimal duration strategy, and a combination strategy. For all time-domain features, the optimal frequency and time combination strategy showed the highest performance in distinguishing SF patients from NSF patients (area under the curve (AUC) = 0.790 ± 0.159). Furthermore, optimal performance was achieved by utilizing a feature vector derived from statistical parameters within the 39- to 41-Hz frequency band with a window length of 210 s, as evidenced by an AUC of 0.748. By identifying the optimal parameters, we improved the performance of the prediction model. These parameters can serve as standard parameters for predicting outcomes based on resting-state EEG signals.
The classification performance of each time-domain feature under the OFTS strategy is presented in Table 1. Feature group F yielded the best AUC (0.838 ± 0.204), and Feature group B yielded the best accuracy (ACC; 0.824 ± 0.135), as shown in Table 1. Since Feature group B showed the highest performance on all metrics except for the AUC, Feature group B was evaluated with various ML classifiers. In this experiment, XGB showed the highest performance (AUC: 0.765 ± 0.179, ACC: 0.827 ± 0.112) on all metrics except for the true negative rate (TNR), positive predictive value (PPV), and negative predictive value (NPV). Detailed information about these analyses is provided in Supplementary Table 1.
Comparison of the major feature values between SF and NSF patients
Figure 2A shows topology plots demonstrating the ability of Feature group B (statistical parameters) to distinguish between the SF and NSF groups. The kurtosis and maximum value were extracted from the EEG signals of all TLE patients (SF group: 22 patients, NSF group: 24 patients), and the EEG channel-wise average per patient was used to obtain the kurtosis and maximum value. Among the statistical parameters of Feature group B, the kurtosis and maximum value were selected because they showed significant differences between the SF and NSF groups (p_kurtosis = 0.002, Cliff's delta_kurtosis = 0.576; p_max < 0.001, Cliff's delta_max = 0.570) (Fig. 2B, C). The patterns of the topology plots were compared quantitatively using cosine similarity (CS) 30 and Euclidean distance (ED) 31. For kurtosis, compared with NS, the OTS strategy showed a slight increase in CS (CS_OTS−NS = 0.013) and ED (ED_OTS−NS = 0.342), whereas the OFS strategy showed an increase in CS (CS_OFS−NS = 0.169) and a decrease in ED (ED_OFS−NS = −0.023). The OFTS strategy yielded the highest ED (4.929), with a CS (0.936) close to 1. For the maximum value, all strategies yielded CS values close to 1; furthermore, ED was lower under the OTS strategy (ED_OTS−NS = −0.212) and higher under the OFS strategy (ED_OFS−NS = 2.199) than under NS. The OFTS strategy yielded the highest ED (5.935), similar to the findings for kurtosis (Supplementary Table 2).
Optimal EEG window length and optimal frequency band of the EEG signals for SF prediction
Figure 3A and B show the variation in predictive performance with the window length of the resting-state EEG signal (the average AUC of the four features at each window length is shown in Fig. 3A, and the AUC of Feature group B at each window length is shown in Fig. 3B). As shown in Fig. 3A, the highest AUC of the four features across window lengths was observed at 210 s (0.673 ± 0.076); this value was significantly different from that at all other window lengths except 120, 150, 180, 240, and 270 s. As shown in Fig. 3B, the highest AUC of Feature group B across window lengths was observed at 150 s (0.838 ± 0.102); this value was significantly different from that at almost all other window lengths. Detailed information regarding these analyses is shown in Supplementary Table 3.
Figure 3C shows the AUC of each frequency band at the optimal EEG window length for Feature 2. The highest AUC, 0.748 ± 0.163, was obtained in the low gamma band (39-41 Hz), and the AUC tended to increase from the low-frequency bands to the high-frequency bands. In particular, the high-frequency (low gamma) range yielded the strongest discriminative performance.
Discussion
In this study, we showed the effects of using spectral and temporal filters on the prediction of outcomes among patients with TLE.Our main findings are as follows.
(1) When predicting the outcome of TLE patients using resting-state EEG signals, simultaneously optimizing the spectral and temporal ranges greatly improved performance.
(2) When the EEG window length is greater than 2 min, using the gamma band and statistical parameters (especially the kurtosis and the maximum value) as features had a substantial impact on the prediction performance.
Synergy of spectral and temporal filters
We evaluated each analysis strategy to compare and investigate the effect of spectral and temporal filters on prediction performance. When only one filter (spectral or temporal) was optimized, the performance increased compared with the scenario in which no filter was optimized; however, the performance was further improved when both filters were optimized (Fig. 1). These results suggest that spectral and temporal filters must be used together to achieve a significant increase in performance, especially when using long-term EEG signals such as resting-state EEG. Additionally, we quantitatively compared the topology plot of each analysis strategy in terms of CS and ED, which represent the similarity of spatial patterns and the difference between the patterns, respectively. When the spectral filter was optimized, the CS between the two groups (SF and NSF) increased, which means that these groups had similar spatial patterns. The temporal filter seemed to be related to ED, indicating an increase in the intensity of the patterns; ED increased substantially when the spectral filter was applied. These results also show that synergy in the spatial-pattern measures (CS and ED) occurs when the spectral and temporal filters are optimized simultaneously (Supplementary Table 2).
Appropriate optimal EEG window length and EEG frequency band for SF prediction
As shown in Fig. 3A and B, we found that increasing the window length of the resting-state EEG signals to greater than 2 min led to a significant improvement in performance. We believe that the discriminative power of features could (1) exist in a specific section of the EEG signals or (2) occur over a specific length of EEG signals (or both).
As resting-state EEG involves no specific stimulus or action 32 , it is expected that the discriminative power of features would occur over a specific length of time rather than in a specific section.
As shown in Fig. 3C, we compared the SF prediction performances within narrow frequency bands. The use of the low gamma band (30-50 Hz) with Feature 2 (statistical parameters) led to a higher predictive value. Strengthening or weakening of cognitive function is one of the side effects of ASM treatment, which is secondary to the intended purpose of seizure control [33][34][35]. Although it is well known that long-term treatment with ASMs can adversely affect cognitive functions, such as attention, vigilance, and psychomotor speed 36,37, some ASMs (e.g., carbamazepine, lamotrigine, valproate) have positive psychotropic effects 38.
EEG modulation in the gamma band (> 30 Hz) has been shown to be correlated with large-scale brain network activity 39. In particular, modulation in this band is known to play a crucial role in cognitive processes (e.g., working memory, attention, and perceptual grouping) and is thus assumed to reflect the level of consciousness 40,41.
Additionally, the EEG modulation in the gamma band was reflected in the kurtosis and the maximum value. One phenomenon, namely sharp wave ripples, may account for the processes assessed by the kurtosis and maximum values. Sharp wave ripples, which support the consolidation of recently acquired memories or the planning of future actions, consist of several spectral components: a slow sharp wave (5-15 Hz), a high-frequency "ripple" oscillation (150-200 Hz), and a slow "gamma" oscillation (20-40 Hz). The fusion of sharp wave ripples could also be reflected as increased power in the slow gamma band 42. These prior findings lend credibility to our results with respect to the importance of low gamma and Feature 2.
Limitations and future work
Our study has several limitations. First, our analysis was based on individual EEG segments for each patient. We drew segments only once for all window lengths because we were comparing performance across window lengths, and we suspected that having different numbers of epochs for different window lengths might affect the results. The use of only one segment per patient might not fully capture the variability inherent in EEG signals; this approach may limit the generalizability of our findings, as multiple segments could provide a more comprehensive view of each patient's EEG characteristics. Second, the highest frequencies that can be observed through scalp EEG signals are only approximately 50 Hz 43; therefore, frequency ranges above 50 Hz would need to be investigated with another modality or an additional method. Third, most patients were already taking ASMs at the time of the EEG study. Fourth, because the dataset used in this study consisted of patients receiving mono- or polytherapy, it is difficult to characterize the effect of a particular ASM on the EEG signal. Finally, this retrospective study was conducted using a limited dataset, which could introduce bias in the characteristics of the epilepsy patients. Future research should apply more sophisticated methods that can aggregate multiple frequency bands and multiple time segments from resting-state EEG signals.
Conclusion
This study shows that the application of spectral and temporal filters to resting-state EEG signals enhanced the prediction of long-term patient outcomes when the spectral and temporal filters were simultaneously optimized.In particular, an EEG window length of greater than 2 min and the gamma band substantially impacted the prediction performance.This optimization strategy can be applied for the early identification of patients with drug-resistant epilepsy, as they are potential candidates for nonpharmacologic intervention.
Patients and data collection
We retrospectively analyzed the medical records and EEG data of patients with TLE who visited Seoul National University Hospital between 2014 and 2021. All included patients had experienced at least one clinical seizure and were confirmed as having TLE based on seizure semiology, EEG, and/or 3.0-Tesla magnetic resonance imaging throughout the follow-up period. All patients received ASM during the follow-up period. We included patients whose initial EEG data were obtained using the NicoletOne® EEG system (Natus, San Carlo, CA, USA). Demographic and clinical characteristics, including baseline and final seizure frequencies, were obtained through a retrospective review of medical records. A total of 46 patients with TLE were selected and divided into two groups according to the final outcome: the SF group (seizure-free for the last year of follow-up, n = 22) and the NSF group (at least one seizure in the last year of follow-up, n = 24). In our capacity as a tertiary referral hospital, we identified only one treatment-naïve patient with TLE, while all other patients were already using ASMs at the time of the EEG study. None of the patients had undergone ketogenic diet therapy. This study was conducted in accordance with the Declaration of Helsinki. This study was approved by the Institutional Review Board of Seoul National University Hospital (IRB No. H-2109-005-1251), and the need for informed consent was waived by the Institutional Review Board of Seoul National University Hospital due to the retrospective nature of the study. Detailed patient information can be found in Table 2.
Preprocessing
The resting-state EEG signals, recorded from 20 to 320 s, were epoched and then scaled by 10^6 to convert the measurements from volts to microvolts (µV), thus enhancing both the relevance and clarity of the data 44. Data from all 21 channels were used in further analysis. However, data referencing was conducted only with the following EEG channels: F3, Fz, F4, C3, Cz, C4, P3, Pz, P4, O1, and O2 45.
Analysis strategy
To determine the effect of appropriate spectral and temporal ranges in predicting SF outcomes of TLE patients, the following three strategies were compared. (1) In the OTS strategy, a grid search was applied to optimize the temporal range of the EEG signals in a fixed frequency band (0.1-51 Hz). The temporal segments were extracted only once with different durations, and all segments started precisely at the 0-s mark of the resting-state EEG data. The target temporal range was set at 1-s intervals from 1 to 30 s and 30-s intervals from 30 to 300 s (thus yielding a total of 39 temporal segments: 1, 2, 3, …, 28, 29, 30, 60, 90, …, 300 s). (2) In the OFS strategy, a grid search was applied to optimize the spectral range of the EEG signals, from 0.1 Hz (low cut)-2 Hz (high cut) up to 49 Hz (low cut)-51 Hz (high cut) (a total of 50 bands, each spanning 2 Hz), for the total resting state (300 s).
(3) In the OFTS strategy, a two-dimensional grid search was applied over the 50 spectral and 39 temporal ranges to optimize the spectral and temporal ranges simultaneously. Then, bandpass filtering, standardization 46, and segmentation were performed sequentially. Figure 4 shows the analysis pipeline for OFTS. In the case of NS, the total resting state (300 s) and a fixed bandpass from 0.1 Hz (low cut) to 51 Hz (high cut) were applied.
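For illustration only (not the authors' code), a schematic of how such a two-dimensional grid search over 2-Hz bands and window lengths could be organized in Python; the classifier, feature function, and array shapes are placeholder assumptions, and the real pipeline also includes the standardization and feature-extraction steps described below.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def bandpass(x, lo, hi, fs):
    """Zero-phase band-pass filter along the last (time) axis."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def ofts_grid_search(eeg, labels, fs, bands, window_lengths, featurize):
    """eeg: (n_patients, n_channels, n_samples); returns (best_band, best_length, best_auc)."""
    best = (None, None, -np.inf)
    for lo, hi in bands:
        filtered = bandpass(eeg, lo, hi, fs)
        for win_s in window_lengths:
            segment = filtered[..., : int(win_s * fs)]   # one segment starting at 0 s
            X = np.stack([featurize(patient) for patient in segment])
            auc = cross_val_score(LogisticRegression(max_iter=1000), X, labels,
                                  cv=5, scoring="roc_auc").mean()
            if auc > best[2]:
                best = ((lo, hi), win_s, auc)
    return best

# Grids mirroring the description: 50 bands of 2 Hz and 39 window lengths.
bands = [(0.1, 2.0)] + [(float(f), float(f) + 2.0) for f in range(1, 50)]
window_lengths = list(range(1, 31)) + list(range(60, 301, 30))
```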
Feature extraction
Among the many EEG features, four types of features were selected based on the following three questions: (1) Is it a time-domain feature? (The analysis pipeline included a narrow bandpass filter.) (2) Has it ever been used in an EEG-based epilepsy study? (3) What is the computational cost? (Supplementary
Many studies have used energy as an indicator of brain activity 18,19. Therefore, the linear and nonlinear energy of the EEG signals were included in Feature group C 52. A total of 42 energy parameters were acquired using Eqs. (10), (11) and used as ML inputs.
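As an illustration of what such per-channel energy features might look like, a minimal sketch is given below; it uses the common Teager-Kaiser operator as the nonlinear energy, since the paper's Eqs. (10)-(11) are not reproduced in this excerpt, so the exact definitions are an assumption. With 21 channels and two values per channel this yields 42 features, consistent with the count given above.

```python
import numpy as np

def linear_energy(x):
    """Sum of squared samples of a single-channel signal."""
    return float(np.sum(x ** 2))

def nonlinear_energy(x):
    """Mean Teager-Kaiser energy: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    return float(np.mean(x[1:-1] ** 2 - x[:-2] * x[2:]))

def energy_features(eeg):
    """eeg: (n_channels, n_samples) -> linear and nonlinear energy per channel."""
    return np.array([v for ch in eeg for v in (linear_energy(ch), nonlinear_energy(ch))])
```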
Zero-crossing rate (Feature group D)
The zero-crossing rate is the rate at which a signal changes from positive to zero to negative or from negative to zero to positive 53. This parameter has been used to detect or classify seizures from normal EEG signals in many studies 54,55. In this study, the zero-crossing rate and its first derivative were included in Feature group D 52. A total of 42 zero-crossing parameters were acquired using Eqs. (12), (13) and used as ML inputs.
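A possible per-channel implementation is sketched below; the "first derivative" is taken here as the zero-crossing rate of the differenced signal, which is one plausible reading of the description rather than the paper's exact Eqs. (12)-(13). Again, 21 channels times 2 values gives the 42 parameters mentioned above.

```python
import numpy as np

def zero_crossing_rate(x):
    """Fraction of consecutive sample pairs whose signs differ."""
    return float(np.mean(np.signbit(x[:-1]) != np.signbit(x[1:])))

def zcr_features(eeg):
    """eeg: (n_channels, n_samples) -> ZCR of each channel and of its first difference."""
    return np.array(
        [v for ch in eeg for v in (zero_crossing_rate(ch), zero_crossing_rate(np.diff(ch)))]
    )
```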
ICC (Feature group E)
In the context of EEG analysis, cross-correlation is a powerful tool 56,57 that offers unique insights into the functional connectivity and relationships between different brain regions. We used the Pearson correlation coefficient as the correlation coefficient in Eq. (14), which is a measure of the linear correlation between two variables X and Y. A total of 210 ICCs were acquired using Eq. (14), and 20 graph measurements 23,58 were used as ML inputs. Graph measurements were computed using the NetworkX 59 and nilearn 60 Python libraries.
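A sketch of how pairwise Pearson correlations can be turned into graph-level features with NetworkX is shown below; the threshold and the particular graph measures are illustrative assumptions, since the 20 graph measurements used in the study are not listed in this excerpt (21 channels give the 210 unique pairs mentioned above).

```python
import numpy as np
import networkx as nx

def correlation_graph_features(eeg, threshold=0.5):
    """eeg: (n_channels, n_samples). Build a graph from |Pearson r| and summarize it."""
    r = np.corrcoef(eeg)                       # channel-by-channel correlation matrix
    adj = (np.abs(r) > threshold).astype(int)  # simple binarization (illustrative choice)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    return {
        "density": nx.density(g),
        "transitivity": nx.transitivity(g),
        "avg_clustering": nx.average_clustering(g),
        "mean_degree": float(np.mean([d for _, d in g.degree()])),
    }
```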
ICPLV (Feature group F)
In the context of EEG analysis, the ICPLV is a powerful tool for investigating phase synchronization between brain regions. Its ability to provide insights into the timing and coordination of brain activities makes it invaluable in both research and clinical settings 61,62. A total of 210 ICPLVs were acquired using Eq. (15), and 20 graph measurements 23,58 were used as ML inputs.
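One standard way to compute inter-channel PLVs from the analytic signal is sketched below; the paper's Eq. (15) is not shown in this excerpt, so this is only an assumed formulation, and the input is assumed to be already narrow-band filtered.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_values(eeg):
    """eeg: (n_channels, n_samples), band-pass filtered.
    Returns the upper-triangular PLVs (210 values for 21 channels)."""
    phase = np.angle(hilbert(eeg, axis=-1))
    n_ch = eeg.shape[0]
    plv = [
        np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
        for i in range(n_ch)
        for j in range(i + 1, n_ch)
    ]
    return np.array(plv)
```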
Spectral parameters (Feature group G)
Spectral power and phase, which are acquired using the fast Fourier transform (FFT) (Eq. 16), are crucial components of EEG signal analysis; these parameters provide deep insights into brain function and neural activity 63,64. We extracted the following spectral parameters: the mean, median, minimum, maximum, skewness, and standard deviation of the power and phase. A total of 126 spectral parameters were acquired using Eq. (16) and used as ML inputs.
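A minimal sketch of such summary statistics of FFT power and phase is given below; the exact parameter set and counts follow the paper's Eq. (16), which is not reproduced here, so the selection below is only an assumption.

```python
import numpy as np
from scipy.stats import skew

def spectral_features(eeg):
    """eeg: (n_channels, n_samples). Summary statistics of FFT power and phase per channel."""
    spectrum = np.fft.rfft(eeg, axis=-1)
    feats = []
    for quantity in (np.abs(spectrum) ** 2, np.angle(spectrum)):   # power, then phase
        for ch in quantity:
            feats.extend([ch.mean(), np.median(ch), ch.min(), ch.max(), skew(ch), ch.std()])
    return np.array(feats)
```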
Figure 1 .
Figure 1. Comparative Analysis of AUC Values Across Different Analysis Strategies and Features. (A) Boxplot representation showing the aggregate AUC values for each analysis strategy. This part of the figure combines results from nine distinct feature groups, with each strategy displaying 45 data points. These points represent the AUC values obtained from fivefold cross-validation for each of the feature groups, thereby illustrating the collective performance across multiple validation scenarios. (B) Bar charts depicting the AUC values for each individual feature under the different analysis strategies. The chart provides a feature-specific comparison, illustrating how each feature group contributes to the overall efficacy of the strategies.
Figure 2 .
Figure 2. (A) Topology plots for each analysis strategy using the kurtosis and maximum value of Feature 2. Each topology plot consists of the average value of each group (seizure-free (SF) or non-seizure-free (NSF)). (B, C) Comparison of the SF vs. NSF groups according to the kurtosis (B) and the maximum value (C) under OFTS for each patient. For each patient's kurtosis and maximum values in (B, C), the average over all EEG channels was used. Asterisks indicate that the p value is less than 0.05.
Figure 3 .
Figure 3. (A) Average performance (AUC) for all feature groups with each EEG window length on the optimal frequency band. Asterisks indicate that the p value is lower than 0.05 compared with the AUC at 270 s. (B) Performance (AUC) of Feature 2 on the optimal frequency band at each EEG window length. Asterisks indicate that the p value is lower than 0.05 compared with the AUC at 150 s. (C) Performance (AUC) of Feature 2 at the optimal EEG window length for each frequency band. The frequency range of each band is as follows: delta band, 0.1-4 Hz; theta band, 4-8 Hz; alpha band, 8-12 Hz; beta1 (low beta) band, 12-16 Hz; beta2 (beta) band, 16-20 Hz; beta3 (high beta) band, 20-30 Hz; and low gamma band, 30-50 Hz. The asterisk indicates the frequency band with the highest performance. https://doi.org/10.1038/s41598-023-49255-2
Table 1 .
The classification performance of each time-domain feature under the optimal frequency and time strategy. Significant values are in [bold]. The optimal frequency band and window length for each feature were as follows. Values in bold represent the best results within each column.
Table 2 .
Demographic and clinical characteristics of the seizure-free and non-seizure-free groups. EEG, electroencephalography; SD, standard deviation; IED, interictal epileptic discharge; ASM, antiseizure medication; CNS, central nervous system. a Chi-square test. b Cramer's V. c Student's t-test. d Cohen's d. e Mann-Whitney U test. f Mann-Whitney effect size r. g Fisher's exact test.
"Medicine",
"Computer Science"
] |
Involvement of Parvalbumin-Positive Neurons in the Development of Hyperalgesia in a Mouse Model of Fibromyalgia
Fibromyalgia (FM) presents as chronic systemic pain, which might be ascribed to central sensitization, in which pain information processing is amplified in the central nervous system. Since patients with FM display elevated gamma oscillations in the pain matrix and parvalbumin (PV)-positive neurons play a critical role in induction of gamma oscillations, we hypothesized that changes in PV-positive neurons are involved in hyperalgesia in fibromyalgia. In the present study, to investigate a role of PV-positive neurons in neuropathic pain, mice received reserpine administration for 3 consecutive days as an animal model of FM (RES group), while control mice received vehicle injections in the same way (VEH group). The mice were subjected to hot-plate and forced swim tests, and immuno-stained PV-positive neurons were counted in the pain matrix. We investigated relationships between PV-positive neuron density in the pain matrix and pain avoidance behaviors. The results indicated that the mice in the RES group showed transient bodyweight loss and longer immobility time in the forced swim test than the mice in the VEH group. In the hot-plate test, the RES group showed shorter response latencies and a larger number of jumps in response to nociceptive thermal stimulus than the VEH group. Histological examination indicated an increase in the density of PV-positive neurons in the primary somatosensory cortex (S1) in the RES group. Furthermore, response latencies to the hot-plate were significantly and negatively correlated with the density of PV-positive neurons in the S1. These results suggest a critical role for PV-positive neurons in the S1 to develop hyperalgesia in FM.
INTRODUCTION
Fibromyalgia (FM) presents with chronic systemic pain along with psychiatric (e.g., depression) and autonomic nervous symptoms (1)(2)(3)(4). Epidemiological studies of FM in various countries have reported an average prevalence of 2.7% (5)(6)(7)(8). However, FM is refractory, and its pathophysiological mechanisms are not fully understood. Treatment methods for FM are under development, and various pharmacological therapies combined with non-pharmacological therapies have been used (9)(10)(11)(12). Recently, central sensitization, in which pain information processing is amplified in the central nervous system, has been suggested to play an essential role in FM (13)(14)(15).
Consistent with the above hypothesis, functional magnetic resonance imaging (fMRI) studies have reported hyperactivity in multiple brain areas that process pain, including the somatosensory area, prefrontal cortex, anterior cingulate cortex, and insula in response to mechanical, thermal and electrical stimulation in patients with FM as well as an animal model of FM (16)(17)(18)(19). Neurophysiological studies also reported that excitability in the primary somatosensory (S1) cortex was increased in patients with FM (20,21). Furthermore, gamma oscillations in S1 were correlated with subjective pain (or behavioral responses to nociceptive stimuli in rats) and/or physical stimulus intensity in intact humans and rats (22)(23)(24)(25)(26)(27). Gamma oscillations were elevated in the S1, motor cortex, insula, and prefrontal cortex in patients with FM compared with controls (28).
Several animal models of FM have been reported. Repeated injection of reserpine, which depletes monoamines in the nervous system, has been used as an animal model of FM (29,30). In this model, the animals show pain-associated behaviors (hyperalgesia and allodynia), depression-like symptoms, and gastrointestinal dysfunction (autonomic symptoms), all of which are observed in human FM. Furthermore, reserpine administration increased the responses of mechanoreceptive C-nociceptors and the activity of dorsal horn microglia in the spinal cord (31). These previous human and animal studies suggest that the forebrain pain matrix might be hyperactive, giving rise to the complex symptoms of FM. On the other hand, a recent animal study reported that optogenetic activation of parvalbumin (PV)-positive neurons in the S1 induced gamma oscillations of local field potentials and pain-related avoidance behaviors (32). Furthermore, optogenetic activation of PV-positive neurons in the prelimbic cortex also enhanced avoidance responses to nociceptive stimuli (33). Based on these findings, we hypothesized that PV-positive neurons play an essential role in pain information processing in FM. In this study, we investigated the relationship between PV-positive neurons in the forebrain pain matrix and pain sensitivity in an animal model of FM induced by repeated reserpine administration.
Subjects
Eight to 10-week-old C57BL/6J male mice (n = 60, Japan SLC, Hamamatsu, Japan) were used. The mice were housed in groups (four per cage) in a temperature-controlled experimental room (22 ± 1 °C) with light control (lights on from 07:00 to 19:00) and food and water available ad libitum. The mice were treated in accordance with the guidelines for the care and use of laboratory animals approved by the University of Toyama and the National Institutes of Health's Guide for the Care and Use of Laboratory Animals. The experimental protocol of the study was approved by the Animal Experiments and Ethics Committee at the University of Toyama (Permit No. A2016MED-2 3).
Animal Model of FM by Reserpine
An animal model of FM was produced using the protocol described in previous studies (31,34,35). Reserpine (Nacalai Tesque, Inc., Kyoto, Japan), adjusted to a concentration of 0.25 mg/mL with 0.5% acetic acid, was injected (0.25 mg/kg, s.c.) into the back skin once a day for 3 successive days (RES group). As a control, a vehicle solution (0.5% acetic acid) was similarly injected (VEH group).
Hot-Plate Test
Previous studies reported that gene expression of the acid-sensing ion channel 3 (ASIC3) was increased in the dorsal root ganglion in the same animal model of FM, and that a selective blocker of ASIC3 (APETx2) decreased both mechanical and thermal hyperalgesia (31,36). Clinical studies reported that not only mechanical but also thermal hyperalgesia are important factors predicting clinical pain intensity in patients with chronic pain, including FM (37,38). In the present study, thermal hyperalgesia (avoidance latency) was assessed using the hot-plate test, the data from which were directly applied to correlational analyses with PV-positive neuron density (see below).
Previous studies reported that pain hypersensitivity (mechanical allodynia) was detected 3 days after the first reserpine injection in this animal model and gradually returned to baseline levels 10 to 14 days after the first reserpine injection (29,31). Therefore, behavioral responses to noxious thermal stimuli were observed 3 days after the first injection in the present study. After placing each mouse on a hot-plate apparatus (Muromachi Kikai, Japan), the latency of behavioral responses [hindpaw licking or jumping (whichever came first)] and the number of jumps were measured. The surface temperature of the hot-plate was set at 50 ± 0.5 °C before testing, and the test was terminated at 60 s to avoid tissue damage to the animals.
Forced Swim Test
Previous studies reported that depression-like behaviors in the forced swim test were not observed 3 days after the first reserpine injection but were observed 5-14 days after the first reserpine injection (29,34,35). In the present study, two different groups of mice underwent the forced swim test 3-4 and 10-11 days after the first injection to determine the depressive behaviors caused by reserpine. The procedures were conducted in accordance with those of Porsolt et al. (39). Each mouse was placed in water (25 ± 1 °C) in a glass beaker (23 × 35 × 20 cm; diameter × height × depth) for 15 min 3 or 10 days after the first injection. Twenty-four hours after the first forced swim test (i.e., 4 or 11 days after the first injection), the mice were again placed in the same glass beaker with water for 5 min, and their behavior was recorded with a video camera. The immobility time was measured for 5 min during the second forced swim test. Immobility was defined as the absence of any movement except that needed to keep the mouse's head above the water. After testing, the animals were towel-dried and returned to their cages.
Immunohistochemistry
PV-positive neurons were immunostained based on the same protocol used in our previous studies (40)(41)(42)(43)(44). After the hot-plate test performed 3 days after the first injection, the mice were sacrificed under deep anesthesia with mixed anesthetics (5.0 mg/kg butorphanol, 4.0 mg/kg midazolam, and 0.75 mg/kg medetomidine, i.p.) by transcardial perfusion with heparinized 0.01 M phosphate-buffered saline (PBS), followed by 4% paraformaldehyde dissolved in 0.1 M phosphate buffer (PB). After perfusion, the brains were post-fixed in 4% paraformaldehyde overnight. The fixed brains were then immersed in 30% sucrose until they sank to the bottom. The brains were then cut into 40 µm sections, collected in 0.01 M PBS, and stored in an antifreeze solution (25% glycerin, 25% ethylene glycol, and 50% 0.1 M PB) at −20 °C. Two stains were used on serial sections every 40 µm, one for PV immunocytochemistry and the other for Cresyl violet (Nissl staining). For PV immunostaining, the sections were processed with a mouse monoclonal anti-PV antibody according to our previous protocol (40)(41)(42)(43)(44). Briefly, the sections were washed 3 times with 0.01 M PBS for 5 min, blocked with 3% normal horse serum for 30 min, and then incubated overnight at 4 °C with the mouse monoclonal anti-parvalbumin antibody (1:10,000 dilution in 1% horse serum in PBS, Sigma, St. Louis, MO, USA). The sections were then washed 3 times with 0.01 M PBS for 5 min and incubated with biotinylated horse anti-mouse IgG (1:200 dilution, Vector, Burlingame, USA) for 50 min at room temperature. After washing, the sections were incubated with avidin-biotin complex reagent (Vector) for 50 min and visualized with a detection solution (0.25 mg/ml 3,3′-diaminobenzidine, 0.03% H2O2 in PB). Negative control sections were treated identically except for omission of the primary antibody. No reaction product was observed in any of the control sections.
The PV-positive neurons were counted using stereological software (Stereo Investigator version 7.53.1, MicroBrightField, Williston, VT, USA). The cell bodies of PV-positive neurons in the sample sites randomly dispersed in each brain region were counted using a 20× objective lens. The counting conditions were as follows; sampling grid sizes, 280.87 × 765.50-µm in the mPFC, and 259.00 × 372.40-µm in the S1, amygdala, and insula; counting frame, 200 × 200-µm; optical dissector height, 5 µm. The software automatically set up square counting frames with exclusion lines. Within the counting frame, only PV-positive cell bodies that did not contact the excluding line were counted. The detailed theoretical and technical methodology for stereological estimation of cell density has been previously reported (46). The PV-positive neuron density was estimated in each brain area of each animal.
Statistical Analysis
Data are shown as the mean ± SEM. Normality of the data was checked with the D'Agostino & Pearson test. Bodyweights were compared between the two groups using a repeated-measures two-way analysis of variance (ANOVA) with post-hoc Bonferroni tests; in this analysis, the degrees of freedom were corrected by the Greenhouse-Geisser method where appropriate. Data from the behavioral tests and the PV-positive neuron densities were compared between the VEH and RES groups using unpaired t-tests with Welch's correction (Welch's test), except for the number of jumps in the hot-plate test, the immobility time in the forced swim test 11 days after the first injection, and the PV-positive neuron density in the infralimbic cortex; these data were analyzed with the Mann-Whitney U-test because they were not normally distributed. A linear regression analysis was used to analyze the relationship between the response latencies in the hot-plate test and PV-positive neuron density. Prism 8 (GraphPad Software Inc.) was used to analyze the data. A p < 0.05 was considered statistically significant.
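For readers who wish to reproduce this style of comparison, a minimal SciPy sketch is given below; the arrays are placeholder values, not the study's data, and the tests mirror those named above (normality check, Welch's t-test, Mann-Whitney U-test).

```python
import numpy as np
from scipy import stats

# Placeholder per-animal values standing in for, e.g., hot-plate latencies (s).
veh = np.array([38.1, 41.5, 36.2, 44.0, 39.8, 37.4, 42.3, 40.6])
res = np.array([22.4, 25.7, 19.9, 27.3, 24.1, 21.8, 26.5, 23.0])

# D'Agostino & Pearson normality check for each group.
print(stats.normaltest(veh), stats.normaltest(res))

# Welch's unpaired t-test when both groups are approximately normal ...
print(stats.ttest_ind(res, veh, equal_var=False))

# ... otherwise the Mann-Whitney U-test.
print(stats.mannwhitneyu(res, veh, alternative="two-sided"))
```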
Bodyweights of the Reserpinized Animals
The bodyweight of the RES group decreased after reserpine injection ( Figure 1A). The statistical analysis indicated significant main effects of group [F (1,26)
Hot-Plate Test
The mice underwent the hot-plate test 3 days after the first injection (3 d in Figure 1A). The response latencies to nociceptive thermal stimuli were significantly shorter in the RES group than in the VEH group (Figure 1Ba). The number of jumps was significantly greater in the RES group (5.8 ± 1.6 times, n = 16) than in the VEH group (0.0 ± 0.0 times, n = 16; Mann-Whitney U-test, p < 0.0001; Figure 1Bb). These results indicate that pain sensitivity was increased in the RES group.
Forced Swim Test
Figure 1C shows the immobility time 4 and 11 days after the first injection (4 and 11 d in Figure 1A). On day 4, the immobility time in the RES group (156.9 ± 19.9 s, n = 8) tended to be longer than that in the VEH group (109.3 ± 17.7 s, n = 8; Welch's test, p = 0.0967; Figure 1Ca). On day 11, the immobility time was significantly longer in the RES group (202.8 ± 12.1 s, n = 6) than in the VEH group (155.1 ± 8.9 s, n = 6; Mann-Whitney U-test, p = 0.0152; Figure 1Cb).
PV-Positive Neuron Density
Example photomicrographs of PV-positive neurons in S1 for the VEH and RES groups are shown in Figures 2A,B. The number of PV-positive neurons was greater in the RES group than in the VEH group. Figure 2C shows the PV-positive neuron density (cells/mm3) in the S1 forelimb (S1FL) (Figure 2Ca) and S1 hindlimb (S1HL) regions (Figure 2Cb), and the cell density in the S1L (mean cell density of the S1FL and S1HL) (Figure 2Cc). Welch's test indicated that the PV-positive neuron density was significantly greater in the RES group than in the VEH group in each area (S1FL, p = 0.0002; S1HL, p = 0.0004; S1L, p = 0.0002). However, in the other brain regions, no significant differences were observed. In the mPFC (Figure 3A), there were no significant differences in PV-positive neuron density between the RES and VEH groups in the prelimbic cortex (PrL) (Figure 3Aa), infralimbic cortex (IL) (Figure 3Ab), or anterior cingulate cortex (ACC) (Figure 3Ac) (IL: Mann-Whitney U-test, p > 0.05; other regions: Welch's test, p > 0.05). In the amygdala (Figure 3B), there were no significant differences between the RES and VEH groups in the lateral nucleus (LA) (Figure 3Ba), basolateral nucleus (BLA) (Figure 3Bb), or intercalated cells (ITC) (Figure 3Bc) (all regions: Welch's test, p > 0.05). In the insular cortex (Figure 3C), no significant differences were observed between the VEH and RES groups in the granular insula (GI) (Figure 3Ca), dysgranular insula (DI) (Figure 3Cb), or agranular insula (AI) (Figure 3Cc) (all regions: Welch's test, p > 0.05).
Reproduction of the FM Model
A previous study reported that the metabolites of serotonin, dopamine, and noradrenaline in the cerebrospinal fluid (CSF) were lower in patients with FM, suggesting that catecholamine levels may be lower in the brain (47). Consistently, the animal model of FM with repeated reserpine administration replicated human FM symptoms and displayed decreases in catecholamines in the brain and spinal cord (29,34,35,48,49). The present study also replicated characteristic symptoms of human patients with FM and the animal model of FM reported in previous studies. First, patients with FM often present with eating disorders and/or bodyweight loss (50,51). Previous animal studies also reported that the FM mouse model displayed the lowest bodyweight 3 days after the first injection (31,48). After the reserpine administration, access to food was reduced, eating time was extended, and food intake was sharply reduced (52). In the present study, the RES group also showed a decrease in bodyweight 3-5 days after the first injection.
FIGURE 2 | Effects of repeated reserpine injection on PV-positive neurons in S1. (A,B) Photomicrographs of the mouse S1 in the VEH (A) and RES (B) groups. Insets in (a) are shown in (b) as enlarged views. The number of PV-positive neurons was increased in the RES group. S1HL, S1 hindlimb area; S1FL, S1 forelimb area. (C) Comparison of the PV-positive neuron density in the S1FL (a), S1HL (b), and S1L (c) between the VEH and RES groups. S1FL, S1 forelimb area; S1HL, S1 hindlimb area; S1L, S1 leg area (mean of S1FL and S1HL). ***p < 0.001 (Welch's test). Open circles, VEH group; filled circles, RES group. Numbers in parentheses indicate the number of animals.
Second, previous studies reported that reserpine-induced changes in pain sensation include mechanical hyperalgesia of the skin and muscles and thermal hyperalgesia. A single dose of reserpine (4 to 5 mg/kg) was found to cause skin and muscle hyperalgesia several hours after injection and transiently induced thermal hyperalgesia (53,54). Repeated administration of reserpine resulted in a decrease in the escape threshold for mechanical stimulation of skin and muscle 3 to 14 days after the first injection, while a decrease in escape latency to thermal stimulation was observed 3 to 4 days after the first injection (34,35,49). The present results, in which significant thermal hyperalgesia in the hot-plate test was observed 3 days after the first injection, were consistent with those of previous studies. Third, depression is an important characteristic of human FM, and the same pathophysiological mechanisms may be involved in both depression and changes in pain sensitivity (55,56).
Depression-like symptoms (i.e., immobility in the forced swim test) were not observed 3 days after the first injection of reserpine but were observed 5-14 days after the first injection (29,34,35). Consistently, the immobility time tended to increase 4 days after the first injection and was significantly increased 11 days after the first injection in the RES group. These findings indicate that the present study replicated the symptoms of the FM model with repeated reserpine administration.
Relationship Between PV-Positive Neuron Density and Hyperalgesia
In this study, repeated reserpine administration increased PV-positive neuron density in S1, and there was a negative correlation between PV-positive neuron density and behavioral latency in the hot-plate test. Previous studies reported that optogenetic activation of fast-spike PV-positive neurons controlled pyramidal neuron activity and generated gamma oscillations above 40 Hz (57-59). Consistently, optogenetic activation of PV-positive neurons in the S1 induced gamma oscillations (32). Since gamma oscillations in S1 were correlated with behavioral responses to nociceptive stimuli and gamma oscillations were elevated in S1 in patients with FM (see Introduction), the present results with elevated PV-positive neuron density in S1 and decreases in response latencies in the hot-plate test suggest that gamma oscillations were increased in the RES group. This further suggests that increases in PV-positive neurons in S1 are involved in hyperalgesia.
Optogenetic activation of PV-positive neurons in S1 not only increased behavioral sensitivity to nociceptive stimuli but also markedly increased activity of the rostroventral medulla (RVM), which functions as the descending pain modulatory system (32). It has been demonstrated that the periaqueductal gray (PAG) and RVM in the midbrain regulate nociceptive inputs (60)(61)(62)(63)(64). ON and OFF cells are mixed in the RVM. Nociceptive information processing is suppressed by the activity of OFF cells, whereas it is promoted by the activity of ON cells (65)(66)(67). In an FM model with reserpine administration, mechanoreceptive C nociceptor responses and activity of dorsal horn microglia in the spinal cord were increased (31), and activated microglia might disinhibit dorsal horn nociceptive neurons (68). Along with the reduction of descending pain-inhibitory catecholaminergic inputs to the spinal cord by reserpine (29,31), activation of PV-positive neurons in S1 might promote the activity of ON cells in the RVM, most of which might be non-serotonergic (69), to further amplify pain information processing in the dorsal horn.
On the other hand, human fMRI studies reported increased activity in the pre-frontal cortex, anterior cingulate, amygdala, and insula at rest and in response to heat noxious stimuli in patients with FM (70)(71)(72). The size of the amygdala changes in patients with FM (73)(74)(75). These previous studies suggest that these brain regions might be involved in the pathological processes in FM. However, PV-positive neuron density did not change in these brain regions in the present study. These findings suggest that pathological alterations in PV-positive neurons specifically occur in S1 in an animal model of FM with repeated reserpine administration. However, in the present study, reserpine was administered for only three consecutive days, suggesting that the present results might reflect acute effects. A larger number of reserpine injections would induce changes in other brain regions since sustained changes in catecholamine levels are critical to inducing hyperalgesia (49). Furthermore, in the present study, the animals were sacrificed 3 days after the first injection. Therefore, it is also possible that a longer duration after the first injection might be required to induce changes in PV-positive neurons in other brain regions. Further studies are required to confirm the S1 specificity of PV-positive neuronal changes in FM.
Possible Pathophysiological Mechanisms of FM by Reserpine
A previous clinical study reported decreases in catecholamine metabolites in the CSF in FM patients, but no alteration of those levels in patients with rheumatoid arthritis, suggesting that alteration of catecholamine metabolites is a cause, but not a consequence, of chronic pain (47). Previous studies reported that catecholamines in the brain suppressed gamma oscillations, whereas their depletion increased gamma oscillations. Dopamine controls gamma oscillations differently depending on its receptor type (76). However, gross depletion of dopamine by pharmacological lesions of dopaminergic terminals in the striatum was found to increase gamma oscillations (77). Furthermore, dopamine reduced gamma oscillation through the α1-adrenergic receptor in the primary motor cortex (78). In addition, electrical stimulation of the dorsal raphe nucleus to release serotonin downregulated cortical gamma oscillation (79), while pharmacological stimulation of the locus coeruleus to release noradrenalin reduced gamma oscillation in the dentate gyrus (80). Another line of evidence also indicated an involvement of reserpine in induction of gamma oscillations: reserpine injections increased rapid eye movement (REM) sleep (81), in which gamma oscillations increased compared with non-REM sleep (82). On the other hand, pregabalin, an antagonist of voltage-dependent Ca 2+ channels (VDCCs), is reported to be effective in treating FM (83). VDCCs are reported to be critical for gamma oscillations in the thalamocortical system (84). All of these findings support the critical role of gamma oscillation in pain information processing in the forebrain of FM. Gamma oscillation is reported to induce synaptic plasticity (85,86), by which pain sensory circuits might be strengthened in FM. Alteration of PV-positive neurons in the present study may reflect these pathological changes induced by reserpine.
Limitation
Previous studies reported that non-neuronal cells can express PV: ependymal cells in the ventricular wall may express PV in pathological conditions such as brain injury and ventricular stenosis (87,88). However, PV is a neuronal marker in the brain of intact animals (89,90). Furthermore, the staining distributions of PV-positive cells in the present study were comparable to those of PV-positive neurons observed in the cingulate cortex and reticular thalamic nucleus, as reported previously (91,92). Although we did not perform double staining of PV and NeuN, these findings suggest that the PV-positive cells in the present study were neurons rather than glial cells.
Second, we did not analyze PV-positive neurons in the dorsal horn of the spinal cord, since low-frequency oscillations (5-10 Hz) have been reported in the dorsal horn (93), in contrast to the high-frequency gamma oscillations in the forebrain. However, oscillation frequencies in the dorsal horn could increase if excitatory inputs to PV-positive neurons in the dorsal horn are increased (94). Reserpine could alter descending projections from the forebrain to the dorsal horn (see above) and consequently increase excitatory inputs to PV-positive neurons, which might lead to activation of PV-positive neurons and induction of gamma oscillations in the dorsal horn. Third, although the available information suggests that gamma oscillations may be increased by repeated reserpine injection (see above), there is no direct neurophysiological evidence in the present or previous studies that repeated reserpine injection induces gamma oscillations in S1. Fourth, the present study lacks direct pharmacological evidence indicating that catecholaminergic depletion increases PV-positive neuron density in S1, leading to hyperalgesia. However, indirect evidence supports the present idea: clinical studies reported that serotonin and noradrenaline reuptake inhibitors reduced FM symptoms, including hyperalgesia (95)(96)(97), while an animal study reported that microinjection of a serotonin reuptake inhibitor into S1 attenuated thermal hyperalgesia (98). To prove or disprove the current idea of a PV-neuronal involvement in FM hyperalgesia, further studies are required to analyze the relationships between changes in catecholaminergic levels in the brain and pain sensitivity-related parameters (pain sensitivity, and PV-positive neuron density and gamma oscillations in S1 and the dorsal horn).
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The animal study was reviewed and approved by the Animal Experiments and Ethics Committee at the University of Toyama.
AUTHOR CONTRIBUTIONS
HNishij and TT designed the experiment. KM and HNishim performed the experiment. KM, HNishim, and HNishij analyzed the data. KM and HNishij wrote the manuscript. KM, HNishim, JM, TS, TT, TO, and HNishij revised the manuscript. All authors discussed the results, and approved the final manuscript.
FUNDING
This study was supported by research funds from the University of Toyama. | 5,772.2 | 2021-02-26T00:00:00.000 | [
"Biology"
] |
Effects of Adjunct Questions on L2 Reading Comprehension with Texts of Different Types
Answering text-related questions while reading is a questioning strategy known as adjunct questions or embedded questions, whose benefits for comprehension have been established in first-language reading. The present study aims to examine the effects that different adjunct questions exert on second-language (L2) readers' comprehension of texts of various types. One hundred and forty-four intermediate-level Chinese EFL learners participated in this study and were divided randomly into six groups. Each group was given either a narrative or an expository text containing 'what' questions, 'why' questions, or no questions. A brief topic familiarity questionnaire was attached to the end of each text paper. The results showed that inserted adjunct questions improved the readers' comprehension of both expository and narrative texts, but only narrative texts inserted with 'why' questions showed significant effects on L2 reading comprehension. The findings suggest that text types and question types modulate the effects of inserted adjunct questions on the English reading of intermediate learners. Pedagogical implications and suggestions for future studies are provided.
Introduction
When educated people think about the importance of literacy, most agree that reading ability plays an essential role at school, work, and in society [1]. In fact, reading is considered the basis for later language development [2]. In foreign language classrooms in China, the ability to comprehend academic texts is widely regarded as a crucial competency that college students must develop. However, recent reforms in language education in China have advocated a reduction in classroom language teaching time after the Ministry of Education in China issued a policy on modifying subjects of study in schools and universities. Accordingly, language educators and researchers are now trying hard to find methods to cultivate language learners' ability to read academic texts independently and efficiently after class, one of which is using adjunct (also called embedded) questions in reading texts. Though no explicit definition of adjunct questions has been given, relevant research performed so far has helped us to provide the following working definition. Adjunct questions refer to content-related or text-related questions inserted into the reading material by language educators requiring readers to answer while reading, as these in-text questions help readers recall text information [3,4]. Researchers have found this technique helpful in enhancing reading comprehension in L1 [5].
So far, however, studies on the effects of adjunct questions on L2 readers remain limited [3][4][5][6][7][8][9][10]. Though most researchers have demonstrated a limited benefit of adjunct questions for L2 reading comprehension, a closer examination of the relevant literature reveals that the effects of adjunct questions on reading comprehension were often not significant [4] and frequently varied with different comprehension measurements [10]. Additionally, previous researchers have analyzed the effects of adjunct questions on L2 reading comprehension with either expository texts [3][4][5][10] or scientific texts [9]. In addition, researchers have not reached a consensus on the effect of adjunct questions on L2 reading comprehension. Textbooks adopted by universities in China usually include materials on different subjects written in English. Though most of the reading texts are expository in nature, some are narratives. The way to comprehend expository texts differs greatly from that of narrative texts. Narrative texts are basically stories, with clear themes, plots, and key elements that make them easier to comprehend than expository texts. The present study, therefore, aims at comparing and contrasting the effects that different types of adjunct questions might have on L2 reading comprehension with both expository and narrative texts across different comprehension measurements. The results of this study might offer more experimental evidence on the effects of adjunct questions and practical teaching implications for Chinese EFL teachers by providing guidance on training independent and autonomous L2 readers [7].
Literature Review
Educators and language researchers have been trying for many years to find approaches to cultivating language learners' ability to read academic texts independently and efficiently. So far, two important methods have been identified. One is to focus on the readers by training them in reading skills or strategies. This has proven to be a slow and complicated process. The other is to focus on the reading texts by increasing the readability of the academic texts (also called textual enhancement) [11]. Both can enhance readers' independent and autonomous reading ability. In addition, researchers anticipated that textual enhancement could aid the comprehension of reading texts [6]. Inserting questions into reading passages and making analogies are two techniques that are widely used to enhance L1 reading comprehension [7].
Textual Enhancement in L1
Abundant research concerning the textual enhancement of reading comprehension in L1 has been conducted [12][13][14]. These studies have identified some essential methods that help improve reading comprehension. Making analogies and inserting questions or reading comprehension tasks into the reading texts are considered to be the two most effective techniques. Research using analogies to increase L1 reading comprehension started in the 1980s [15]. Research findings have indicated that analogy is a successful tool for helping adults acquire new scientific ideas [16], and it proves to be most effective for beginners such as children [15].
Research on inserting adjunct questions into reading texts has also been proven effective in the L1 reading process [12][13][14]. The reading comprehension process was described as a process of constructing a mental representation. The mental representation of a text consists of three levels: surface-level representation, text-based verbatim representation, and situational representation [17]. Constructing a situational representation of a text requires readers to find different types of information first. This means readers need to retain the relevant and rule out the irrelevant information. The second layer of the reading process involves making reasonable inferences by clarifying important points not explicitly stated in the text. That means readers will have to fill the information gap they encounter during the reading comprehension process. Readers can use the adjunct questions in the text as an anchor or a starting point. The adjunct questions can help them focus on the critical information, thus activating their prior relevant knowledge about the main topic and constructing a situational representation of a text [6].
Previous research on L1 reading has indicated that inserting questions into the reading passages helps to improve readers' memory of the text, thus enhancing comprehension [5,12,14].
Textual Enhancement in L2
Since L2 researchers started investigating the effects of inserted adjunct questions on L2 reading comprehension [5][6][7], they have utilized different methods. Some have tried the analogy method to assist adult L2 comprehension with domain-specific text types [18,19]. When analogies failed to achieve the expected positive effect on L2 reading comprehension, other researchers began to hypothesize that adjunct questions would also benefit L2 reading comprehension, because this method has been proven to be beneficial in L1 reading [3][4][5][6][7][8][10][14]. The limited number of studies conducted so far examined and analyzed variables such as learners' L2 language proficiency level, the embedded question types, the methods used to deal with the inserted adjuncts, the types of assessment tasks for reading comprehension, and text types. Variables like topic familiarity are also discussed in some research [4].
Learners' Language Proficiency
Constructing a situational representation requires the learners to have the ability to remove irrelevant information from the reading texts and to activate the relevant prior knowledge. These abilities are closely related to learners' language proficiency [14]. It is acknowledged that low-ability readers may have problems distinguishing between relevant and irrelevant information [14]. However, research on readers' language proficiency has revealed that using embedded adjunct questions to enhance readers' comprehension of both L1 and L2 proved to be more beneficial to low-ability readers than to high-level readers [3,6,9,10,14]. With 97 advanced learners of Spanish, Brantmeier et al. examined the effects of embedded adjunct questions, prior subject knowledge, and L2 Spanish proficiency on the L2 reading comprehension of scientific passages. The results showed that the readers' L2 proficiency and prior subject knowledge did not enhance the effects of embedded adjunct questions on reading comprehension [9]. A more recent study with intermediate and advanced learners of Chinese found that the adjunct questions only had positive effects on readers of intermediate levels with written recall as a comprehension assessment, but not on advanced L2 learners [10].
Adjunct Question Types
The embedded adjunct questions can be mainly divided into two types: what questions and why questions. The what questions are also referred to as targeted segment (TS) questions, which focus readers on detailed content identified in the text [5][6][7][8][14]. However, the why questions, also called elaborative interrogation (EI) questions, attempt to help readers activate their relevant prior knowledge to draw inferences from the reading texts [5,6,8,14]. In a word, what questions usually focus on defining concepts, while why questions aim at synthesizing and integrating information across different sections of the texts [3].
The effect of what questions on improving memory of prose has been verified [20][21][22]. What questions have also been proven to have positive effects on L2 reading comprehension if the questions focus on the same content as the assessment tasks [12].
Research on the effect of inserted why questions is much more complicated. Findings have shown that the positive effects of why questions have been examined in the following respects: improving memory of isolated facts for learners of all ages; helping readers to remember facts in short passages; and enhancing readers' comprehension of the whole text rather than of fragmentary facts in actual prose [13]. Though some researchers have noted that inserting other adjunct questions may achieve better reading effects than why questions, this was because the researcher provided a detailed explanation of the text, making the why questions unnecessary for that text [5,13].
Besides the studies on the respective effects of what and why questions on reading comprehension, Liu embedded both types of questions into the same expository text and identified positive effects on L2 reading comprehension measured by short-answer questions (SAQs), but not by multiple-choice questions (MCQs) [4].
Because prior studies have reported varied effects for different types of adjunct questions, more research evidence is needed to verify these differences.
Methods Used to Deal with the Embedded Questions
Research has shown that three methods have been adopted so far to deal with the embedded questions in the reading text. The first method is called "pause-and-consider", meaning that in the middle of the reading text, readers are asked to pause and consider the embedded questions. The second method is "pause-and-write", in which in the middle of the reading text, readers are asked to pause and then write down their answers to the embedded questions, either in their native languages or in the target languages. The third method provides no explicit instructions to the readers, leaving blank space for the readers to decide whether to answer the embedded questions. Researchers have used different methods to deal with the embedded questions and have achieved different results.
Brantmeier et al. adopted the third method in their research with a group of British learners of Spanish [6]. Even though they were not required to write down anything, most learners did jot down the answers to the embedded questions in their native language. No significant difference was found in reading assessments between the two groups. They concluded that not asking learners to answer the questions might be the leading reason for the lack of positive effects. Therefore, Brantmeier et al. conducted another study focusing on the effects of different methods on reading comprehension [7]. The findings indicated no significant differences among learners' reading comprehension, but the group using the pause-and-write method performed worst in the written recall task. They attributed this result to the demand of writing down the answers to the questions. Readers might focus too much on the information related to the inserted adjunct questions. This might distract readers' attention from the process of constructing a clear and coherent mental picture of the text. As a result, other important information that could help readers understand the academic text might be left out.
Callender et al. adopted the pause-and-write method in their research [5]. Three groups of intermediate-level Spanish learners were asked to answer the embedded questions in the target language. The results indicated that embedded questions did not enhance reading comprehension but only impeded the natural reading process. In fact, the what questions showed no effect on learners' performance in the multiple-choice task, and learners performed significantly worse in the written recall task in both passages tested. The why questions had negative effects on the performance of the written recall task.
The assessment task types and academic text types were considered to be two possible explanations. Therefore, the present study still takes these two variables into consideration.
Assessment Task Types
Four assessment tasks have been used by researchers in the study of textual enhancement: multiple-choice questions, written recall, short-answer questions, and sentence completion [3][4][5][6][7][8][10][14][18][23][24]. Research on L2 reading has indicated that the most commonly used assessment tasks were a combination of multiple-choice tasks and written recall [3,5,6,8,10]. The multiple-choice task was chosen because of its efficiency in scoring [6,8], while the written recall task was added to offset the potential bias of multiple-choice tasks on the final result, especially when Chinese EFL learners were involved [7].
Unlike multiple-choice tasks, in which both the choices and the answers were predetermined, written recall provided a useful complement because a free written recall task enabled readers to select the information they needed to report, and readers were free from any pre-set constraints or limitations [5]. These two assessment tasks were applied in most relevant research to examine L2 learners' reading comprehension and were proven to complement each other in evaluating L2 learners' reading performance [3][5][6][7][8][10]. However, there were inconsistent findings concerning the use of these two assessment tasks. Most studies found that the effects of adjunct questions were not significant when assessed by multiple-choice tasks with advanced readers [6][7][8][9]. When assessed by a multiple-choice cloze, the adjunct questions did not promote reading comprehension with intermediate learners of Chinese [10]. The written recall assessment yielded similarly inconsistent findings, with no effects found with Spanish readers [3,5] and promoting effects with Chinese readers [10].
As multiple-choice and written recall tasks are still considered the best combination for assessing readers' comprehension, this study still utilized these two types of measurements to provide further evidence for research on the effects of adjunct questions with intermediate-level Chinese L2 learners as subjects. The learners were allowed to accomplish the written recall task in either their native language (Chinese) or their target language (English) to ensure that it was a test of reading comprehension rather than a test of writing.
Text Types
The text type is defined as the structure and organization of the reading materials. Hiebert et al. [25] stated that "Extensive first language research (L1) has been conducted on how varied text types affect reading comprehension and some L2 reading research has addressed the same phenomena". Even though expository and narrative prose have been used most extensively, researchers studying the effects of the embedded questions made the expository text their first choice [4][5][6][7][8][14].
However, different types of text involve different types of processing. According to Callender et al., "expository texts encourage processing individual facts or pieces of information, whereas narratives or proses with an established schema naturally result in relational processing" [5].
We thus hypothesize that the why questions, which are said to be useful for memorizing isolated facts, will help improve the learners' reading comprehension of the narrative prose. Narrative text itself induces the processing of relational information, and the reading comprehension of such text types can be enhanced by manipulating textual difficulty. So far, few studies have been conducted on the effects of embedded adjunct questions on reading comprehension of this type, let alone comparing and contrasting the effects between expository texts and narrative texts inserted with different adjunct questions.
Reviewing prior research on textual enhancement, the present study explores the effects of embedding adjunct questions into narrative and expository texts on the L2 reading comprehension with Chinese intermediate EFL learners using multiple-choice questions and written recall tasks as assessment measures. The following research questions are to be answered in the present study: (1) What effects do embedded questions have on the intermediate-level Chinese EFL learners' reading performance? (2) What effects do the text types and embedded question types have on the intermediate-level Chinese EFL learners' reading performance?
Participants
About two hundred participants were recruited as a convenience sample. After taking a reading comprehension test, only about 153 participants who showed no significant differences in their reading competence joined the formal study (F(5, 138) = 0.827; p = 0.533). They were randomly assigned into six different groups. Each group consisted of 25-30 students majoring in chemistry, communication engineering, biology, material engineering, medicine, etc. Their average age was 19.5 years. Eighty percent were males and 20 percent were females, which was in proportion with the male-to-female ratio in technology-based universities. All expressed their willingness to participate in the study and signed the consent form in its Chinese version.
These participants had to meet the following three requirements: (1) all had passed College English Test Band 4 (CET 4) but failed College English Test Band 6 (CET 6); both tests are large-scale standardized proficiency tests for all Chinese college and university non-English majors, and those who passed CET 4 were rated as intermediate learners of English while those who passed CET 6 were considered advanced learners; (2) all had taken the pre-test on reading comprehension to ensure that there were no significant differences in their reading ability; (3) all had completed the required experiment tasks. Nine students failed to meet the above requirements, leaving one hundred and forty-four participants in the final sample.
Design
This experiment used a 2 × 3 between-group design. Question type and text type were the independent variables, while participants' reading comprehension was the dependent variable. We randomly assigned the final 144 participants to one of the six conditions. Three groups dealt with the expository text, with 21 participants in the no-question condition, 22 in the what-question condition, and 22 in the why-question condition. The numbers of participants who dealt with the narrative text were 26, 28, and 25, respectively, in the same condition order as for the expository text.
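For illustration only, the random assignment into the six cells of such a 2 × 3 design could be implemented as in the minimal Python sketch below. The participant IDs, condition labels, and random seed are our own assumptions, and the sketch yields six equally sized cells rather than the slightly unequal cell sizes reported above.

```python
# Minimal sketch: randomly assign 144 hypothetical participants to the six
# text-by-question conditions of a 2 x 3 between-group design.
import random

participants = [f"P{i:03d}" for i in range(1, 145)]          # hypothetical IDs
conditions = [(text, question)
              for text in ("expository", "narrative")
              for question in ("none", "what", "why")]

random.seed(42)            # any fixed seed makes the assignment reproducible
random.shuffle(participants)

# Deal shuffled participants round-robin so the six cells stay balanced.
assignment = {cond: [] for cond in conditions}
for idx, pid in enumerate(participants):
    assignment[conditions[idx % len(conditions)]].append(pid)

for cond, group in assignment.items():
    print(cond, len(group))   # each cell receives 24 participants here
```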
Materials
The materials used for the pre-test were taken from TEM4 (Test for English Majors-level 4), a large-scale standardized test that evaluates the language proficiency level of English majors in China. The pre-test consisted of four reading passages with a total of 30 multiple-choice questions on reading comprehension.
The materials used for the experiment consisted of one narrative text and one expository text. The expository text discussed implicit personality theories, with attribution theory expanded in detail. The narrative text described a memorable family trip and family members' feelings before and after this trip. Both texts were adapted so that they reached similar difficulty levels. The text features of the two English reading materials, analyzed using Coh-Metrix (3.0) [26], are presented in Table 1. Though there were apparent differences in their total word count and L2 readability, we selected the two passages from the same standardized Test for English Majors in China, whose reliability and validity had been tested by a group of Chinese experts in language testing. Besides, before the formal experiment, we recruited several volunteers at similar English proficiency levels to assess the difficulty level of the two passages. They claimed that these two passages were of similar difficulty for them. Therefore, we adopted these two passages for our study.
Two parallel questions were inserted into each of the two texts for the experimental groups; one was placed in the middle of the text, while the other was placed at the end. Both the texts and the questions were presented in English. For example, the inserted what questions for text 2 were "What is the parents' plan to broaden their children's horizons even though their friends are against it?" and "What did the family see and experience during this trip?" The parallel why questions for this text were "Why did the parents stick to the travel plan to Istanbul?" and "Why is this trip so special for the family?" The embedded questions for text 2 were developed through discussion among a group of experienced language teachers in the university and were aimed at helping the readers recollect individual facts or relational information from what they had read. However, the inserted questions for text 1 were adopted from previous researchers [5]. The two what questions inserted in text 1 were "What are implicit assumptions?" and "What are attributions?". The two parallel why questions included "Why do we make implicit assumptions about other personalities?" and "Why do we make attributions?".
Two assessment tasks were provided for each text: a written recall task and a multiple-choice test. For the written recall task, all participants in the six groups were required to write down as much as they could remember about the text, either in English or Chinese. Allowing participants to use their L1 would rule out the effect of the participants' writing ability in L2. Both texts had ten multiple-choice questions written in English.
Previous research [4,27] has indicated that background knowledge plays a role in comprehension, so we added a simple questionnaire at the end of the reading texts, testing participants' familiarity with the topics on a scale of 1-5.
Procedure
In the eighth week of the fall semester of 2023, a reading test was given to about 200 recruited participants who expressed their willingness to participate in our study. This was convenience sampling. The reading material was taken from College English Test Band 4 (CET4). Based on the results of the test, we excluded about 50 participants whose reading proficiency was either at the top or at the bottom, leaving 150 participants whose reading proficiency was at the same intermediate level. These participants, who volunteered to join our experiment, were randomly assigned into six groups.
These participants all signed the consent form before the experiment started. During their English classroom learning sessions, their teachers gave them instructions on how to finish the reading tasks. After that, each participant received a set of corresponding materials printed on separate sheets. The materials were arranged in the following order: (1) reading passage; (2) topic familiarity questionnaire; (3) written recall task; (4) multiple-choice questions. The two reading texts with six different conditions were distributed randomly among six groups. Participants were told not to read back during the experiment and were given enough time to complete all the tasks. All six groups of participants read one text once. Three groups read text 1, and the other three groups read text 2. Each of the two texts was read under one of the three conditions: no embedded question groups (Groups 1 and 4), embedded what question groups (Groups 2 and 5), and embedded why question groups (Groups 3 and 6).
Upon completion, we noted that all participants finished within 45 min.
Scoring
We used a 100-point grading system. The total score of each participant consisted of two parts: half from the multiple-choice questions and half from written recalls. For multiple-choice questions of the two passages, we divided the 50 points by 10, attributing 5 points to each correct answer learners gave. For example, if a participant gave correct answers to six multiple-choice questions, his total score in this part would be 50/10 × 6 = 30.
To measure participants' scores in the written recall task, we calculated the number of pausal units participants could recall from the reading texts. "A pausal unit is a unit or entity that readers feel the need to pause during normally paced oral reading" [23]. Though there were some other scoring rubrics, such as Meyer's system [25] and Riley and Lee's "unit of analysis" [28], we still used the pausal unit in the present study because it has been demonstrated to be "more efficient and less time-consuming" and the "most consistently used method to codify written recalls" [5].
Two native English speakers were invited to read the two texts out loud and to mark the pausal units in each text. One point was awarded for each pausal unit that was recalled successfully by learners. There were 33 pausal units in text 1 and 29 in text 2.
Since the two raters' reliability was very high (r = 0.95), we let each rater score the pausal units separately. We then added the two raters' scores for each participant and divided the total by 2, thus obtaining the final score for each participant in the written recall task.
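A minimal sketch of this scoring scheme is shown below. The function names and example counts are ours, and the assumption that the recall half is simply the two raters' averaged pausal-unit count (the text does not state whether that count is rescaled to 50 points) is also ours, not the authors'.

```python
# Minimal sketch of the 100-point scoring scheme: 50 points from 10
# multiple-choice items (5 points each) plus a written-recall score taken
# here as the average of two raters' pausal-unit counts (one point per unit).
def mcq_score(correct_answers: int, total_items: int = 10, max_points: float = 50.0) -> float:
    # e.g., 6 correct answers -> 50/10 * 6 = 30 points
    return (max_points / total_items) * correct_answers

def recall_score(rater1_units: int, rater2_units: int) -> float:
    # Average the two raters' independent pausal-unit counts.
    return (rater1_units + rater2_units) / 2

# Hypothetical participant: 6 correct MCQs, raters counted 12 and 14 pausal units.
total = mcq_score(6) + recall_score(12, 14)
print(total)  # 43.0
```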
Results
Before reporting the data on the effects of the text types and question types on L2 reading performance, we conducted a MANOVA on the familiarity ratings for the two reading texts across the six conditions. No significant differences were found among the six groups in terms of familiarity (F(5, 129) = 1.590, p = 0.167).
Effects of Embedded Questions on Learners' Reading Comprehension
A two-way ANOVA was conducted first. Table 2 presents the means and standard deviations of the learners' scores on the expository and narrative texts with different assessment tasks under the three question conditions. For the expository text, the differences among the three groups in the learners' reading comprehension did not reach significance (p > 0.05). For the narrative text, the learners performed much better in reading comprehension with the text inserted with why questions than with the text with no questions or with what questions, and the differences reached statistical significance (p < 0.01). Generally speaking, the participants obtained higher mean scores on the texts with embedded questions. This suggests that embedded questions may have positive effects on readers' reading comprehension, especially when why questions are inserted into narrative texts.
To further understand these effects, we conducted an ANOVA to test the effects of the question types and text types on the learners' reading performance. Table 3 shows that the main effects of both text type (F(1, 138) = 8.80, p = 0.00) and question type (F(2, 138) = 3.81, p = 0.03) reached significance. The interaction effect between text type (expository vs. narrative) and question type was also significant (F(2, 138) = 5.78, p = 0.00). This indicates that text types and question types may have interacted and significantly affected the participants' reading performance. Therefore, a simple effect analysis was required for further interpretation.
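The following minimal sketch shows how a 2 × 3 ANOVA of this kind could be run in Python with statsmodels; the data frame, column names, scores, and formula are hypothetical assumptions, not the authors' actual analysis script.

```python
# Minimal sketch: two-way ANOVA (text type x question type) on hypothetical
# comprehension scores, reporting main effects and the interaction.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical long-format data: one row per participant.
df = pd.DataFrame({
    "score":    [38, 41, 36, 44, 35, 47, 39, 52, 40, 46, 37, 55],
    "text":     ["expository"] * 6 + ["narrative"] * 6,
    "question": ["none", "what", "why"] * 4,
})

model = ols("score ~ C(text) * C(question)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # main effects + interaction
print(anova_table)
```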
Effects of Question Types and Text Types on Learners' Reading Comprehension
Effects of Question Types on Learners' Reading Comprehension
We conducted a simple interaction effect analysis. Table 4 presents the specific effects the question types had on the learners' reading comprehension. It shows that, when text type is not taken as a variable, a significant difference exists only between texts with no questions and texts with why questions (p = 0.03). There was no significant difference in reading comprehension between the texts with no embedded questions and the texts with what questions (p = 0.34), or between the embedded what and why questions (p = 0.48).
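The note to Table 4 indicates that these pairwise contrasts used the Scheffé method. The sketch below shows one way such a contrast could be computed; the group means, sample sizes, and mean-square error are placeholder values, not the study's data.

```python
# Minimal sketch of a Scheffe-style pairwise contrast after a one-way ANOVA
# (e.g., no-question vs. why-question conditions).
from scipy.stats import f as f_dist

def scheffe_pairwise(mean_i, mean_j, n_i, n_j, ms_error, k, df_error, alpha=0.05):
    # Contrast statistic for the difference between two group means.
    f_stat = (mean_i - mean_j) ** 2 / (ms_error * (1 / n_i + 1 / n_j))
    # Scheffe criterion: (k - 1) times the critical F of the omnibus test.
    f_crit = (k - 1) * f_dist.ppf(1 - alpha, k - 1, df_error)
    return f_stat, f_crit, f_stat > f_crit

# Hypothetical example with three question-type conditions (k = 3).
print(scheffe_pairwise(mean_i=37.0, mean_j=43.0, n_i=47, n_j=47,
                       ms_error=120.0, k=3, df_error=138))
```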
Effects of Text Types on Learners' Reading Comprehension
Table 5 shows the results of the simple effect analysis of text type on the learners' reading comprehension. When the adjunct questions were inserted into the expository text, no significant difference was found in the learners' reading comprehension (F(2, 138) = 2.46, p = 0.09). However, when the narrative text was inserted with different questions, a significant difference was found (F(2, 138) = 6.55, p = 0.00). Table 6 provides detailed information: in dealing with the narrative text, the learners with the embedded why questions obtained much higher mean scores (M = 45.85) in the reading comprehension test than the learners with no questions (M = 36.81) or with what questions (M = 37.06). However, when dealing with the expository text, there were few differences among the three groups.
Interactive Effects of Question Types and Text Types on Learners' Reading Comprehension
Table 7 shows the interactive effects between the text types and question types. It indicates that text type had no effect on the learners' reading comprehension when the texts were embedded with no questions (F(1, 138) = 1.88, p = 0.17) or with what questions (F(1, 138) = 0.36, p = 0.55). Positive effects on the learners' reading comprehension were only found in the texts inserted with why questions (F(1, 138) = 17.21, p = 0.00). This might imply that inserting why questions into different types of reading texts helps enhance L2 reading comprehension. However, Table 5 showed that inserting questions into expository texts did not have any positive effects on the learners' reading comprehension (p > 0.05). Thus, we conclude that the positive effects only occur when why questions are inserted into narrative texts.
Discussion
The present study found that adjunct questions had no facilitating effects on the intermediate Chinese EFL readers' reading performance with the expository texts but strong positive effects with the narrative texts. We also found that when why questions were inserted into narrative texts, the positive effect of the adjunct questions only emerged for the written recall measurement and not for the multiple-choice tasks. Significant interactive effects were also found between the embedded question types and text types.
The findings of no significant effects on the Chinese EFL learners' reading performance with the expository text measured by multiple-choice tasks were consistent with prior L2 research [6][7][8][9][10]. One possible explanation is that the processing of the text produced by adjunct questions is redundant with the spontaneous processing of the readers, especially for more proficient readers. The context of expository texts may offer relevant explanations for the adjunct questions, thus making the questions ineffective because the answers were already embedded in such texts.
The finding that inserting why questions into the narrative text elicited better written recalls from the Chinese EFL learners did not align with some previous Spanish studies [3,5] but was in agreement with a previous study in a Chinese setting [10]. One plausible explanation is related to the cognitive skills involved. The cognitive skills required to complete the written recall task and the cognitive processes triggered to answer why questions are similar in that "both involve recognizing, recalling and reorganizing textual information in a coherent manner" [29]. Both written recalls and why questions are open-ended tasks, the completion of which requires information recollection [30].
This finding also refuted the statement that adjunct questions limited the L2 readers' processing of the text. The embedded why questions helped activate the L2 readers' prior knowledge, providing a scaffold to create a cognitive map of the text [5,6,8,14]. The results might also be attributed to the Chinese learners' language learning habits as well as their language proficiency level. Chinese EFL learners today do not think that language learning only involves simple repetition. Rather, they believe that memorization, understanding, practicing, and reviewing all contribute to better understanding and learning a second language. Accordingly, when confronted with a reading passage, Chinese EFL learners tend to memorize as much as they can. The inserted why questions act like anchors for recollecting and reconsidering what they have read and relating it to their prior knowledge. As a result, they tend to be more familiar with the content after this recollecting process and are likely to accomplish the written recall assessment tasks more efficiently.
Apart from the explanation based on the Chinese learners' language learning habits, the positive effects of the embedded why questions could be due to their language proficiency levels. As has been hypothesized in prior research [10,14], the effects of the adjunct questions might vary with the learners' comprehension abilities. For low-level readers, the ability to activate prior relevant knowledge and construct mental representations of passages is relatively limited. The inserted adjuncts, especially the why questions, require readers to understand the text and use their prior knowledge to answer. This would place an extra burden on low-level learners and further exacerbate the difficulties they face in constructing a cognitive map of the text. For high-level readers, the negative effects of the inserted questions have been shown repeatedly in previous research [6,14]. Advanced learners are considered naturally competent at constructing a corresponding representation of the material on their own, so adjunct questions may instead distract them from a correct understanding of the text.
Learners at the intermediate proficiency level already have a certain ability to construct cognitive representations of a text, but they have not yet reached the proficiency of high-level readers in doing so. This makes the adjunct questions work as an anchor. The questions are used as a starting point that allows the readers to strengthen their memory, understand the passage, and complement their reading ability. That is why the embedded why questions positively affected the intermediate-level learners' reading comprehension.
The interactive effects found between the text types and question types might be because different text types provide different ways of processing information. According to the Material Appropriate Processing (MAP) framework, good recall of a text depends largely on the ability to encode both individual-item and relational information. Each text type is supposed to have its information processed in its own way. A manipulation that increases processing difficulty (in the present study, the adjunct questions) is considered to stimulate new ways of processing texts.
The nature of the narrative text and the inserted why questions in our study helped enhance the learners' reading comprehension in the written recall because narrative text encourages the processing of relational information, while the inserted why questions help to stimulate the processing of individual-item information [5,31].
Unlike the embedded why questions, the what questions encouraged the readers to center on surface or text-based information [7]. If the readers were in remedial reading programs and needed to identify specific parts of the text, this might be quite useful, as such questions provide additional support to readers [5][6][7][8][14][31]. In the present study, the reading tasks required the readers to process not only surface information but also deep-structure information. That could explain why inserted what questions could improve learners' reading comprehension to a certain extent, but the difference did not reach a significant level.
Conclusions and Implications
The present research found that adjunct questions have positive effects on Chinese intermediate EFL learners with narrative texts inserted with why questions. The interactive effects of question types and text types have also been identified. The inserted why questions in narrative texts are related to item-specific information focusing on surface or text-based level information while what questions mainly focus on relational information.
Language educators should realize the importance of reading competence and the effects of reading on students' educational achievement. This is especially true in the current and future Chinese L2 context, with classroom teaching time largely reduced. Therefore, for educators, it is important that they train students to become independent and autonomous readers. Textbook writers, who aim to teach learners effective reading skills and to improve learners' independent reading competence, should include different types of reading materials and offer training in other reading techniques to L2 readers. One limitation of this study is that the participants' proficiency level was rated using a localized test, CET 4, instead of an internationally recognized test such as TOEFL or IELTS. Another limitation is that the two texts we selected for our study differ not only in text type, but also in themes, subject, word count, and readability. These differences might affect the results. Future studies could center around investigating other types of adjunct responses, other test types, the effect of topic familiarity, and using other methods (such as think-aloud or eye-tracking techniques) to explore learners' cognitive processes. In addition, when selecting materials, issues such as the text type, readability, theme, and subject should all be considered.
Table 1.
Text features of the two passages.
Note: T1: Expository text; T2: Narrative text; Type-token ratio: The number of unique words divided by the number of tokens of the words to indicate lexical diversity. Readability: Calculated based on content word overlap, sentence syntax similarity, and lexical frequency to indicate L2 text difficulty.
Table 2.
Descriptive statistics by text and group conditions.
Note: T1: expository text; T2: narrative text; Control: reading with no embedded questions; Treatment 1: reading with embedded what questions; Treatment 2: reading with embedded why questions; N: number of participants.
Table 3.
Tests of effects between text types and question types.
Table 4.
Effects of question types on reading comprehension. Note: Scheffé method used to make the comparison; Type 1: text with no questions; Type 2: text embedded with what questions; Type 3: text embedded with why questions.
Table 5.
Simple effects of text types on reading comprehension.
Table 6.
Interaction effect between text types and question types.
Table 7.
Interactive effects of text types and question types.
p38 MAP Kinase Mediates Apoptosis through Phosphorylation of BimEL at Ser-65*
The stress-activated c-Jun N-terminal protein kinase (JNK) and p38 mitogen-activated protein (MAP) kinase (p38) regulate apoptosis induced by several forms of cellular insults. Potential targets for these kinases include members of the Bcl-2 family proteins, which mediate apoptosis generated through the mitochondria-initiated, intrinsic cell death pathway. Indeed, the activities of several Bcl-2 family proteins, both pro- and anti-apoptotic, are controlled by JNK phosphorylation. For example, the pro-apoptotic activity of BimEL, a member of the Bcl-2 family, is stimulated by JNK phosphorylation at Ser-65. In contrast, there is no reported evidence that p38-induced apoptosis is due to direct phosphorylation of Bcl-2 family proteins. Here we report evidence that sodium arsenite-induced apoptosis in PC12 cells may be due to direct phosphorylation of BimEL at Ser-65 by p38. This conclusion is supported by data showing that ectopic expression of a wild type, but not a non-phosphorylatable S65A mutant of BimEL, potentiates sodium arsenite-induced apoptosis and by experiments showing direct phosphorylation of BimEL at Ser-65 by p38 in vitro. Furthermore, sodium arsenite induced BimEL phosphorylation at Ser-65, which was blocked by p38 inhibition. This study provides the first example whereby p38 induces apoptosis by phosphorylating a member of the Bcl-2 family and illustrates that phosphorylation of BimEL on Ser-65 may be a common regulatory point for cell death induced by both JNK and p38 pathways.
Apoptosis plays a critical role in the proper development of the nervous system and the maintenance of homeostasis in the adult brain. Inappropriate apoptosis may contribute to various neurodegenerative conditions including stroke, epilepsy, and Parkinson disease. Furthermore, many environmental toxicants exert neurotoxicity by inducing apoptosis. For example, heavy metals, including arsenic, lead, mercury, and lithium, all induce neuronal apoptosis (1)(2)(3)(4)(5). Hence, elucidation of mechanisms that regulate neuronal apoptosis may provide new insights concerning strategies to counteract apoptosis associated with neurodegenerative diseases or those induced by neural toxicants. JNK and p38 are stress-activated MAP kinases that are preferentially activated by cell stress-inducing signals, including oxidative stress, environmental stress, and toxic chemical insults (6-8). Sustained activation of JNK or p38 is implicated in the induction of many forms of neuronal apoptosis in response to a variety of cellular injuries (1, 8-15). Apoptosis induced by cellular stress is often mediated through the mitochondria-initiated cell death pathway (16). The Bcl-2 family proteins regulate this process by modulating the membrane potential and function of mitochondria. There has been intense interest in understanding how both pro- and anti-apoptotic kinase-signaling pathways regulate the function of Bcl-2-related proteins. For example, the activities of many Bcl-2 family proteins, including BAD, Bcl-2, and Bcl-xL, are regulated by protein phosphorylation (12,(17)(18)(19)(20)(21)(22)(23)(24)(25)(26).
Bim is a BH3 domain-only pro-apoptotic protein and a member of the Bcl-2 family with three major forms generated by alternative splicing: BimEL, BimL, and BimS (27,28). BimEL is the most abundant isoform in neurons (29,30). Recent studies suggest that JNK induces apoptosis by directly phosphorylating BAD, BimEL, and BimL (31)(32)(33)(34)(35)(36). In addition, JNK also phosphorylates and inactivates the anti-apoptotic Bcl-2 and Bcl-xL (12,20,37,38). In contrast to extensive studies concerning the regulation of Bcl-2 family members by JNK, there is no evidence that p38 regulates apoptosis through direct phosphorylation of Bcl-2 family proteins.
The objective of this study was to determine whether p38-induced apoptosis is dependent on p38-catalyzed phosphorylation of BimEL, using arsenite-induced apoptosis in PC12 cells as an in vitro model. Our data suggest that sodium arsenite induces apoptosis by a mechanism that depends on p38 activity and BimEL function. Significantly, p38 was both necessary and sufficient to induce BimEL phosphorylation at Ser-65, a known JNK phosphorylation site that plays a key role in JNK-induced apoptosis (33,35). These results define a novel mechanism for p38-mediated apoptosis and identify Bim phosphorylation at Ser-65 as a common mechanism underlying apoptosis induced through the JNK and p38 signaling pathways. * This work was supported by Grants ES 012215 and NS44069 (to Z. X.) and NS421021 (to A. B.).
Cell Culture and Transient Transfection-Rat pheochromocytoma PC12 cells were cultured in Dulbecco's modified Eagle's medium supplemented with 10% horse serum, 5% fetal bovine serum, 1% glutamine, and 0.5% penicillin-streptomycin. The cells were maintained in plates coated with rat tail collagen (Biomedical Technologies, Inc., Stoughton, MA) at 37°C with 7.5% CO2. Transient transfection was performed using Lipofectamine 2000 (Invitrogen) in regular growth medium. HEK293 cells were cultured in Dulbecco's modified Eagle's medium supplemented with 10% fetal bovine serum and 0.5% penicillin-streptomycin. Cells were maintained at 37°C with 5% CO2. Transient transfection was performed using FuGENE 6 (Roche Applied Science) according to the manufacturer's instructions. For transient transfections, both PC12 and HEK293 cells were plated a day before transfection onto plates coated with poly-D-lysine (Sigma).
Western Analysis and Protein Kinase Assay-Cell lysates were prepared as described previously (43). Thirty micrograms of protein were used for Western analysis. For the in vitro kinase assay measuring p38 phosphorylation of Bim, 250 ng of recombinant, active p38α protein and 1 μg of recombinant BimEL protein or BimEL 3A protein were incubated with 100 μM ATP in kinase assay buffer as described by the protocol from Upstate Biotechnology, Inc. The recombinant BimEL and BimEL 3A and p38α were from Upstate Biotechnology, Inc. The kinase reaction was carried out in 50 μl of total volume at 37°C for 30 min. The kinase reaction was stopped by adding SDS loading buffer and analyzed for Bim phosphorylation at Ser-65 by Western blotting using the phospho-Ser-65 Bim antibody (Upstate Biotechnology, Inc.).
Cell Viability and Apoptosis Assays-Cell viability was measured by MTT metabolism (44). Apoptosis was determined by nuclear condensation and/or fragmentation after staining with the DNA dye Hoechst 33258 (bis-benzimide) (44). The transfection efficiency of PC12 cells was ~30-40%. To facilitate quantification of apoptosis in transfected cells, cells were cotransfected with eGFP or DsRed2 as a marker for transfection. At least 2,000 non-transfected or 1,000 transfected (eGFP+ or Red2+ immunostaining) cells were counted for each data point. To obtain unbiased counting, slides were coded and the cells were scored blindly without prior knowledge of treatment.
Calf Intestinal Alkaline Phosphatase Treatment-Cell lysates to be treated with phosphatases were prepared in lysis buffers lacking phosphatase inhibitors (45). Ten units of calf intestinal alkaline phosphatase (Fermentas, Inc.) and 10 mM MgCl2 were added to 150 μg of protein lysates, and the mixture was incubated at 37°C for 60 min. The reaction was stopped by adding SDS loading buffer.
Data Analysis-Data were from at least three independent experiments. Statistical analysis of data was performed using one-way analysis of variance (Figs. 2-4 and 7; error bars represent S.E.; ***, p < 0.001).
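As an illustration of the one-way analysis of variance mentioned here, the following minimal Python sketch compares three hypothetical treatment groups with scipy; the group labels and values are placeholders and do not reproduce the study's measurements.

```python
# Minimal sketch: one-way ANOVA across three hypothetical treatment groups,
# e.g., normalized MTT viability readings from independent experiments.
from scipy.stats import f_oneway

control     = [0.98, 1.02, 0.95, 1.01]
arsenite    = [0.55, 0.60, 0.52, 0.58]
arsenite_sb = [0.85, 0.88, 0.82, 0.86]  # arsenite plus a p38 inhibitor

f_stat, p_value = f_oneway(control, arsenite, arsenite_sb)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```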
Sodium Arsenite Induces Apoptosis in PC12 Cells-PC12 cells were treated with 0-50 μM sodium arsenite, and cell viability was assayed at 0, 8, 24, and 48 h using the MTT metabolism assay (Fig. 1A). Sodium arsenite reduced cell viability in a dose- and time-dependent manner. To determine whether the loss of cell viability was due to apoptosis, PC12 cells were treated with 0 or 15 μM sodium arsenite for 24 h and stained with the DNA dye Hoechst to visualize nuclear morphology. Sodium arsenite caused morphological changes characteristic of apoptosis, including nuclear condensation and fragmentation (Fig. 1B). Induction of the apoptotic phenotype by sodium arsenite was dependent on the time of incubation, with increasing apoptosis over time (Fig. 1C).
p38 Activation Is Required for Sodium Arsenite-induced Apoptosis in PC12 Cells-Activation of p38 is required for sodium arsenite-induced apoptosis in primary cultured cortical and cerebellar neurons and in non-neuronal cells (1,2,46). To evaluate the contribution of p38 MAP kinase to arsenite-induced apoptosis in PC12 cells, p38 activity was monitored by Western analysis using an anti-phospho-p38 antibody that specifically recognizes phosphorylated and activated p38 (Fig. 2, A and B). Treatment with sodium arsenite at 15 μM, or higher concentrations, induced p38 phosphorylation (Fig. 2A), indicative of p38 activation. Activation of p38 was detectable 1 h after treatment, reached a maximum at 8 h, and persisted for at least 24 h (Fig. 2B).
To determine whether p38 activation contributes to sodium arsenite-induced apoptosis, PC12 cells were transiently transfected with a dominant interfering mutant of MAP kinase kinase 3 (dnMKK3) to selectively block p38 activation. MKK3 is an upstream kinase that activates and phosphorylates p38 (47). Expression of dnMKK3 significantly inhibited apoptosis induced by 15 μM sodium arsenite (Fig. 2C; ***, p < 0.001). In addition, treatment with 10 μM SB202190 or SB203580, inhibitors for p38, also protected PC12 cells from sodium arsenite toxicity (Fig. 2D; ***, p < 0.001 (and data not shown)). These data suggest that p38 is required for apoptosis in PC12 cells caused by sodium arsenite treatment.
Sodium Arsenite-induced Apoptosis and p38 Activation Are Mediated through Oxidative Stress-Because sodium arsenite can induce oxidative stress (48-50) and p38 is activated by oxidative stress in some systems (6-8), we considered the possibility that arsenite stimulation of p38 activity and subsequent apoptosis in PC12 cells may be due to oxidative stress. Indeed, sodium arsenite-induced p38 activation was attenuated by treatment with either of two antioxidants, GSH or NAC (Fig. 3A). Furthermore, GSH and NAC protected PC12 cells from sodium arsenite-induced cell apoptosis (Fig. 3, B and C). These data suggest that sodium arsenite causes oxidative stress in PC12 cells, which leads to p38 activation and apoptosis.
BimEL Protein Is Required for Sodium Arsenite-induced Apoptosis-Because Bcl-2 family members, including BimEL, are implicated in mitochondria-initiated cell death, we examined the effect of reducing BimEL expression on arsenite-mediated apoptosis (35). Transient transfection of PC12 cells with a plasmid DNA encoding a BimEL hairpin RNA (35) almost completely knocked down BimEL protein expression 2 days after transfection (Fig. 4A). This decrease in Bim protein significantly protected cells from apoptosis induced by constitutive p38 activation (Fig. 4B, caMKK3+p38α) and sodium arsenite treatment (Fig. 4C; ***, p < 0.001). As a control, we transfected PC12 cells with the empty vector control, an shRNA to luciferase (shLucif), shRNA to GFP, or scrambled PDE2 shRNA, all of which are driven by the human U6-RNA polIII promoter. None of the control shRNAs had any effect on Bim protein expression (data not shown) or on sodium arsenite-induced apoptosis (Fig. 4C). The effectiveness of shRNA to GFP was confirmed by its suppression of the expression of a co-transfected GFP, whereas the effectiveness of shRNA to luciferase was confirmed by luciferase assay (data not shown). These data suggest that BimEL protein is required for p38-mediated, sodium arsenite-induced apoptosis.
Sodium Arsenite Stimulation of BimEL Phosphorylation Depends on p38 but Not JNK Activity-To investigate the relationship between BimEL and p38 in sodium arsenite-induced apoptosis, we determined whether p38 phosphorylates BimEL, thereby regulating its pro-apoptotic activity. HEK293 cells were transiently transfected with a BimEL expression plasmid, and p38 was activated by co-transfecting the cells with constitutively active MKK3 (caMKK3) and wild-type p38α. The cloning vector was used as a negative control. HEK293 cells were used instead of PC12 cells in this experiment because of higher transfection efficiency. Furthermore, a significant number of PC12 cells transfected with BimEL and caMKK3+p38α underwent apoptosis, precluding sufficient BimEL protein accumulation for this analysis. Activation of p38 by constitutive expression of caMKK3+p38α caused a reduced electrophoretic mobility (phosphorylation gel shift) of BimEL on SDS-polyacrylamide gels, indicative of BimEL phosphorylation (Fig. 5A). This gel shift was eliminated when cell lysates were pretreated with calf intestinal alkaline phosphatase, demonstrating that the gel shift of BimEL is due to its phosphorylation. These data indicate that p38 activation is sufficient to cause BimEL phosphorylation in transfected cells.
FIGURE 4. Sodium arsenite-induced PC12 cell apoptosis requires BimEL. A, expression of shBimEL reduces BimEL protein levels. PC12 cells were transfected with a BimEL RNA interfering plasmid (shBimEL) or an empty vector control. Twenty-four hours later, the cells were treated with 15 μM sodium arsenite for another 24 h. Cell lysates were analyzed by Western blotting using an anti-BimEL antibody. β-Actin was used as a loading control. To prevent protein degradation due to caspase activation, the cells were pretreated with 10 μM Z-VAD-fmk, a pan-caspase inhibitor, for 1 h before the addition of sodium arsenite. B, expression of shBimEL inhibits apoptosis induced by constitutive p38 activation. PC12 cells were transfected with 0-1.5 μg of shBimEL plasmid DNA ± co-transfection of constitutively active (ca) MKK3 and wild-type p38α. Apoptosis in transfected cells (eGFP) was scored 2 days post-transfection. C, expression of shBimEL inhibits sodium arsenite-induced apoptosis. PC12 cells were transfected with plasmid DNA encoding an empty vector control, shBimEL, shGFP, shLuciferase (shLucif), or a scrambled shPDE2. Apoptosis in transfected cells (Red2+) was scored 2 days post-transfection. At least 1000 transfected cells were counted for each data point. ***, p < 0.001.
To determine whether the endogenous BimEL protein is phosphorylated by p38 after sodium arsenite treatment, PC12 cells were treated with 15 μM sodium arsenite for 8 h, and cell lysates were analyzed by Western blotting using an anti-Bim antibody. Sodium arsenite caused a gel shift of the endogenous BimEL (Fig. 5B). This shift was abolished by treatment with SB203580, a specific inhibitor for p38 (51,52).
Because JNK also phosphorylates BimEL, we performed additional experiments to exclude the possibility that the effect of SB203580 on BimEL phosphorylation is due to inhibition of JNK. SB203580 did not block sodium arsenite-induced c-Jun phosphorylation (Fig. 5C), indicating that SB203580 does not interfere with JNK signaling under these conditions. Furthermore, although the JNK inhibitor SP600125 attenuated sodium arsenite-induced c-Jun phosphorylation (Fig. 5E), it did not inhibit BimEL phosphorylation (Fig. 5D). This JNK inhibitor also did not protect PC12 cells from sodium arsenite-induced apoptosis (data not shown). Based on these observations, we conclude that p38 activation is sufficient to induce BimEL phosphorylation and that p38 is required for BimEL phosphorylation after sodium arsenite treatment.
p38 Phosphorylates BimEL at Serine 65 in Vitro and in PC12 Cells-BimEL is phosphorylated by JNK at Ser-65, which regulates its pro-apoptotic activity (33,35). Because JNK and p38 belong to the same family of MAP kinases and share some common substrates, we hypothesized that p38 may also phosphorylate BimEL at Ser-65. To test this hypothesis, we utilized an antibody that only recognizes BimEL when it is phosphorylated at Ser-65 (35). BimEL was phosphorylated at Ser-65 when cells were co-transfected with caMKK3+p38α to activate p38 signaling, and this phosphorylation was abolished when cells were treated with the p38 inhibitor SB203580 (Fig. 6A). These results are consistent with the hypothesis of Ley et al. (53) that p38 may phosphorylate BimEL at Ser-65. They also led us to ask whether sodium arsenite induces phosphorylation of the endogenous BimEL at Ser-65 in PC12 cells and if this phosphorylation is mediated by p38 MAP kinase. To address these questions, PC12 cells were treated with 0-30 μM sodium arsenite for 8 h in the presence or absence of the p38 inhibitor SB203580 (Fig. 6B). Sodium arsenite induced BimEL phosphorylation at Ser-65, which was abolished by treatment with SB203580.
To determine whether p38 directly phosphorylates BimEL at Ser-65, we performed an in vitro p38 kinase assay using a recombinant, active p38α isoform and a recombinant BimEL protein. A recombinant BimEL mutant protein that has S55A/S65A/S73A (BimEL 3A) was used as a negative control for the in vitro kinase assay. The kinase reaction mixture was then analyzed by Western blotting for BimEL Ser-65 phosphorylation using the phospho-Ser-65-BimEL antibody. Purified p38 phosphorylated BimEL in vitro (Fig. 6C). Together, these data indicate that p38 directly phosphorylates BimEL at Ser-65 and mediates sodium arsenite-induced BimEL phosphorylation at Ser-65 in PC12 cells.
FIGURE 5. p38 mediates Bim EL phosphorylation. A, p38 activation is sufficient to induce Bim EL phosphorylation. HEK293 cells were co-transfected with plasmid DNA encoding Bim EL and a vector control (Vector) or constitutively active (ca) MKK3+p38α. Cell lysates were prepared 2 days later and treated with (+) or without (-) calf intestine phosphatase. B, sodium arsenite (15 μM) induces phosphorylation of endogenous Bim EL in PC12 cells, and SB203580 inhibition of p38 MAP kinase blocks this phosphorylation. PC12 cells were pretreated for 1 h with 10 μM SB203580 (SB) or vehicle control (V) before the addition of 0 or 15 μM sodium arsenite for 8 h. Cell lysates were analyzed by Western blotting using an anti-Bim antibody. β-Actin was used as a loading control. C, SB203580 (SB) does not inhibit c-Jun phosphorylation. PC12 cells were treated with SB203580 and sodium arsenite as described for B. Cell lysates were analyzed by Western blotting using an antibody that recognizes phosphorylated (p-) and activated c-Jun. Total c-Jun and β-actin were used as loading controls. D, SP600125, a specific JNK inhibitor, does not block arsenite-induced Bim EL phosphorylation. PC12 cells were pretreated for 1 h with vehicle control or 10 μM SP600125 (SP) before the addition of 0 or 15 μM sodium arsenite. Western analysis was performed as described for B. E, SP600125 is effective at blocking c-Jun phosphorylation. PC12 cells were treated as described for D. Western analysis was performed as described for C.
Ser-65 Is Required for Bim EL to Mediate Sodium Arsenite-induced Apoptosis-To determine whether Ser-65 phosphorylation regulates the apoptotic activity of Bim EL in sodium arsenite-induced apoptosis, we transiently transfected PC12 cells with a wild type and a S65A mutant of Bim EL in which the serine 65 residue is replaced by a non-phosphorylatable alanine. One day after transfection, the cells were treated with 15 μM sodium arsenite or vehicle control for 24 h. In untreated cells, the wild type (but not the S65A mutant Bim EL ) induced apoptosis (Fig. 7A), consistent with previous reports using cultured cerebellar granule neurons (33,35). Furthermore, the wild type (but not the S65A mutant Bim EL ) potentiated PC12 cell apoptosis after sodium arsenite treatment. This is also consistent with the report that overexpression of Bim EL potentiates insulin withdrawal-induced apoptosis in cerebellar granule neurons (35). These data suggest that Ser-65 phosphorylation is important for Bim EL to mediate sodium arsenite-induced apoptosis.
DISCUSSION
The objective of this study was to determine whether p38 MAP kinase induces apoptosis by directly phosphorylating and modulating the activity of the Bcl-2 family protein Bim EL . Using sodium arsenite-induced apoptosis in PC12 cells as a model, we have discovered that p38 activation is sufficient to induce Bim EL phosphorylation at Ser-65 and is required for Ser-65 phosphorylation of the endogenous Bim EL after sodium arsenite treatment. Furthermore, we have found that p38 directly phosphorylates Bim EL at Ser-65 in vitro. Expression of the wild type (but not the S65A mutant of Bim EL ) potentiated sodium arsenite-induced apoptosis. Taken together with existing evidence that a phospho-mimic mutant of Bim EL at Ser-65 (S65E Bim EL ) is a more potent inducer of apoptosis than the wild-type Bim EL (33,35), our data define a novel mechanism for p38 induction of apoptosis that is mediated through phosphorylation of Bim EL at Ser-65 and Bim EL activation.
The stress-activated JNK and p38 MAP kinases have been implicated in the induction of apoptosis in response to many forms of apoptotic signals. There is considerable interest in understanding how the pro-apoptotic kinase-signaling pathways regulate the activity of Bcl-2 family proteins, which are key components of the cell death machinery. JNK induces apoptosis by directly phosphorylating and activating pro-apoptotic BAD, Bim EL , and Bim L (31)(32)(33)(34)(35)(36). In other cases, JNK phosphorylates and inactivates anti-apoptotic Bcl-2 and Bcl-xL (12,20,37,38). In contrast, relatively little is known regarding p38-induced phosphorylation and regulation of the Bcl-2 family proteins. Although p38 may be involved in the phosphorylation of Bcl-xL, BAD, and Bim EL , the evidence has been limited and indirect. For example, activation of the p38-signaling pathway mediates tumor necrosis factor-induced Bcl-xL phosphorylation; however, the nature of the kinase responsible for this phosphorylation, the site, and the functional consequence of this phosphorylation are unknown (54). p38 has also been implicated in increasing or decreasing BAD phosphorylation at Ser-112, but these events are indirect via p38 regulation of other intermediate kinases or phosphatases (55,56). Although it was stated that p38 can phosphorylate Bim EL at Ser-65 in vitro (53), no actual data were shown. Data reported in our study suggest that p38 MAP kinase induces apoptosis by directly phosphorylating Bim EL at Ser-65. To our knowledge, this is the first evidence that p38 MAP kinase regulates apoptosis by directly phosphorylating and regulating the activities of a Bcl-2 family protein.
FIGURE 6. p38 MAP kinase phosphorylates Bim EL at Ser-65. A, activation of p38 signaling induces Bim EL phosphorylation at Ser-65. HEK293 cells were co-transfected with plasmid DNA encoding Bim EL and a vector control (V) or caMKK3+p38α. Twenty-four hours after transfection, the cells were treated with 10 μM SB203580 or vehicle control for another 24 h. B, sodium arsenite induces Bim EL phosphorylation at Ser-65 in a p38-dependent manner. PC12 cells were preincubated with 10 μM SB203580 or vehicle control for 1 h and then treated with 0-30 μM sodium arsenite for 8 h. C, p38 is sufficient to phosphorylate Bim EL at Ser-65 in vitro. A recombinant wild-type Bim EL protein or a Bim EL 3A mutant protein (S55A/S65A/S73A) was incubated with a purified, active p38α together with ATP in an in vitro kinase assay. The cell lysates from A and B or the in vitro kinase reaction product from C were analyzed for Bim EL phosphorylation at Ser-65 by Western blotting using an antibody specific to Bim EL phosphorylated at Ser-65. Total Bim EL was used as a loading control.
Although JNK and p38 are both proline-directed MAP kinases and have been implicated in many different forms of apoptosis, downstream targets that mediate their apoptotic activity have not been completely elucidated. JNK-induced neuronal apoptosis requires c-Jun (1,9,12,14,15,61,62), which is not a substrate of p38. Bcl-2 is phosphorylated and inactivated by JNK (12,20,23,63,64); however, it does not seem to be directly phosphorylated by p38. Here we suggest Bim EL phosphorylation at Ser-65 as a convergent regulatory point for regulation of apoptosis by JNK and p38.
Our data using RNA interference technology suggest a critical role for Bim EL in sodium arsenite-induced apoptosis in PC12 cells. A recent report (65) suggests that sodium arsenite-induced apoptosis is largely unaffected in cortical neurons prepared from Bim knock-out mice. Although the reason for this apparent discrepancy is currently unclear, it is possible that sodium arsenite may induce apoptosis by different mechanisms in post-mitotic cortical neurons than in proliferating PC12 cells. Alternatively, it is also possible that there is developmental compensation by other BH3-only Bcl-2 family proteins in the Bim knock-out mice. The fact that sodium arsenite induces expression of Bim proteins in cortical neurons (65) is consistent with the notion that Bim EL may play an important role in sodium arsenite-induced apoptosis in cortical neurons from the normal brain.
In conclusion, our data demonstrate that sodium arsenite treatment induces oxidative stress in PC12 cells and causes sequential activation of p38 MAP kinase, phosphorylation of Bim EL on Ser-65, and apoptosis (Fig. 7B). These data identify a novel mechanism by which p38 activation induces phosphorylation of the BH3-only pro-apoptotic Bim EL protein at Ser-65, thereby leading to the induction of apoptosis. | 5,546.8 | 2006-09-01T00:00:00.000 | [
"Biology"
] |
An Efficient Framework for Adequacy Evaluation through Extraction of Rare Load Curtailment Events in Composite Power Systems
Abstract: With the growing robustness of modern power systems, load curtailment events are becoming increasingly rare. Hence, the simulation of these events constitutes a challenge in the assessment of adequacy indices. Due to the rarity of load curtailment events, the standard Monte Carlo simulation (MCS) estimator of adequacy indices is not practical. Therefore, a framework based on the enhanced cross-entropy-based importance sampling (ECE-IS) method is introduced in this paper for computing the adequacy indices. The framework comprises two stages. Using the proposed ECE-IS method, the first stage identifies the samples, or states, of the nodal generation and load that are most significant for the adequacy index estimators. In the second stage, the density of the input variables conditional on the load curtailment domain, obtained in the first stage, is used to compute the nodal and system adequacy indices. The performance of the ECE-IS method is verified through a comparison with the standard MCS method and with recent rare-event simulation techniques from the literature. The results confirm that the proposed method provides an accurate estimate of the nodal and system adequacy indices (loss of load probability (LOLP), expected power not supplied (EPNS)) with an appropriate convergence value and low computation time.
Introduction
Power system reliability evaluation plays a crucial role in the decision-making process of power system development planning. The reliability evaluation process invariably involves consideration of adequacy and security concepts [1,2]. Security is defined as "the measure of how an electric power system can withstand sudden disturbances such as electric short circuits or unanticipated loss of system components", while adequacy is defined as "a measure of the ability of a bulk of the surrogate model. Therefore, the accuracy of the surrogate model needs to be ensured to avoid adding uncertainty to the already existing input uncertainty. The main disadvantage of this technique is the difficulty of measuring the impact of the modeling errors on reliability results. Compared with the abovementioned methods, sampling-based methods have the advantages of being insensitive to the complexity of limit-state functions, avoiding errors from approximations of the limit-state function, and being straightforward to apply. Thus, more efficient sampling techniques for overcoming the MCS computation burden are adopted in this paper.
In the context of reducing the number of states that need to be evaluated, variance reduction techniques make it possible to extract the set of states that make a significant contribution to the evaluation of the adequacy indices. Since the squared coefficient of variation of the MCS estimate is inversely proportional to the failure probability [5], variance reduction methods [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36] have been developed to reduce the variance of the MCS estimate by generating samples that explore the rare load curtailment events, thereby shortening the computation time needed to obtain an accurate estimate of the adequacy indices. The term "variance reduction" covers various techniques, such as subset sampling [16], importance sampling [17][18][19], control variates [27], antithetic variates [28], stratified sampling [29], line sampling [30,31], and directional sampling [32]. For the sake of conciseness, subset simulation (SS) and importance sampling (IS) are reviewed in this paper, as they are the most widely applied variance reduction approaches in adequacy studies. SS is based on splitting the failure domain into a series of partial failure domains. This makes it possible to describe the probability of the failure event as a product of conditional probabilities of the partial failure events. The main advantage of SS is its capability to handle complex limit-state functions (e.g., nonlinear, with possibly multiple failure regions). On the other hand, SS has some drawbacks. First, the variance estimator is not directly calculated by an analytical formula, as in the MCS and IS techniques, but must be evaluated by repetition. Second, even though SS provides a variance reduction compared to MCS, the number of samples needed to achieve convergence is larger than that needed with IS techniques [16]. The IS techniques propose an alternate sampling density, called the ISD, which is the density of the input variables conditional on the failure domain. The optimal selection of the ISD can result in zero variance of the estimate of the failure probability. However, in practice, sampling from the theoretically optimal ISD is not possible, because it requires knowing the failure probability and the failure domain in advance. To overcome this problem, the cross-entropy [33,34] and sequential importance sampling (SIS) [35,36] techniques were applied to approximate the optimal ISD in a sequential manner.
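As a concrete, self-contained illustration of the importance-sampling idea (not taken from the paper's implementation), the short Python sketch below estimates a small normal tail probability with crude Monte Carlo and with a shifted proposal density standing in for the ISD; all names and values here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
threshold = 4.0                      # rare event: X > 4 for X ~ N(0, 1)
p_true = 3.167e-5                    # reference value of P(X > 4)

# Crude Monte Carlo: sample from the original density.
x = rng.standard_normal(N)
p_mc = np.mean(x > threshold)

# Importance sampling: sample from a proposal shifted into the failure region.
shift = threshold                    # proposal density N(4, 1)
y = rng.normal(loc=shift, scale=1.0, size=N)
# Likelihood ratio W(y) = original density / proposal density.
log_w = (-0.5 * y**2) - (-0.5 * (y - shift)**2)
w = np.exp(log_w)
p_is = np.mean((y > threshold) * w)

print(f"crude MC estimate: {p_mc:.2e}")
print(f"IS estimate:       {p_is:.2e}  (reference {p_true:.2e})")
```

With the same number of samples, the IS estimate is typically far closer to the reference value, which is exactly the behaviour exploited by the adequacy framework described here.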
The CE method determines the ISD iteratively by defining a sequence of more frequent events (intermediate failure events) in several probability spaces that gradually approach the target theoretical ISD (the small failure event). The densities of the intermediate failure events are chosen from a parametric family of densities. Typically, the densities are chosen from the same family as the density of the input random variables, and the initial intermediate sampling density is chosen as the original density of the random variables. For each intermediate failure event k, the CE method identifies the parameter of the chosen density model by minimizing the KL divergence between the optimal ISD of the k-th intermediate failure event and the chosen probability density. The optimal ISD of the k-th intermediate failure event is defined based on the intermediate failure threshold (ζ_k ≥ 0), which is estimated such that a fraction (θ ∈ [0.01, 0.1]) of the limit-state function values of the samples from the fitted parametric density obtained at the previous sampling step lie beyond this threshold ζ_k. Starting from an initial sampling density, the density parameter updating is executed until the threshold ζ_k falls below zero (i.e., until at least a fraction θ of the limit-state function values of the samples fall in the actual failure domain). In this case, the target optimal ISD is approximated well enough by the current parametric density. One advantage of the CE-IS approach is that analytical updating formulas can be derived for the density parameters when dealing with probability densities belonging to the natural exponential family. Another advantage is that, similarly to the standard MCS, the estimation error is directly controlled using the estimator of the variance [34]. However, the major difficulty is to construct efficient intermediate densities in the adaptive sampling process so as to approach the target optimal ISD. This paper presents an improved version of the CE-IS incorporating two enhancements. The first is a new updating scheme for the parameter of the intermediate density. In the proposed method, the indicator function of the intermediate failure events is defined by a smooth approximation function instead of the step function used in the traditional CE-IS method. This allows all the samples from the intermediate sampling levels to be exploited in the density parameter updating, contrary to the traditional CE-IS method, which uses only a small portion of the samples. In addition, a smooth shifting of the optimal ISD of the intermediate failure events towards the target optimal ISD of the small failure event is achieved. This effective use of the intermediate samples leads to a better estimate of the density parameter and hence to a smaller sampling error in the corresponding probability estimate. Second, using as the stopping criterion the coefficient of variation of the weights of the smooth approximation of the optimal ISD of the intermediate failure events with respect to the target optimal ISD improves the robustness of the method's convergence. These modifications contribute to obtaining an accurate optimal ISD of the nodal generations and loads, so that the nodal and system adequacy indices are computed accurately. We compare the performance of the proposed method to the traditional CE-IS and to recently proposed techniques in the literature, such as sequential importance sampling and subset simulation. The paper is structured as follows.
After a brief introduction, Section 2 illustrates the principles of adequacy assessment in power systems. Moreover, the mathematical objective function and constraints of the optimal power flow (OPF) algorithm are defined. Section 3 describes the framework of adequacy indices evaluation based on the proposed ECE-IS method. In addition, the mathematical description of the ECE-IS is presented. Section 4 depicts the case study and numerical results. The results discussion is explained in Section 5. Section 6 outlines the main findings.
Principles of Adequacy Evaluation in Power Systems
For adequacy evaluation purposes, a bulk power system model is normally represented by a group of areas or market nodes connected by transmission lines. Then, the uncertainties of the total available generation capacity and the total required load are described for each area or node. The available capacity of the generators in each node can be defined by a binomial distribution. Figure 1 shows the binomial PDF for the failure of m generators in a power plant comprising y generators with the same probability of failure (q = 5%). The probability of failure is calculated using Equation (1):

P(m) = (y choose m) q^m (1 − q)^(y − m), m = 0, 1, . . . , y. (1)
For a large number of generators, according to the de Moivre-Laplace theorem, the binomial distribution can be approximated by a Gaussian one. Such an approach is adopted in this paper. A Gaussian distribution is also used to describe the aggregated load connected to each node. In order to calculate realistic adequacy indices for each node, it is necessary to take into account both the load curtailment strategy and the physical constraints [37,38]. For each sample within the simulation method, a power flow model must be solved to detect the state of the transmission system. If any transmission lines are outside their loading limits, an OPF algorithm is executed to apply corrective actions (e.g., generation rescheduling and/or load curtailment) by solving an optimization problem. The objective of the optimization problem is to minimize the total amount of load curtailment, constrained by the operating limits of the generating units and transmission circuits and by the power flow equations. As there are several procedures in the evaluation of a state, a linear representation of the power flow equations is often adopted in HL2 adequacy studies to significantly decrease the computational effort [39]. This representation cannot analyze the impact of bus voltages and reactive power on system adequacy. The simplification is acceptable, since adequacy studies mainly deal with long-term power system analysis [16]. Hence, a DC-power-flow-based optimization method is used, because it is simpler (being a linear model) and computationally faster than AC power flow.
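A minimal numerical sketch of this nodal generation model is given below; the plant size, unit rating, and outage probability are illustrative values of our own choosing, and Equation (1) is the binomial law quoted above.

```python
import numpy as np
from scipy import stats

# Illustrative plant data (not taken from the paper): y identical units,
# each unavailable with probability q.
q, y, unit_mw = 0.05, 20, 50.0

m = np.arange(y + 1)                                  # number of units on outage
p_binom = stats.binom.pmf(m, y, q)                    # Equation (1)

# de Moivre-Laplace: for large y the binomial law is close to a normal law
mu, sigma = y * q, np.sqrt(y * q * (1 - q))
p_gauss = stats.norm.pdf(m, mu, sigma)

print("expected available capacity:", unit_mw * (y - mu), "MW")
for mi in range(4):
    print(f"m={mi}: binomial {p_binom[mi]:.4f}, normal approx. {p_gauss[mi]:.4f}")
```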
Objective Function
A load curtailment sharing philosophy is considered in this work. Unserved demand is shared across all the nodes of a power system. Using weight factors in the objective function, a priority order policy is employed for nodes [18]. The objective function is presented in a quadratic form: In addition to the objective function, the power system physical constraints are of paramount importance and will strongly affect resulting adequacy indices.
Constraints
The main aim of adequacy evaluation is to calculate the adequacy indices of a power system, while not paying special attention to voltage drops, congestion, and frequency problems of the grid [2]. The nodal power balance constraint can be written in the following form: In addition to (3), a system power balance equation is used: The power flow vector is calculated using a simplified form of the DC power flow equations: Power losses for each Monte Carlo simulation sample are calculated based on the assumption that all nodal voltages are equal to the nominal ones. Power flows through the transmission lines are limited by the maximum transmission line capacities: The generation and curtailed load in each node are restricted by supply and demand bid limits:
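To make the DC-power-flow-based corrective action concrete, the following sketch poses the curtailment minimization as a linear program on a made-up three-bus system. The paper itself uses a quadratic objective with nodal weight factors, so this is only an illustrative simplification, and all data and names in it are assumptions of ours.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative 3-bus data (not the paper's test system).
G_max = np.array([150.0, 0.0, 20.0])        # available generation per bus, MW
load  = np.array([0.0, 150.0, 60.0])        # required load per bus, MW
lines = [(0, 1, 10.0, 100.0),               # (from, to, susceptance, flow limit MW)
         (1, 2, 10.0, 100.0),
         (0, 2, 10.0, 100.0)]
n = len(G_max)

# Bus susceptance matrix B for the DC power flow  P = B @ theta
B = np.zeros((n, n))
for i, j, b, _ in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

# Variables x = [g (n), c (n), theta (n)]; objective: minimize total curtailment.
cost = np.concatenate([np.zeros(n), np.ones(n), np.zeros(n)])

# Nodal balance:  g_i + c_i - (B theta)_i = L_i
A_eq = np.hstack([np.eye(n), np.eye(n), -B])
b_eq = load.copy()
# Reference (slack) bus angle: theta_0 = 0
slack = np.zeros(3 * n); slack[2 * n] = 1.0
A_eq = np.vstack([A_eq, slack]); b_eq = np.append(b_eq, 0.0)

# Line flow limits: |b * (theta_i - theta_j)| <= F_max
A_ub, b_ub = [], []
for i, j, b, f_max in lines:
    row = np.zeros(3 * n)
    row[2 * n + i], row[2 * n + j] = b, -b
    A_ub += [row, -row]; b_ub += [f_max, f_max]

bounds = ([(0.0, gm) for gm in G_max] +        # 0 <= g <= available capacity
          [(0.0, ld) for ld in load] +         # 0 <= curtailment <= load
          [(None, None)] * n)                  # voltage angles are free

res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
curtail = res.x[n:2 * n]
print("total load curtailment (MW):", round(curtail.sum(), 2))
```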
A Framework of Adequacy Indices Evaluation
The framework includes two stages. In the first stage, the samples or states of the nodal available generations and required loads that lead to load curtailment events are extracted. These samples are used in the second stage to compute the adequacy indices. The adequacy indices adopted here are loss of load probability (LOLP) and expected power not supplied (EPNS) for system and nodes.
The load curtailment event is defined as the inability to satisfy the loads at all nodes without violating the system operating constraints, such as the limited capacity of the transmission lines. Thus, a load curtailment event may result from low available generation, high required load, the limited capacity of the transmission lines, or a combination of some (or all) of the above. To carry out the first stage's aim, the occurrence of load curtailment is verified at each sampled state of the uncertain inputs x_i = [G_1, . . . , G_n, L_1, . . . , L_n] (the nodal available generations and required loads), i = 1, . . . , N, where N is the number of samples and n is the number of nodes. Suppose the set F is the failure domain in the input parameter space, i.e., the set of states x_i ∈ F that lead to a load curtailment in the system. The failure domain F is expressed by a limit-state function S(x_i) as follows: The function S(x_i) defines the degree of deficiency of the system state x_i. It equals the amount of load curtailment when the state fails, i.e., when Σ_{d=1}^{n} L_curt,d(x_i) > 0. Otherwise, the system deficiency is defined as the sum of the differences between the nodal available generation capacity and the required load, as shown in (10). Since the target load curtailment event has a small probability in the original sample space, the limit-state function is used to define a sequence of simulations of more frequent events (intermediate failure events) in several probability spaces. This is illustrated in detail in the following subsections. In (10), the amount of load curtailment, or power not supplied (PNS), is computed through the DC-OPF algorithm. The OPF algorithm includes two actions, generation redispatch and load curtailment, which aim to minimize the system load curtailment and satisfy the security constraints of the transmission network, as illustrated in the previous section.
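Under one reading of this definition, the per-state limit-state evaluation can be wrapped as below. The sign convention (S < 0 marking a curtailment state, consistent with how S(x) < 0 is used in the results section) and the placeholder `naive_opf`, which ignores the network, are our assumptions rather than the paper's implementation.

```python
import numpy as np

def limit_state(gen_avail, load, solve_dc_opf):
    """Limit-state value S(x) for one sampled state x = (gen_avail, load).

    solve_dc_opf must return the vector of nodal load curtailments (MW) for the
    sampled state; it stands in for the DC-OPF described above.  Sign convention
    (our assumption): S < 0 marks a load curtailment (failed) state, S > 0 is
    the remaining generation margin of an adequate state.
    """
    curtail = np.asarray(solve_dc_opf(gen_avail, load), dtype=float)
    pns = curtail.sum()
    if pns > 0.0:
        return -pns
    return float(np.sum(gen_avail) - np.sum(load))

# Crude network-free placeholder for the OPF: curtail only the system-wide deficit.
def naive_opf(gen_avail, load):
    deficit = max(np.sum(load) - np.sum(gen_avail), 0.0)
    share = np.asarray(load) / np.sum(load)            # load-sharing philosophy
    return deficit * share

print(limit_state([900.0, 400.0], [700.0, 800.0], naive_opf))   # failed state
print(limit_state([900.0, 700.0], [700.0, 800.0], naive_opf))   # positive margin
```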
For estimating the target failure probability (the load curtailment probability), the 2n-variate normal distribution of the uncertain inputs is expressed by g(x; w), which is parameterized by the mean µ and covariance Σ, so that w = [µ; Σ]. To simplify the writing of the equations, the collection of samples {x_i, i = 1, . . . , N} is symbolized by x. Hence, the probability of failure can be computed by the following expression:

P_F = E_g[ L_F(x) ] = ∫ L_F(x) g(x; w) dx,

in which E_g indicates that the expectation operator is taken with respect to the density g and L_F(x) denotes the indicator function of the failure domain (equal to 1 if x ∈ F and 0 otherwise). The MCS estimator for P_F is

P̂_F = (1/N) Σ_{i=1}^{N} L_F(x_i), with x_i drawn from g(x; w).

Since the estimator is unbiased, its variance is

Var[P̂_F] = P_F (1 − P_F) / N.

The coefficient of variation CV is considered as the error measure for the P_F estimator. The squared CV is given as follows:

CV² = Var[P̂_F] / P_F² = (1 − P_F) / (N P_F).

Hence, the CV of MCS is approximately (N P_F)^(−1/2), so for small P_F the number of samples N needed to obtain an accurate estimate is large. To improve the efficiency of MCS, the proposed ECE-IS method has been developed as a variance reduction technique that reduces the variance of the MCS estimator. We first revise the traditional CE-IS and then develop the ECE-IS.
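A direct transcription of this crude estimator, with a toy limit-state function standing in for the OPF evaluation, might look as follows; it reproduces the (N·P_F)^(−1/2) behaviour of the coefficient of variation noted above.

```python
import numpy as np

rng = np.random.default_rng(1)

def limit_state(x):
    # Placeholder for the OPF-based S(x); here failure (S < 0) is simply the
    # event that the sum of a 2-D standard normal input drops below -5.
    return 5.0 + x.sum(axis=1)

N = 200_000
x = rng.standard_normal((N, 2))              # samples from the original density g(x; w)
fail = limit_state(x) < 0.0                  # indicator L_F(x_i)

p_hat = fail.mean()                          # MCS estimator of P_F
var_hat = p_hat * (1.0 - p_hat) / N          # variance of the estimator
cv = np.sqrt(var_hat) / p_hat                # coefficient of variation
print(f"P_F = {p_hat:.3e}, CV = {cv:.3f}")
```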
Implementation of CE-IS Method
IS introduces an alternate sampling density q(x), termed the ISD. A proper selection of q(x) is required to represent the failure domain of the inputs accurately. The probability of failure is computed with respect to q(x) and rewritten in the following manner:

P_F = E_q[ L_F(x) W(x) ],

where W(x) = g(x; w)/q(x) is the likelihood ratio, or importance weight function, expressed as the ratio of the original and proposal densities. The IS estimate of P_F is given by

P̂_F = (1/N) Σ_{i=1}^{N} L_F(x_i) W(x_i),

in which {x_i, i = 1, . . . , N} are identically distributed samples drawn from q(x). To obtain the optimal selection of the ISD, q*(x), the variance of the P_F estimator has to be minimized. The theoretically optimal ISD, leading to zero variance, is given by the following equation [5]:

q*(x) = L_F(x) g(x; w) / P_F.

Since the optimal ISD depends on the unknown quantities P_F and L_F(x), it cannot be computed directly. The CE sampling technique is therefore used to find a near-optimal ISD by fitting a parametric density model; it exploits the samples from intermediate sampling levels for fitting the selected parametric density. The parameter vector u is defined by minimizing the cross entropy, or KL divergence, between the unknown optimal ISD and the selected probability density q(x; u). In this work, the selected q(x; u) is a multivariate normal probability distribution with u = [µ_IS; Σ_IS]. The cross entropy between q*(x) and q(x; u) can be described as follows (see [17]):

D(q*, q(·; u)) = ∫ q*(x) ln q*(x) dx − ∫ q*(x) ln q(x; u) dx.

The optimization problem can then be expressed as the minimization of this divergence over u. Substituting the expression for q*(x) and dropping the terms that do not depend on u, the optimization problem becomes the maximization of E_g[ L_F(x) ln q(x; u) ] over u. The CE method solves this optimization problem iteratively by defining a series of intermediate densities q(x; u_k), k = 1, . . . , NT, that gradually approach the target density representing the failure region well, as shown in Figure 2. The intermediate failure domain F_k is defined using a threshold ζ_k as stated in (19), i.e., F_k = {x : S(x) ≤ ζ_k}. ζ_k is calculated as the θ-quantile of the limit-state function values S(x_i), sorted from smallest to largest, of the samples from the fitted parametric density obtained at the previous step, q(x; u_{k−1}). For each intermediate failure event k, the optimization problem is solved to obtain its optimal parameter u_k using the samples distributed according to q(x; u_{k−1}). The aim is to obtain the final parameter vector u_NT, which approximates the solution of (17).
Starting from the initial parameter vector u_0, each subsequent u_k is evaluated by solving the CE optimization problem written in (20), with the target optimal ISD set to q_k*(x), the optimal ISD of the k-th intermediate failure event. The initial vector u_0 is taken as the nominal parameter vector of the input variables, i.e., w.
in which the weights are given by the ratio q_k*(x_i)/q(x_i; u_{k−1}). The procedure is repeated until ζ_k becomes negative, i.e., until at least a fraction θ of the samples fall in the actual failure domain, where θ ∈ [0.01, 0.1] [17]. Then NT is set to the current event k, the optimal ISD is taken to be well approximated by the density q(x; u_{NT−1}), and the probability of failure is estimated as

P̂_F = (1/N) Σ_{i=1}^{N} L_F(x_i) g(x_i; w) / q(x_i; u_{NT−1}), with x_i drawn from q(x; u_{NT−1}).
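A compact sketch of this adaptive CE-IS loop for a multivariate normal parametric family is given below. The closed-form weighted-mean and weighted-covariance updates correspond to the analytical formulas available for the natural exponential family; the limit-state function is again a toy placeholder, and the stopping rule follows the threshold criterion described above.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, N, theta = 2, 2000, 0.1

def limit_state(x):
    # Placeholder for the OPF-based S(x); the failure domain is S(x) <= 0.
    return 5.0 + x.sum(axis=1)

def log_mvn(x, m, c):
    d = x - m
    _, logdet = np.linalg.slogdet(c)
    maha = np.einsum("ij,jk,ik->i", d, np.linalg.inv(c), d)
    return -0.5 * (maha + logdet + len(m) * np.log(2.0 * np.pi))

mu0, cov0 = np.zeros(dim), np.eye(dim)        # original density g(x; w) = N(0, I)
mu_k, cov_k = mu0.copy(), cov0.copy()         # intermediate density parameters u_k

for level in range(50):
    x = rng.multivariate_normal(mu_k, cov_k, size=N)
    s = limit_state(x)
    zeta = np.quantile(s, theta)              # intermediate threshold (theta-quantile)
    if zeta <= 0.0:                           # enough samples already fail: stop
        break
    ind = (s <= zeta).astype(float)           # indicator of the intermediate failure event
    w = ind * np.exp(log_mvn(x, mu0, cov0) - log_mvn(x, mu_k, cov_k))
    # Closed-form CE update for the multivariate normal family:
    mu_k = (w[:, None] * x).sum(axis=0) / w.sum()
    d = x - mu_k
    cov_k = (w[:, None, None] * np.einsum("ij,ik->ijk", d, d)).sum(axis=0) / w.sum()
    cov_k += 1e-6 * np.eye(dim)               # keep the covariance positive definite

# Final importance-sampling estimate with the last fitted density q(x; u_{NT-1}).
x = rng.multivariate_normal(mu_k, cov_k, size=N)
w = np.exp(log_mvn(x, mu0, cov0) - log_mvn(x, mu_k, cov_k))
p_f = np.mean((limit_state(x) <= 0.0) * w)
print(f"CE-IS estimate of P_F: {p_f:.3e}")
```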
Implementation of ECE-IS Method
In the proposed ECE-IS method, the indicator function of the intermediate failure events is replaced by a smooth approximation. The indicator failure function can be defined as follows:

L_{F_k}(x) ≈ Φ( −S(x) / δ_k ),

where δ_k is the control parameter of the function bandwidth and Φ is the standard normal CDF. When δ_k approaches zero, the smooth function approaches the target step indicator function. Hence, δ_0 > δ_1 > · · · > δ_NT > 0 defines a decreasing series of bandwidths, as shown in Figure 3.
Using the smooth function for L_{F_k}, the optimization problem (20) becomes the maximization over u of (1/N) Σ_{i=1}^{N} W_k(x_i; u_{k−1}, δ_k) ln q(x_i; u), with the smooth weights W_k(x; u_{k−1}, δ_k) = Φ(−S(x)/δ_k) g(x; w)/q(x; u_{k−1}). δ_k is determined such that the optimal ISD q_k*(x) is approximated well enough by samples drawn from q(x; u_{k−1}), i.e., such that the variance of the importance weights W_k(x; u_{k−1}, δ_k) is small. This is done by minimizing the difference between the CV of the weights W_k(x_i; u_{k−1}, δ_k), i = 1, . . . , N, and the specified CV_target at each intermediate event k, as written in (22).
As shown in Algorithm 1, starting with δ_0 = ∞ and u_0 as the nominal parameter vector, this procedure is reiterated and stopped when the CV of the weights calculated in (24), i.e., the weights of the present smooth approximation of the optimal ISD of the intermediate failure events with respect to the target optimal ISD, is lower than CV_target. Then NT is set to the current event k, and the optimal ISD is approximated well enough by the density q(x; u_{NT−1}). Utilizing the CV of the weights as the stopping criterion, rather than a criterion based on the intermediate parametric density q(x; u_{k−1}), improves the robustness of the method's convergence, as will be shown in the results section.
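These two modifications can be read as a drop-in replacement for the hard indicator used in the previous sketch: the smooth weight Φ(−S(x)/δ) lets every sample contribute, and δ is chosen so that the coefficient of variation of the resulting weights matches a prescribed target. The snippet below is an illustrative reading of that rule (function names, the toy data, and the particular CV target are our assumptions), not the authors' code.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def smooth_weights(s, lr, delta):
    """Smooth indicator Phi(-S/delta) times the likelihood ratio lr = g/q."""
    return norm.cdf(-s / delta) * lr

def cv_of_weights(w):
    return np.std(w) / np.mean(w)

def choose_bandwidth(s, lr, cv_target=1.5, delta_max=1e6):
    """Pick delta so that the CV of the smooth weights matches cv_target.

    Large delta -> nearly uniform weights (CV close to 0); small delta -> the
    hard indicator (largest CV).  If even the hard indicator stays below the
    target, smoothing is no longer needed and a tiny delta is returned.
    """
    f = lambda d: cv_of_weights(smooth_weights(s, lr, d)) - cv_target
    if f(1e-8) <= 0.0:
        return 1e-8
    return brentq(f, 1e-8, delta_max)

# Example at the first level, where the likelihood ratios are all equal to 1:
rng = np.random.default_rng(3)
s = 1.0 + rng.standard_normal((5000, 2)).sum(axis=1)   # toy limit-state values
delta = choose_bandwidth(s, lr=np.ones(len(s)))
w = smooth_weights(s, np.ones(len(s)), delta)
print(f"chosen delta = {delta:.3f}, CV of weights = {cv_of_weights(w):.2f}")
```

In the full algorithm, the same CV quantity, evaluated with respect to the target optimal ISD, also serves as the stopping criterion discussed above.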
The samples from the density q(x; u_{NT−1}) are used in the second stage, as shown in Algorithm 1, to compute the adequacy indices (LOLP, EPNS) for each node and for the system. The system LOLP and EPNS can be expressed as follows:

LOLP = (1/N) Σ_{i=1}^{N} L_F(x_i) W(x_i),
EPNS = (1/N) Σ_{i=1}^{N} PNS(x_i) W(x_i).

For the nodes, the LOLP and EPNS can be rewritten as follows:

LOLP_d = (1/N) Σ_{i=1}^{N} 1[L_curt,d(x_i) > 0] W(x_i),
EPNS_d = (1/N) Σ_{i=1}^{N} L_curt,d(x_i) W(x_i),

in which W(x_i) = g(x_i; w)/q(x_i; u_{NT−1}), PNS(x_i) = Σ_d L_curt,d(x_i) is the system power not supplied for state x_i, and d indexes the nodes.
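In code, the second stage then reduces to weighted averages over the samples drawn from the final density; the sketch below assumes the nodal curtailments have already been obtained from the DC-OPF evaluation of each sampled state (array and function names are illustrative).

```python
import numpy as np

def adequacy_indices(pns_per_node, weights):
    """Nodal and system LOLP / EPNS from importance-sampled states.

    pns_per_node : array (N, n_nodes) of load curtailments L_curt,d(x_i) in MW,
                   as returned by the DC-OPF evaluation of each sampled state
    weights      : array (N,) of importance weights g(x_i; w) / q(x_i; u_NT-1)
    """
    pns_per_node = np.asarray(pns_per_node, dtype=float)
    weights = np.asarray(weights, dtype=float)
    n = len(weights)

    system_pns = pns_per_node.sum(axis=1)
    lolp_sys = np.sum((system_pns > 0.0) * weights) / n
    epns_sys = np.sum(system_pns * weights) / n

    lolp_node = ((pns_per_node > 0.0) * weights[:, None]).sum(axis=0) / n
    epns_node = (pns_per_node * weights[:, None]).sum(axis=0) / n
    return lolp_sys, epns_sys, lolp_node, epns_node

# Tiny synthetic example: 4 sampled states, 2 nodes.
pns = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 0.0], [2.0, 3.0]])
w = np.array([0.9, 1.2, 1.0, 0.8])
print(adequacy_indices(pns, w))
```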
[From Algorithm 1: calculate the LOLP and EPNS indices and their coefficient of variation (convergence) for each node and for the system, as shown in Equations (25)-(28).]
Results
The proposed method is tested and evaluated for the five-bus test scheme presented in Figure 4. The data for the buses and transmission lines are presented in Table 1. In terms of the mean values of the generation and demand capacities, the test scheme includes two surplus buses, one balanced bus, and two deficient buses. The probabilistic nature of the generation and demand could potentially result in different combinations of operational states of the power system buses. All the buses are connected by transmission lines of the same transmission capacity but different resistance and reactance. All tests are carried out using MATLAB 2017 on an Intel Core i5 personal computer with 8 GB of memory. The standard MCS method is used as the benchmarking method. The maximum number of simulation samples for MCS is 5 × 10^4. A coefficient of variation (convergence) of 5% for both system adequacy indices (LOLP and EPNS) is used as the stopping criterion. The rare-event simulation techniques (CE-IS [33], SIS [35], SS [16], ECE-IS) use the following parameter values: θ = 0.1, CV_target = 0.05, maximum number of iterations = 50, and number of samples per iteration = 2000.
Table 1. Bus data (mean and standard deviation of available generation and load, MW) and transmission line data (terminal buses, resistance, reactance, and transfer capacity in MW) for the five-bus test scheme, reconstructed from the extracted values.
Bus  Gen. mean  Gen. std  Load mean  Load std      Line  R  X   Capacity
1    1700       170       600        60            1-4   5  50  500
2    50         5         50         5             2-3   1  10  500
3    1440       144       900        90            2-4   3  33  500
4    1200       120       1500       150           3-4   2  10  500
5    1500       150       2000       200           4-5   5  50  500

In Figure 5, the PDFs of the system limit-state function S(x) are shown for the different methods (CE-IS, SIS, SS, ECE-IS) and MCS. It illustrates that the proposed ECE-IS method has the largest probability of S(x) < 0, which is 80%, in comparison with 23%, 55%, 32%, and 20% for MCS, CE-IS, SIS, and SS, respectively. This means that the samples extracted by the ECE-IS method are more likely to cause load curtailment events; such events represent 80% of the total sampled system states. Once the samples from the optimal distributions of nodal generations and loads are obtained, the adequacy indices (EPNS, LOLP) can be evaluated. For test purposes, the EPNS and LOLP are calculated for all the buses. The histogram in Figure 6 shows the computation time and the EPNS values obtained by the different methods. The computation time includes the time spent in the first stage for extracting the rare load curtailment events.
The load curtailment sharing philosophy results in relatively small EPNS values for the potentially deficient buses 4 and 5. At the same time, the limited transmission line capacity does not allow unlimited power supply from the surplus buses, and so the greatest EPNS is at bus 5. From the simulation results, it is clear that all the discussed methods significantly reduce the computation burden, and the most computationally effective is the CE-IS method. It can also be seen that the method used for extracting load curtailment events can significantly affect the results. Figure 7 shows the LOLP values for all the buses of the test scheme. Theoretically, in the case of unlimited transmission line capacity, the objective function (1) must result in equal LOLP at all the buses. However, in terms of mean values, the power surplus of bus 1 is 10% higher than the transmission capacity of the connected lines. Possible power supply from bus 1 is therefore locked, and as a result, in some deficient test-scheme state samples the demand at the bus is not curtailed. It can be seen from the histogram that the LOLP of bus 1 is relatively small in comparison to the other bus values. Figures 8 and 9 show the errors of the methods in comparison to the standard MCS method. For almost all the buses, the EPNS values computed using the proposed method are accurate to within 8%. Even though the ECE-IS method is less computationally effective than the other methods, it provides significantly more accurate results. In comparison to the traditionally used MCS approach, the proposed method is nearly eleven times faster. The smallest LOLP errors are also generally obtained with the ECE-IS approach. Note that, in the case of LOLP, even small deviations introduce a large relative error, as can be seen.
Discussion
A trade-off between computational accuracy and computational efficiency leads to a wide range of approaches for power system adequacy evaluation. Therefore, the comparison among methods must include the number of samples, or the computation time, required to reach accurate adequacy indices with acceptable convergence. In terms of the number of samples, the system LOLP and EPNS indices and their convergence using the standard MCS method and the different rare-event simulation approaches are shown in Figures 10-13. For MCS, the number of samples needed to reach a convergence of 5% for both system LOLP and EPNS indices is 38,945. The CE-IS and SS methods have the fastest convergence rate, but they do not produce accurate LOLP values. The ECE-IS performs better, reaching an LOLP of 0.0099 with 4.7% convergence and an EPNS of 8.95 MW with 2.4% convergence. By comparison, the standard MCS achieves 0.0102 with 5% and 8.8 MW with 2.5% for LOLP and EPNS, respectively. Therefore, the ECE-IS achieves the same accuracy as the standard MCS method for both the nodal indices, as shown in Figures 8 and 9, and the system indices, as shown in Figures 10 and 12. In terms of computational efficiency, the ECE-IS needs seven iterations to extract the 2000 samples representing the load curtailment events, so the total number of samples is 14,000. Hence, the ECE-IS can achieve accurate results with a smaller number of samples and less computation time. As shown in Figure 6, an 11-times speedup is achieved.
To compare the robustness of the ECE-IS method with the other rare-event simulation techniques, the system LOLP, LOLP relative bias, and LOLP convergence values are listed in Table 2 for different numbers of samples per iteration. The per-sample analysis times differ among the methods; hence, the computation time is also included in Table 2. Taking the LOLP value (0.0102) obtained by the standard MCS method as the reference, the relative bias of the LOLP values is computed as |LOLP − LOLP_MCS| / LOLP_MCS × 100%. From the results in Table 2, convergence is accelerated for all methods by increasing the number of samples. However, there is a significant accuracy loss with a small number of samples for the CE-IS and SS methods, while SIS achieves an acceptable LOLP bias at 1000 samples but poor convergence. The ECE-IS, in contrast, remains effective for a small number of samples: it achieves a small LOLP bias (17%) with 21% convergence at 250 samples per iteration. In order to verify the computational accuracy and efficiency of the proposed method as the dimension of the power network and the number of random variables grow, the adequacy index results are also presented for the IEEE-RTS 79 system. The system includes 24 buses and 32 generators divided among 14 generating stations, totaling 3405 MW of installed capacity. The annual system peak load is 2850 MW. More information on the IEEE-RTS 79 can be found in [40].
The mean and standard deviation values for the loads were taken as the peak value and ±10% of the peak value, respectively. A coefficient of variation (convergence) of 5% for both system adequacy indices (LOLP and EPNS) is used as the stopping criterion. For the ECE-IS and MCS methods, Table 3 shows the results for the system LOLP and EPNS indices, the number of samples, and the computation time. The computation time includes the time spent in the first stage for extracting the rare load curtailment events. The results of the ECE-IS method are 0.00121 and 0.16 MW for LOLP and EPNS, respectively. In comparison, the standard MCS achieves 0.00119 and 0.154 MW for LOLP and EPNS, respectively. Therefore, the ECE-IS achieves the same accuracy as the standard MCS method for both system indices. Moreover, the ECE-IS method is considerably more efficient than the standard MCS method: it needs only a small fraction (23%) of the samples required by MCS. However, in contrast to MCS, the ECE-IS method, like other rare-event simulation techniques, comes with its own set of tuning parameters, namely the target coefficient of variation of the importance weight function and the number of samples per iteration. Proper tuning of these parameters affects the efficiency of the technique.
Conclusions
In this paper, a framework for adequacy index evaluation in composite generation and transmission power systems has been proposed. The main purpose of the framework is to obtain accurate adequacy indices while enhancing the computational efficiency of the standard MCS method. This is achieved by integrating rare-event simulation methods in the framework's first stage to identify the approximately optimal distortion of the nodal generation and load distributions, making rare load curtailment events more likely to be drawn. As a result of this integration, a new approach named ECE-IS is proposed that is more efficient and robust in extracting the optimal distortion than other methods such as the traditional CE-IS, SIS, and SS. The reported results of the framework's second stage show that the proposed method accurately evaluates the adequacy indices (LOLP and EPNS) and further improves their convergence in comparison with the other methods. Moreover, a large speed-up in computation time with respect to the standard MCS method was demonstrated.
The implementation of the proposed method in adequacy evaluation could allow the use of more detailed power system models, which would more accurately reflect real power system operation. The impact of different network topologies (i.e., transmission network contingencies), the non-linearity of the power flow equations, and the chronological characteristics of power systems, such as time- and spatially-correlated load models and the time dependency of renewable energy resources, could be included in the power system model. The method can also be incorporated into many problems in both the operation and planning phases, such as the evaluation of spinning reserve margins and the integration of renewable energy resources. Using adequacy index assessments and knowing the critical nodes during system disturbances, planners can better manage the penetration of renewable sources, ensuring sustainable and reliable operation at both the system and nodal levels. In the context of power system operation, operators schedule the generating units and allocate sufficient generation reserves to keep the adequacy index (the probability of load curtailment) below the maximum allowed value. These issues will be addressed in future works. | 9,559.2 | 2020-11-13T00:00:00.000 | [
"Engineering"
] |
A pendulum of induction between the epiblast and extra-embryonic endoderm supports post-implantation progression
ABSTRACT Embryogenesis is supported by dynamic loops of cellular interactions. Here, we create a partial mouse embryo model to elucidate the principles of epiblast (Epi) and extra-embryonic endoderm (XEn) co-development. We trigger naive mouse embryonic stem cells to form a blastocyst-stage niche of Epi-like cells and XEn-like cells (3D, hydrogel free and serum free). Once established, these two lineages autonomously progress in minimal medium to form an inner pro-amniotic-like cavity surrounded by polarized Epi-like cells covered with visceral endoderm (VE)-like cells. The progression occurs through reciprocal inductions by which the Epi supports the primitive endoderm (PrE) to produce a basal lamina that subsequently regulates Epi polarization and/or cavitation, which, in turn, channels the transcriptomic progression to VE. This VE then contributes to Epi bifurcation into anterior- and posterior-like states. Similarly, boosting the formation of PrE-like cells within blastoids supports developmental progression. We argue that self-organization can arise from lineage bifurcation followed by a pendulum of induction that propagates over time.
I have now received all the referees' reports on the above manuscript, and have reached a decision. The referees' comments are appended below, or you can access them online: please go to BenchPress and click on the 'Manuscripts with Decisions' queue in the Author Area.
As you will see, the referees express considerable interest in your work, but have some significant criticisms and recommend a substantial revision of your manuscript before we can consider publication. If you are able to revise the manuscript along the lines suggested, which may involve further experiments, I will be happy to receive a revised version of the manuscript. Your revised paper will be re-reviewed by one or more of the original referees, and acceptance of your manuscript will depend on your addressing satisfactorily the reviewers' major concerns. Please also note that Development will normally permit only one round of major revision.
We are aware that you may currently be unable to access the lab to undertake experimental revisions. If it would be helpful, we encourage you to contact us to discuss your revision in greater detail. Please send us a point-by-point response indicating where you are able to address concerns raised (either experimentally or by changes to the text) and where you will not be able to do so within the normal timeframe of a revision. We will then provide further guidance. Please also note that we are happy to extend revision timeframes as necessary.
Please attend to all of the reviewers' comments and ensure that you clearly highlight all changes made in the revised manuscript. Please avoid using 'Tracked changes' in Word files as these are lost in PDF conversion. I should be grateful if you would also provide a point-by-point response detailing how you have dealt with the points raised by the reviewers in the 'Response to Reviewers' box. If you do not agree with any of their criticisms or suggestions please explain clearly why this is so.
Advance summary and potential significance to field
The authors have previously shown that mixing embryonic stem cells (ESCs) and trophoblast stem cells (TSCs) can result in blastocyst-like structures ("blastoids") that also give rise to some primitive endoderm (PrE) cells. However, these structures previously showed little expansion of the PrE. Here the authors perform a screen to identify culture conditions to enhance the fraction of PrE-like cells in vitro, which may then support more efficient and further development of blastoid structures. They identify a PrE inductive medium and show that this supports further development of the blastoid structures to a post-implantation-like morphology. This is an interesting system, which is well suited to such experiments, permitting the investigation of a large number of conditions. The authors study the effect of a variety of signalling molecules and inhibitors at different concentrations on PDGFRa expression. While this is a useful study, the authors describe the system as a model of the early mouse embryo but do not attempt to address or discuss the relevance of their findings in the context of the embryo. Thus, it is currently unclear how accurately this system and their findings recapitulate developmental events. I believe that the authors could strengthen this with some additional data and further discussion.
ISSUES TO ADDRESS:
- There is no characterization of what happens in these structures over time and therefore it is unclear how this relates to specification of Epi and PrE in the in vivo blastocyst i.e. do PrE-like cells initially arise in aggregates in a salt and pepper manner before sorting to the outer edge of the aggregates? Are epiblast and PrE markers initially coexpressed in these conditions as in the embryo? Do PrE cells recapitulate the molecular expression pattern in the embryo?
- The authors identify a cocktail of signaling factors that appear to promote a PrE-like identity in vitro. However, they do not determine whether these signals affect PrE specification in vivo. Currently, FGF is the signaling factor most clearly associated with Epi/PrE specification in the preimplantation embryo. Addition of FGF to pre-implantation embryo culture promotes PrE specification and vice versa when FGF signaling is inhibited. The authors should attempt to validate the importance of these signals on PrE specification in the embryo by performing similar experiments, culturing early embryos in their PrE culture medium and its individual components.
- Here the authors show the importance of RA signalling in PrE-like induction in their system. However, it has been suggested that the onset of RA signalling in the embryo is during gastrulation (PMID: 22318625). Is there any evidence of earlier signaling or expression of appropriate components of this pathway at earlier times (e.g. from published single cell sequencing data of early embryos?).
- Similarly, the data shows that Wnt promotes PDGFRa expression and, surprisingly, has a stronger effect than FGF. However, Wnt signaling is not required for pre-implantation development (PMID:21554866). As Wnt induces mesoderm, could it be that in Fig. 1E, the effect of Wnt on PDGFRa expression is related to mesoderm differentiation?
- The authors conclude: "Our data support the idea that these two cell types are autonomously supporting the transition into the post-implantation stage of the Epi." At the moment this statement seems to rely on morphology alone. This should be corroborated with some lineage markers e.g. downregulation of naïve pluripotency markers (e.g. Klf4) and upregulation of formative/primed markers (e.g. Oct6).
- P7: "We concluded that the Fgf and Wnt pathways regulate both the specification and the expansion of Pdgfrα + cells." How are the effects on expansion vs. specification of these molecules in the screen distinguished? The size of EBs treated with FGF or CHIR is not altered, which might suggest that there is not an effect on expansion? If claiming effects on expansion, the authors should look directly at proliferation under these treatment conditions e.g. by quantification of proliferation markers such as phospho histone H3.
-PDGFRa is not a PrE-specific marker but is also expressed within mesoderm cells. While the authors characterize the PDGFRa+ cells in the identified PrE-induction medium later in the manuscript, without further markers (such as Sox17 for endoderm and Brachyury for mesoderm) in the earlier treatments (Fig. 1E-J), it is not clear whether the observed effects are on PrE or mesoderm differentiation. This caveat should be acknowledged or further investigated. -While there are clear benefits to the imaging analysis performed in the paper (to study the effect of these treatments on a per-aggregate basis), some of the measurements are difficult to interpret. While 'yield' and 'number of PDGFRa clones' together do go some way toward understanding the effect on PDGFRa expression, without any information about the size of the clones the meaning of the data is still somewhat unclear. Therefore, it would be valuable for the authors to confirm some of the key results by bulk flow cytometry, which will show exact cell numbers and percentages for each population.
MINOR QUERIES: -Authors should be sure to add n values representing the number of independent experiments in the figure legends for each panel - it is currently missing in some places. - The authors use 0.05 mM β-mercaptoethanol for their culture medium, which seems to be half the typical dose for ESC cultures. Is there a reason for this and, if so, it should be stated in the methods. -Correct/clarify the meaning of P4: "enhanced the permittivity for PrE formation". - Fig. 1D -The graph refers to the number of GFP+ clones while the legend refers to the number of GFP+ cells. This should be changed to the correct one.
-P9: "Epi-specific Fgf4 and PrE-specific Fgfr2 were mutually expressed" -should this be mutually exclusively expressed? If so, perhaps could plot these gene levels from the same cells against one another to show this more clearly. Figure 5A ) but enhanced the potential of the PrE-like cells to expand (96 h, Gata6+, 53% vs. 10%, Figure 5A ), an effect that depended on the initial number of PrE cells present in blastoids ( Figure 5B )." -Not clear from this analysis that survival depends on number of cells, although it may be correlated. This should be reworded. Also, the authors should plot the number of cells at the beginning vs. at the end of the experiment, rather than number of cells at the beginning vs. yes/no. This would give much more information on expansion/survival that currently cannot be gleaned. - Fig. 5C: not clear what the alternative phenotypes are that don't fall into this category -2D structures? 3D structures with only 1 cell type? Should include this in the graph or legend. - Fig 5D: the authors should supply some higher resolution images/sections of these aggregates and show separate channels for IF as it is currently difficult to discern the structure from the data provided i.e. is there another cell layer inside the Oct4+ layer in lower left panel? F-ACTIN in upper right panel hinders clearly understanding the structure of this aggregate (hence also showing separate channels would help here).
Reviewer 2
Advance summary and potential significance to field A new medium to encourage primitive endoderm from small EBs of mESCs.
Comments for the author
This is a reasonable contribution to a rapidly moving area, and should be published with substantially more data analysis to compare with prior work, and ideally a few more experiments. The authors develop a medium to favor the development of primitive endoderm (PrE) cells in small embryoid bodies made from naive mESCs. They show that these structures place the PrE on the outside while the epiblast inside cavitates. Fig. 2 proposes markers to define the PrE population and suggests further polarization into VE-like and parietal endoderm-like cells. There is much more recent data that the authors do not cite which is important to situate their cells relative to the blastocyst and other claimants to the PrE state. It is also rather uncertain to what temporal stage of development their cells correspond. Specifically… 1. There is scRNA-seq data from the Hadjantonakis lab http://endoderm-explorer-app.useast-1.elasticbeanstalk.com/ that should be compared with. They have many more cells than the earlier paper from Niakan's lab in 2015. Since the authors are using Seurat to analyse their scRNA-seq data they can easily integrate other such data sets and see how the combined data sets cluster as in Fig. 2C-E. This would be much more convincing than picking certain genes. 2. There is a 2017 NCB paper from the Brickman lab that purports to derive PrE cells from naive mESCs, with a different recipe (+/- insulin) and not in EBs. How do the authors' cells compare? Of course it would be very interesting if signals between the future epiblast and PrE were essential to the conversion. Is there any evidence? 3. There are two recent papers, from the Zernicka-Goetz and Belmonte labs in Dev Cell and Cell respectively, that generate blastocysts from extended pluripotency mouse ESCs (the former added additional TSCs and the Belmonte group did not), which are not mentioned (Generation of Blastocyst-like Structures from Mouse Embryonic and Adult Cell Cultures; Self-Organization of Mouse Stem Cells into an Extended Potential Blastoid). Although these papers 'scoop' this submission, I definitely feel multiple approaches to the same nominal endpoint are useful to have in the literature. Since none of these structures develop further when implanted, more needs to be learned, so parallel work should be encouraged. That said, prior work cannot be ignored, and these papers too should be integrated with the gene clustering analysis. 4. A recent study (that needs to be cited) shows that the transition from ICM to separate epiblast and PrE compartments takes place via cells that are double positive for Gata6 and Nanog (at a point when the blastocyst has a defined number of cells). Is this observed in the authors' system? In Fig. 4 there is data on Gata6 and Nanog separately but not the overlap, as best I can tell, and only end-point data and not as a function of time. Such data would give us both a time point and confidence that the path to the Epi and PrE state resembles that in vivo. The concern with Fig. 4 is that their new protocol gives more PrE than doing nothing, a very low bar to pass since they selected factors with this in mind. Comparison with a real blastocyst is more interesting. 5. Alternatively one could use drugs or morphogens that have an effect on the PrE-Epi transition in the blastocyst and see if they have similar effects here. One such test was with Lif, to show it impedes or blocks polarization and cavitation of the epiblast as in vivo. Another more relevant test would be additional FGF4 or FGF/ERK inhibition.
Does that govern the PrE/EPI ratio? The result is interesting in either case. If there is no effect, then the structures are plausibly post-E4.5, when the corresponding cells in the blastocyst are committed to their fates. This is not a complicated experiment, with some embryos on the side as controls. I think it should be done, given the large literature on this pathway in vivo. 6. The other interesting problem for which the authors could extract data is growth coordination between the epiblast and PrE. They do not have the optimal markers for this, but could they plot the areal density of the PDGFRa+ cells as a function of cyst radius for different times and all radii?
There are multiple minor problems in the writing: 1. The title or abstract must mention mouse somewhere. One can guess, but mammalian does not default to mouse. 2. In various places the authors list papers that show some effect of Lif, TGFb (both branches), Wnt, RA etc. on the PrE/Epi ratio, to motivate their choice of screening factors. More helpful would be one extended discussion with more detail. For instance, which factors have a phenotype in the embryo following mutation? Which have an effect when applied to a blastocyst in vitro? And what is the effect, assayed how? 3. … In summary, this paper should definitely appear, since no one has accomplished the ultimate goal of a live birth from a synthetic blastocyst (as has been done with synthetic oocytes in 2016), thus multiple protocols should be in the literature.
Advance summary and potential significance to field
In this manuscript Vrij et al. explore the potential of aggregates of mouse ES cells to generate primitive endoderm (PrEnd). The differentiation conditions they focus on, which they term PrE induction conditions, allow the formation of embryoid bodies (EBs) containing cells of both epiblast (Epi) and PrE lineages and are then applied to the blastoid system that has been developed in the Rivron lab. The early version of the system has been shown not to be effective in the production of PrEnd.
These results seem promising, as some of their conditions produce both epiblast and PrE lineage cells, which also show a spatial organization similar to that in E4.5 mouse blastocysts and can initiate lumen (pro-amniotic cavity-like structure) formation later. The cocktail that they find to induce PrEnd and, in particular, the potential involvement of GPCRs is novel and potentially useful.
Comments for the author
This study is a contribution to the emerging field of synthetic embryology but should be considered a technical study rather than a research manuscript. Before publication it would be helpful if the authors could consider the following points: 1.
The authors have mostly used percentages to show the efficiency of the system. Since presenting data in this manner can be very misleading, the authors should also provide the exact numbers (like the number of wells with aggregates, number of blastoids formed, number of blastoids with both PrE and Epi lineage cells, how many replicates for each experiment, etc.) for all experiments (especially for the experiments involving blastoids). For the same reason, it is hard to understand how many times the authors had to repeat the experiment to get to the numbers, for example the ones they have shown in Figure 4B. 2. Figure 1E-1J; the authors have used agonists and inhibitors of different signalling pathways to test their effect on the emergence of PrE. However, in some instances agonists and inhibitors of a given signalling pathway do not necessarily show the opposite effect on PrE development. Moreover, despite activation or inhibition of some signalling pathways there appears to be an improvement of PrE induction. In the absence of a statistical analysis it is not clear whether these changes are significant. 3.
The authors have clearly shown that the PrE-like cells in their system represent the post-implantation extraembryonic endoderm (XEn). However, the state of the epiblast-like cells appears to be related to the pre-implantation epiblast (equivalent to E4.5 embryos). This creates a developmental gap between the two cell types within a single aggregate, in contrast with the situation in vivo, where the differentiation of PrE into visceral/parietal endoderm and rosette formation in the epiblast take place simultaneously between E4.5 and E5.5, and rosette formation is influenced by the PrE. Therefore, one wonders how this developmental gap between the two cell types affects rosette development and blastoid formation in their system. 4.
From the start, the authors argue in support of PrE-like cell generation in a chemically defined serum-free condition. Yet, they use serum in their medium for blastoid generation. Therefore, it is not easy to understand the point of using a serum-free condition during PrE induction when they ultimately end up using serum. Did the authors try to generate blastoids in the serum-free condition as well? Also, considering that PrE induction efficiency in serum (>90% of EBs expressed PrE markers just with retinoic acid and LIF) was better than in N2B27 medium, why did they not perform all experiments in serum-based medium rather than switching between the two media conditions? 5. Figure 5: Given such a low number of 3D structures forming, of which only 3 have a lumen during in vitro blastoid culture, the data are not strong enough to claim that the PrE-induced blastoids support the morphological features of post-implantation embryos, including pro-amniotic cavity-like structure development, during in vitro culture. 6.
Initial cell number is quite crucial in these kinds of experiments. The authors seeded 7 ESCs for PrE/Epi embryoid bodies and blastoid generation and 14 ESCs for XEn/Epi rosette generation. Was there any particular reason behind using different numbers of cells between these experiments? Their efficiency during XEn/Epi rosette EB development was 94%. However, only 31% of blastoids showed a 3D structure, meaning an even lower efficiency for the ones with rosette-like structures during in vitro culture. Could this lower rate of rosette formation in blastoid in vitro culture be attributed to the lower number of cells initially seeded, which could potentially impact the mechanical forces and ultimately affect the development of structures such as the lumen, rosettes, etc.? 7.
Based on point 5 of the comments, I suggest changing the title of the manuscript, as "expansion" is an overstatement of the modest results that were obtained. 8.
Naturally, the most crucial experiment that is missing is a test of whether their PrEnd-enhanced blastoids implant (something missing from the early version). This experiment is not needed if the manuscript is to be published as a technical advance/report, but it would be good for the authors to comment on it.
First revision
Author response to reviewers' comments Reviewer 1 Advance Summary and Potential Significance to Field: This is an interesting study by the group of Nicolas Rivron, that contributes to the emerging field of synthetic embryogenesis. The manuscript presents a novel methodology that will be of interest for the field. Moreover, this methodology has the potential to improve existing blastoid models. I support the publication of this manuscript in Development.
We thank the Reviewers for addressing all these points and we believe this has led to a considerably improved manuscript.
Reviewer 1 Comments for the Author:
The following points need to be clarified: 1. Image analysis: the way this is presented throughout the manuscript is misleading. The graphs specify "count of GFP+ cells per EB", but the methods indicate that only a single Z plane within the EB was analysed. Therefore, the graphs show "count of GFP+ cells on a selected Z plane". This needs to be specified both in the graph and the figure legend. Panel 1A should also specify that even though the culture is in 3D the high-content analysis is in 2D.
We have added information in the figure legend that a 2D mid-focal plane is used for imaging and that a proxy for EB size is measured via the 2D projection area. We also changed this in Figure 1A.
2. Figure 1E-J: the conclusions of the authors do not match the data presented. They state that Nodal, Activin-A, Bmp4 or Tgf-b1 treatments reduced PE specification. However, none of these treatments had a significant effect. SB43 increased the number of clusters per EB but not the PrE yield, while A83, which also inhibits TGFbRI, had no effect. Altogether, their data seems to indicate that the TGFb pathway has no role in the formation of Pdgfra+ cells although the authors conclude otherwise.
The image data in this screen have been rigorously analyzed and the results that are depicted in this Figure adequately reflect the microscopy observations. Since the readout of this screen, as rightly addressed by the Reviewer previously, depended on the activation of a Pdgfra reporter, we cannot make any conclusions from this initial screen regarding PrE specification beyond the observation of PDGFRa regulation. Moreover, the ANOVA test was run in an extremely strict fashion by including the three variables (PrE yield, clusters per EB and projection area) altogether for the compensation of variance between treatments. We did this because the three variables are not fully independent (e.g., a larger projection area may correlate with a higher number of Pdgfra+ cells by default). However, when a statistically significant result is obtained, such as for the number of Pdgfra+ clusters per EB in the Sb43 condition, it can be interpreted as important, especially since every data point here is already an average of ~400 EBs (one well with EBs).
However, we agree that the claim "Activin-A, Bmp4 or Tgf-b1 treatments reduced PE specification" is too strong and may need to be rephrased. We adjusted the claim to "Activin-A and Tgf-β1 elicited a decline, albeit non-statistically significant, of either the yield or the number of Pdgfrα+ cells." 3. EPI-like identity of the structures: figure 2 focuses on the characterisation of the PE-like compartment in the structures, but the EPI-like compartment is not described. What is the pluripotent state of these cells? The data seems to indicate they have a mixed identity as they coexpress Esrrb (naïve marker) and Otx2 (post-implantation factor). This is particularly relevant as the authors refer to naïve pluripotency exit throughout the manuscript, but it is not clear at present whether the process of naïve pluripotency exit happens prior to PE specification or after.
Indeed, the first scRNAseq data corresponds to structures in microwells 96 hours after seeding the cells and PrE/Epi induction. At this time point the Epi seems to have a mixed identity of naive and post-implantation epiblast. We have added GSEA compared to E5.5 mouse embryo epiblast which gives a similar negative enrichment score (-0.55) as when compared to E4.5 Epiblast (see below and Figure 2E). Thus the developmental window likely corresponds to the E4.5-E5.5 in natural embryos. This is also in line with the findings of an early mixed PE and VE identity for the PrE. In the second scRNAseq dataset we included 2i/Lif cells that show an initial naive pluripotent identity of the cells we use, thus the naive pluripotency exit does happen but it remains unclear exactly how this relates to timing in PrE specification. Likely, PrE specification occurs before the naive pluripotency exit, which is supported by findings shown in Figure 2A. Here, it can be seen that Gata6+ cells (PrE) can be found already 24 hours after induction.
We have included this data in Figure 2 of the manuscript and now mention that the Epi and PrE reflect the peri-implantation blastocyst-stage Epi and PrE.
5. Figure 2F: adding the comparison to PE would be informative. At the moment only the comparison to VE is shown.
This would indeed be very informative. However, we are unaware of any gene list for E5.5 PE, likely because PE tissue is very difficult to isolate from embryos within the first 48 hours after implantation. However, for comparative purposes we did compare our PrE and its PE-like and VE-like subpopulation data to E6.5 embryo data (Pijuan-Sala et al., 2019) using GSEA. Since we could insert only the log2FC values and the ranked gene set (thus no p-values), these results cannot be directly compared to the more reliable GSEA results in Figure 2F. Overall, the PrE cluster (Pdgfrɑ+ cells) and the VE-like and PE-like subclusters appear enriched in genes of the E6.5 PE, of which the PE subcluster contributes the most. These results support the findings that the Pdgfrɑ+ cells have an extraembryonic endoderm identity primed for both PE and VE.
6. The statement "within 24 hours after induction, double-positive cells and a few double negative cells emerged" is misleading. Double positive cells are as few as double negative cells (approximately 10% in both cases).
We have adjusted the sentence to "within 24 hours after induction, double-positive cells and double negative cells emerged".
7. The authors conclude that the Tgfb pathway is required for the initiation of VE specification, but the data presented to support this claim is really minimal (only expression of Dab2 and Runx1). The authors should either tone down their claims, or perform additional experiments.
We have toned down the concluding statement to "a possible implication of the Tgf-β pathways in the initiation of VE."
The presence of VE-primed and PE-primed cells within the structures is very interesting.
Further information would be gained by performing immunofluorescence staining for specific markers. Do PE-primed cells lose contact with the epiblast?
We agree with the Reviewer that this is very interesting. We believe that the use of XEn/Epi EpiC in studying PE/VE specification dynamics would require much more analysis and we would prefer to do this carefully in a follow-up study. Importantly, we are already more than 2000 words over the maximum word count in our current manuscript. We agree with the Reviewer that the majority of structures depicted in the manuscript correspond to the Extraembryonic Endoderm/Epiblast epithelialized pro-amniotic-like cavity (XEn/Epi EpiC) stage. We have adjusted the naming accordingly throughout the manuscript to XEn/Epi EpiC, while only using the term Rosette to describe the Epi with a polarized conformation but before proamniotic cavity formation.
10. Figure 3F: this shows that in some structures there are multiple lumens. This is an interesting finding that is not commented on in the results section. Is there a correlation between size and the multi-lumen phenotype? This would be in line with the findings of Orietti et al., Stem Cell Reports, 2021. This is an interesting comment. We checked our data again, but since multi-lumen structures occur at very low frequency we cannot reliably correlate this with the total size of structures using our current data sets.
11. Figure 3H: the authors mention that under LIF culture Podxl did not localise to the apical membrane in the VE-like compartment. This is not the case. In the image shown in Figure 3H Podxl clearly localises apically in VE-like cells.
We agree with the Reviewer. In the "No Lif" condition Podxl is found apically but also bilaterally on the majority of cells. Thus, we rephrased the sentence to "the absence of Podxl within the Epi-like cells and the arched bilateral/apical location of Podxl in the XEn-like cells." 12. Figure 3I: as a control, the authors should show the Nodal KO structures before the switch to N2B27. Moreover, in the images shown it seems that the VE-like layer is also multi-layered in the control. Is this the case? It is difficult to conclude without a DAPI staining. Could the authors quantify the incidence of a multi-layered VE in the different experimental conditions?
We have performed antibody staining for PrE (Pdgfra) and Epi (Nanog) markers on PrE-induced structures formed with both the Nodal KO (-/-) line and its corresponding WT control. At 72 hours after induction we observed no apparent difference in the morphology of the PrE or the Epi. This data is included in the supplementary information (Fig. S15).
We also counted the number of single and multi-layered XEn layers and whether the XEn layer was (partly) delaminated from the Epi (total of 32 structures each) in the V6.5 ESC line with double KO for Nodal (-/-) and its corresponding WT control (+/+). This data is included in Figure 3 (3L).
13. The appearance of mesoderm-like cells upon culture in N2B27 is very interesting, but could be characterised with a bit more detail. At the moment the analysis presented is solely based on sequencing data. Do anterior-like and posterior-like fates emerge within the same structure? Are there cases in which the VE-like cells do not surround the entire EPI-like compartment, and could this maybe explain the emergence of mesoderm-like cells?
We performed antibody staining for Brachyury and observed that 14% of XEn/Epi EpiCs (+64 hours) contained Brachyury-positive inner cells originating from an epithelium-like tissue. Additionally, we performed antibody staining on the minority (20%) of PrE/Epi-induced structures that did not form an epithelialized Epi with a pro-amniotic-like cavity (EpiC) but instead remained as an amorphous cell clump engulfed by a XEn-like cell layer. Of note, these structures were not included in the single-cell RNA sequencing analysis. We observed that these amorphous cell clumps were fully Brachyury-positive. Notably, a XEn layer was present in all these structures and did not appear to be delaminated from the inner cells, which indicates that there is no correlation between an absent or delaminated XEn layer and posterior epiblast fate. This suggests that Epi-VE interactions are sufficient to initiate part of the gastrulation program. However, we cannot rule out that in a subset of structures (that do not cavitate) Chir may have primed cells towards a mesoderm fate during the initial induction, which as a result may prevent pro-amniotic cavity formation.
We have included the above mentioned data in Figure 4 and Figure S18.
14. Figure 6D: this structure is completely disorganised. Oct4-positive cells are shown in what would be the ExE-like compartment (based on the brightfield image) and only in a subset of cells within the epiblast-like compartment. What criteria have the authors used to classify that structure as organised? Could they provide more examples and stain for the TE compartment? Based on the images they present, it is not clear whether an ExE-like region exists.
We did not identify the formation of ExE-like regions, thus with "organized" we mean a central lumen surrounded by a (pseudostratified) Epi-like cell layer and XEn, thus pro-amniotic cavity-like but without the adjacent ExE. The majority of blastoids when grown out in IVC medium do not form such pro-amniotic cavity-like structures and either formed a 2D cell layer (no embryonic structure) or a 3D structure (3D non-organized) that did not progress to structures with a proamniotic-like cavity. Somehow the TE does not progress, which can either be related to suboptimal post-implantation outgrowth culture conditions (e.g., handling, fluorescence microscopy to pick the blastoids with Gata6+/Pdgfra+ cells, IVC medium) or sub-potent TE-like tissue in our blastoids.
To emphasize the absence of the ExE we added the following phrase in the manuscript: "..however, ExE-like tissue formation appeared absent." 15. In the discussion the authors state "one attractive possibility raised by this study is that the Epi proliferation and morphogenesis to form the amniotic cavity cannot occur unless the PrE deposits the required basal lamina". This is indeed the case, as shown in laminin KO embryos that do not develop post-implantation and present an aberrant peri-implantation morphogenesis (Miner et al., Development, 2004 and Smyth et al., J Cell Biol, 1999).
We have added this information including the references in the discussion.
Minor comments: 1. Conclusion of figure 1: inhibition of GSK3b/bcatenin facilitates the generation of Pdgfra+ cells. This statement is confusing, as GSK3 inhibition leads to bcatenin activation.
We have adjusted this to .."inhibition of GSK3β and the Tgfβ pathway facilitate.." 2. What is the rationale to study GPCR ligands? Could they have a functional role in the embryo? The section on GPCR ligand screening comes out of the blue.
The rationale was instigated by a previous study that found cAMP modulates Pdgfrα expression in EBs. We have this information in the manuscript.
3. Figure 2F: there is a typo. The second plot should say "comparison to Embryo E5.5 VE".
We have changed this accordingly 4. Why do the authors isolate Pdgfra+ cells by FACS using an antibody when they have the reporter line? Why sometimes do they use a Gata6 reporter instead of the Pdgfra reporter?
We have seen using immunofluorescence that the Pdgfra reporter is not always switched on in cells that have Pdgfra present in their membrane. We found later that the Gata6 reporter line is more reliable. 5. Last section results: when referring to Oct6 expression, the figure is 6F, not 6E.
We have changed this accordingly.
Reviewer 2 Advance Summary and Potential Significance to Field: The authors make use of their previously characterised model system, blastoids, in which blastocyst-like structures can be generated from trophoblast stem cell and embryonic stem cell lineages. Building on this model, they probe the mechanism by which the epiblast and extraembryonic endoderm develop. The authors are able to uncover how reciprocal interactions between the epiblast and primitive endoderm are important to permit the development and morphogenesis of the early post-implantation embryo. This paper is an exceptionally useful addition to the field, and highly significant. In one facet, it shows how in vitro approaches (which are complementary to in vivo studies) can dissect the mechanisms of cell lineage allocation in a highly controlled and scaleable system. The authors' use of microwells shows how reproducibility between blastoids was foremost on their minds, and does well to address the concerns some people in the field have regarding reproducibility in the 'organoid' field.
Reviewer 2 Comments for the Author: I believe the authors have done a very good job of addressing the previous reviewers' comments (going by their rebuttal). I am very happy with the sections on data reproducibility. The majority of the figures are clear, and the conclusions in my mind are backed up well by the highly quantitative data. Regarding Fig. 6, the top portion feels slightly chaotic; I wonder if the table could be placed elsewhere, or formatted graphically? It is an important table to show though, and I'm happy the authors have provided these data; it slightly messes with the flow of the figure.
We reformatted the table in order to make it less chaotic and improved the flow of the figure.
Minor points follow... Materials and Methods 1 -Microscopy: Would the authors be able to add a bit more detail on the light-source and ex/em filters for their widefield?
We have added these details in the materials and methods section.
2 -Immunofluorescence: Although 'paraformaldehyde' is commonly used to describe the solution, it describes the solid; if I'm right, it ought to be '4% formaldehyde' solution.
We have changed this to 'formaldehyde'.
3 Antibodies: I'd like to thank the authors for putting in all this information, especially on the dilutions. However, could the authors put this in table format? it's much easier to read this way.
We have placed this information in a table format.
I am also happy to see that the ss transcriptomics are available in GEO; however, regarding the other data availability, I'm not too enthused with the "data available on request". I understand if they aren't able to do this, but would it be possible for the authors to deposit as much data as they can (e.g. microscopy images, replicates etc.) on databases such as OMERO or something similar?
Via the DataVerseNL platform we made available the raw image data for Figure 2G, which contains many Rosettes over time (24, 48 and 72 hours after flushing them out from the microwells). The temporary link to access the data is: https://dataverse.nl/privateurl.xhtml?token=04c07d68-ad09-468d-8cb7-c5d5a1d491c2 The final DOI will be: https://doi.org/10.34894/GSOHSD
Reviewer 3 Advance Summary and Potential Significance to Field: The authors clarify conditions for maintaining mouse ESC, epiblast and primitive endoderm cells in coculture, and suggest some interactions between the cell types that are necessary for development of the blastocyst. Multiple labs are playing in the area, no one has made a blastocyst that develops in vivo, but it's an important problem, and multiple approaches need to be published.
Reviewer 3 Comments for the Author:
The authors have done a good job of responding to my questions/comments in my first report. I do not find, in the main text or SI, the figure in Item #5 of their response to referee 2, showing the effects of added FGF and its inhibition on the proportion of XEn. That should be put somewhere and mentioned in the main text.
We have included this data as a supplementary figure (Fig. S9) and referred to it in the main text.
The paper should be published.
Effect of the charge asymmetry and orbital angular momentum in the entrance channel on the hindrance to complete fusion
The hindrance to complete fusion is studied as a function of the charge asymmetry of the colliding nuclei and of the orbital angular momentum of the collision. The formation and evolution of a dinuclear system (DNS) in heavy-ion collisions at energies near the Coulomb barrier is calculated in the framework of the DNS model. The DNS evolution is considered as nucleon transfer between its fragments. The results show that the hindrance to the formation of a compound nucleus (CN) is related to the quasifission process, i.e. the breakup of the DNS into two fragments instead of reaching the equilibrated state of the CN. The role of the angular momentum in the charge (mass) distribution of the reaction products for a given mass asymmetry of the colliding nuclei is demonstrated. The results of this work are compared with the measured data for the quasifission yields in the 12C+204Pb and 48Ca+168Er reactions to show the role of the mass asymmetry of the entrance channel.
INTRODUCTION
One of the problems of modern physics is the synthesis of extremely heavy chemical elements. Therefore, the search for target-projectile pairs and the corresponding range of beam energies that lead to the largest possible evaporation-residue cross sections is an important aim of nuclear physics research.
Experimental and theoretical studies of the peculiarities of the processes occurring in heavy-ion collisions are important to establish the complete fusion mechanism. This can be done by analysis of the observed reaction products. The lack of a full understanding of the reaction mechanisms is related to the difficulty of unambiguously identifying the mechanisms responsible for the yield of the corresponding observed reaction products. There is a probability of overlap between the mass distributions of the contributions from two mechanisms: for example, the quasifission and fusion-fission mass distributions may overlap in the mass-asymmetric part of the yields [1,2].
Therefore, the analysis of the yields of the quasifission products allows us to study the nature of the hindrance to complete fusion. In the experiments on the CORSET setup of the Flerov Laboratory of Nuclear Reactions (JINR) [3], the fission-like binary products of the fusion-fission, quasifission and fast-fission processes are registered in coincidence by the two-arm time-of-flight spectrometer CORSET. Naturally, the products of these binary processes can arrive at the same detector with different probabilities. The mass and energy distributions of fission fragments were studied with the CORSET setup for the two reactions 12C+204Pb and 48Ca+168Er, which lead to the same CN 216Ra* [3]. The beam energies were fixed such that the excitation energy of the CN being formed was around 40 MeV in both cases. The analysis of the measured mass and energy distributions showed that the contribution from asymmetric fission in the first reaction is only around 1.5%, but it is about 30% in the second. The authors interpreted this dramatic increase in the asymmetric yield as a manifestation of the quasifission process related to the shell effects in the reaction with 48Ca. They stressed that the more mass-symmetric colliding nuclei in the entrance channel and the high angular momentum populated in the reaction with 48Ca clearly facilitate the evolution of the DNS toward the favored quasifission mass partition. The mass and charge distributions of the quasifission products may overlap with those of the fusion-fission and deep-inelastic collision products [4,5]. The latter process produces binary products with mass and charge numbers close to those of the projectile and target nuclei. The overlap of the mass and/or angular distributions of the quasifission and fusion-fission products causes ambiguity in the estimation of the experimental fusion cross sections, and it is difficult to separate them by experimental methods as products of the corresponding processes. It is therefore important to establish theoretically the contributions to the yield of the reaction products from the different mechanisms.
There are different theoretical models to describe the experimental data on fusion cross sections, but there is no unambiguous conclusion about the fusion mechanism. The models based on the DNS concept consider complete fusion as a multinucleon transfer from the light nucleus of the DNS to its heavy one, treated as a diffusion process [5][6][7][8][9][10]. Evaporation residue (ER) formation is directly related to the fusion mechanism, and ER products are registered rather unambiguously. Therefore, theoretical results are aimed at being close to the experimental data on evaporation residues. But the ER yield alone does not provide the cross section of CN formation in complete fusion: the CN can undergo fission into two (or three) fragments. The probability of fission increases with increasing charge number Z_CN, excitation energy E*_CN and angular momentum L_CN. The fusion mechanism is studied through the analysis of the dependence of the complete fusion cross section on the parameters of the reaction entrance channel, such as the charge (mass) asymmetry of the colliding nuclei, the orientation angles of their axial symmetry axes, the collision energy and the orbital angular momentum [6]. Consequently, the fusion cross section is determined as a sum σ_fus = σ_ER + σ_fiss. In reactions leading to the formation of actinides (Z > 92) the fission process is dominant over ER formation. The experimental determination of complete fusion may be ambiguous due to the presence, in the cross section σ_fiss of the fusion-fission products, of contributions from binary fragments formed in other reaction channels. One of them is the quasifission process, which is the breakup of the DNS before reaching the CN. Fig. 1 shows the reaction channels producing binary products which are observed in the experiments. The difference between deep-inelastic collisions and the quasifission process should be noted: quasifission is related to capture events in which full momentum transfer of the relative motion of the colliding nuclei takes place. The fusion and quasifission processes are two alternative outcomes of capture reactions: an increase of the quasifission yields causes a decrease of the complete fusion events, σ_cap = σ_fus + σ_qf. Therefore, the investigation of the quasifission yields allows us to study the change of the intensity of the complete fusion events as a function of the entrance-channel parameters. The hindrance to complete fusion is studied as a function of the charge asymmetry of the colliding nuclei and the orbital angular momentum of the collision [11,12].
The branching ratios between the realization probabilities of the different channels depend on the mass and charge numbers of the projectile and target nuclei and on the kinematic parameters of the collision [11]. In collisions with large values of the orbital angular momentum, L > L_gr, elastic and inelastic scattering take place. The capture of the colliding nuclei is a necessary condition for CN formation. But this stage competes with quasifission, which produces binary products (P'' and T''). The quasifission products may have characteristics similar to those of the fission products. The CN stability is determined by its excitation energy E*_CN and angular momentum L_CN, since the fission barrier B_f is a function of E*_CN and L_CN. If the CN being formed has an angular momentum L larger than the value L_f that causes the complete disappearance of the fission barrier B_f, the system undergoes fast fission, producing fragments (F_1 and F_2). This occurs only in collisions with L ≥ L_f.
The DNS that survives quasifission and fast fission is transformed into a rotating and heated CN. If the CN survives fission during cooling (the de-excitation cascade), an evaporation residue nucleus is formed. The contribution of quasifission relative to complete fusion, and the contribution of fusion-fission of the CN relative to its survival by neutron emission, increase for CN formation with large charge numbers Z > 92. Therefore, the cross section for the synthesis of superheavy elements (SHE) can reach very small values.
In the case of the collision of light nuclei with the target nucleus, capture can be considered as complete fusion, since the intrinsic barrier B*_fus causing a hindrance to complete fusion is small. But theoretical investigations of the yield of the binary reaction products observed in the mass-symmetric and mass-asymmetric entrance channels, as well as studies of the hindrance to the complete fusion leading to the formation of superheavy elements, show that there is a large difference between the capture and complete fusion cross sections in the case of collisions of massive nuclei. The hindrance to complete fusion in reactions with massive nuclei is explained by the presence of an internal barrier B*_fus associated with internal structural effects in the DNS fragments [6,12]. The value of B*_fus depends on the characteristics of the projectile and target nuclei in the entrance channel and on the orbital angular momentum. During the evolution of the resulting DNS, its fragments may separate relatively early, before reaching the CN state.
In Section 2, the basic physical quantities, such as the potential energy surface (PES), the intrinsic fusion barrier, the quasifission barrier and the evolution of the DNS charge (mass) asymmetry, are described. A discussion of the results of this work and a comparison with the corresponding experimental data are presented in Section 3.
I. THEORETICAL FORMALISM
In this work, the range of values of the orbital angular momentum leading to capture is determined by solving the dynamical equations of motion for the relative distance R and the orbital angular momentum L [6,13,14]. The contributions coming from the breakup (quasifission) of the DNS formed at the different angular momenta L = ℓℏ are included by summing the partial contributions over ℓ up to ℓ_d, weighted by P_cap(E_c.m., ℓ, α_i) and Y_Z(E_c.m., ℓ), where P_cap(E_c.m., ℓ, α_i) is the capture probability for the colliding nuclei with orientation angles α_i (i = 1, 2) of the axial symmetry axes relative to the beam direction (for deformed nuclei, see Fig. 17 in the Appendix); Y_Z(E_c.m., ℓ) is the probability of the yield of the fragment with the charge number Z in the collision with energy E_c.m. and orbital angular momentum ℓ; and ℓ_d is the maximum value of the orbital angular momentum leading to the capture (full momentum transfer of the relative motion) process. It is calculated by the solution of the dynamical equations for the relative motion and the angular momentum ℓ [6,13]. If the shape of the nuclei in their ground state is spherical, during the interaction they are deformed due to surface vibrations [15]. For the excited states, the quadrupole (2+) and octupole (3−) deformation parameters of the DNS fragments (i = 1, 2) are assumed to be equal to the amplitudes of the corresponding vibrations; they are obtained from the β_2 [16] and β_3 [17] compilations. The capture probability depends on the beam energy, the size of the potential well of the nucleus-nucleus potential, and the friction coefficients of the radial motion and the angular momentum. The size of the potential well and the friction coefficients determine the number of partial waves (L = ℓℏ) leading to the capture of the projectile nucleus by the target nucleus. The size of the potential well is sensitive to the charge and mass asymmetry of the colliding nuclei. This fact was demonstrated in Ref. [14] by a comparison of the nucleus-nucleus potentials calculated for the 36S+206Pb and 34S+208Pb reactions. The nucleus-nucleus interaction potential, the radial and tangential friction coefficients and the inertia coefficients are calculated in the framework of the DNS model [6,13,14].
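As a rough illustration of how such a partial-wave summation can be organized numerically, the sketch below combines a capture probability P_cap(E_c.m., ℓ) and a fragment-yield probability Y_Z(E_c.m., ℓ) into a charge-resolved cross section. The geometric prefactor πƛ²(2ℓ+1) and the placeholder probability functions are assumptions made only for illustration; they are not the dynamically calculated quantities of Refs. [6,13].

```python
import numpy as np

HBARC = 197.327  # hbar*c in MeV*fm
AMU = 931.494    # nucleon mass in MeV/c^2

def sigma_Z(e_cm, mu_amu, l_d, p_cap, y_z, z):
    """Charge-resolved cross section (mb) as a partial-wave sum:
    sigma_Z = sum_{l=0..l_d} pi*lambdabar^2*(2l+1)*P_cap(E,l)*Y_Z(E,l)."""
    lambdabar2 = HBARC**2 / (2.0 * mu_amu * AMU * e_cm)   # reduced wavelength squared, fm^2
    total = sum(np.pi * lambdabar2 * (2 * l + 1) * p_cap(e_cm, l) * y_z(e_cm, l, z)
                for l in range(l_d + 1))
    return total * 10.0  # 1 fm^2 = 10 mb

# Placeholder probabilities standing in for the dynamically calculated
# P_cap and Y_Z of the DNS model (purely illustrative):
p_cap_demo = lambda e, l: 1.0 / (1.0 + np.exp((l - 40) / 3.0))  # smooth cutoff near l_d
y_z_demo = lambda e, l, z: 0.05                                  # flat toy yield probability

print(sigma_Z(e_cm=130.0, mu_amu=37.3, l_d=80, p_cap=p_cap_demo, y_z=y_z_demo, z=20))
```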
In Fig. 2, the partial cross sections σ^(ℓ)_cap calculated for the capture process in the 12C+204Pb and 48Ca+168Er reactions at the energies E_Lab = 73 MeV and 153 MeV, respectively, are compared. These energies correspond to CN excitation energies E*_CN = 40.4 MeV and 39.6 MeV for the corresponding reactions. The critical values L_cr of the angular momentum estimated by the authors of Ref. [3] for the 12C+204Pb and 48Ca+168Er reactions were equal to 31ℏ and 54ℏ, respectively. These values of L_cr are close to the orbital angular momenta corresponding to the maximal values of the partial capture cross sections presented in Fig. 2. The values of L_cr obtained in Ref. [3] correspond to a triangular shape of the partial capture cross section, with a sharp cutoff at L = L_cr. The slower decrease of the theoretical curves of the partial capture cross section at large values of L in this work is related to the averaging of the results obtained for collisions with different orientation angles. It is seen that the partial cross section for the 12C+204Pb reaction is considerably larger than that for the 48Ca+168Er reaction. The larger values of σ^(ℓ)_cap for the former reaction are explained by its small reduced mass, µ = A_P A_T/(A_P + A_T) = 11.3 nucleon masses, while it is equal to 37.3 nucleon masses for the latter reaction. Here A_P and A_T are the mass numbers of the projectile and target nuclei, respectively. The further evolution of the DNS is determined by the landscape of the potential energy surface calculated for the considered reactions.
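The reduced-mass values quoted above can be checked directly; a minimal sketch (values in nucleon-mass units) follows.

```python
def reduced_mass(a_p, a_t):
    """Entrance-channel reduced mass in nucleon-mass units, A_P*A_T/(A_P + A_T)."""
    return a_p * a_t / (a_p + a_t)

# Both reactions lead to the same compound nucleus 216Ra*:
print(round(reduced_mass(12, 204), 1))  # 12C + 204Pb -> 11.3
print(round(reduced_mass(48, 168), 1))  # 48Ca + 168Er -> 37.3
```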
A. Potential energy surface
In the DNS approach, the PES plays a crucial role in the theoretical study of the competition between the complete fusion and quasifission processes, which occur due to the multinucleon transfer between the fragments of the DNS formed at capture. The PES represents the total energy of the DNS as a function of its charge asymmetry Z and of the relative distance R between the centres of mass of its interacting fragments. The landscape of the PES determines the fusion probability and the yields of the quasifission products as a function of the beam energy and the initial orbital angular momentum. The PES is calculated as a sum of the reaction energy balance Q_gg and the nucleus-nucleus interaction potential V(Z, A, L, R): U(Z, A, L, R) = Q_gg + V(Z, A, L, R), where Q_gg = B_1 + B_2 − B_CN is the energy balance of the reaction; B_1, B_2 and B_CN are the binding energies of the interacting nuclei and of the CN, taken from the tables in Refs. [18,19]. The interaction potential V is a sum of the Coulomb V_Coul, nuclear V_nuc and rotational V_rot parts: V = V_Coul + V_nuc + V_rot. The expressions for these potentials are presented in Appendix A. Here Z_c = Z_CN − Z and A_c = A_CN − A are introduced to mark the charge and mass numbers of the conjugate nucleus, respectively; A_CN = A_P + A_T and Z_CN = Z_P + Z_T, where Z_P and Z_T are the charge numbers of the projectile and target nuclei, respectively.
At large distances, the electrostatic repulsion between the positively charged nuclei dominates the PES. The potential barrier appears at distances R ≃ R_P + R_T + 2 fm due to the competition between the nuclear and Coulomb forces. The driving potential is determined from the PES by taking its values at the relative distance R_m corresponding to the minimum of the potential well of the nucleus-nucleus interaction, for a wide range of charge numbers of the DNS fragments [6]: U_dr(Z, A, L) = U(Z, A, L, R = R_m). The competition between complete fusion and quasifission for a given charge and mass number of the DNS light fragment is determined by the heights of the intrinsic fusion barrier B*_fus and the quasifission barrier B_qf [6]. Their values depend on the angular momentum, since the PES is a function of L. As the nuclei approach each other, the PES changes shape, becoming more complex and exhibiting multiple minima and maxima as a function of the charge asymmetry, which changes the binding energies B_1 and B_2 of the DNS fragments. The PES U calculated for the 48Ca+168Er reaction, the driving potential U_dr and the nucleus-nucleus interaction V extracted from the PES are presented in Fig. 3. The arrow (a) corresponds to the capture trajectory and the arrow (b) shows the direction to complete fusion. The arrows (c) and (d) correspond to possible quasifission trajectories. After capture, the DNS can move toward CN formation along the charge asymmetry axis Z in the direction of decreasing Z (Z → 0), or toward the breakup channel along the coordinate R connecting the centres of the fragments. The minimum values of the PES along the charge asymmetry axis are observed when the proton and/or neutron numbers of the DNS fragments are close to the magic numbers. The position of the entrance channel of the 12C+204Pb reaction is favorable to complete fusion, since the intrinsic barrier causing hindrance is very small, while it is considerably larger for the 48Ca+168Er reaction. Fig. 3(b) shows the determination of the intrinsic fusion barrier B*_fus from the curve of the driving potential, as the difference between the value of the driving potential at Z = 20 and its maximum value in the direction of complete fusion. The dependence of the driving potential on the angular momentum is presented in Fig. 4. The increase of the angular momentum leads to an increase of B*_fus up to large values for the very charge asymmetric configurations (small values of Z) of the DNS. This phenomenon is explained by the strong increase of the DNS rotational energy.
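As a purely schematic numerical illustration of how R_m, U_dr(Z) and the well depth can be extracted from a nucleus-nucleus potential, the sketch below scans a toy potential in R for a given charge split and locates the capture pocket. The 1.16·A^(1/3) radii, the Woods-Saxon-like depth and diffuseness, and the neglect of deformation, rotation and Q_gg are placeholder assumptions rather than the potentials of Refs. [6,13,14]; the sketch only reproduces the qualitative trend that the pocket is much deeper for 12C+204Pb than for 48Ca+168Er.

```python
import numpy as np

E2 = 1.44  # e^2 in MeV*fm

def radius(a):
    """Crude sharp nuclear radius in fm (illustrative parameterization)."""
    return 1.16 * a ** (1.0 / 3.0)

def nn_potential(z1, a1, z2, a2, r):
    """Schematic nucleus-nucleus potential: point Coulomb plus a
    Woods-Saxon-like attraction (depth and diffuseness are placeholders)."""
    v_coul = z1 * z2 * E2 / r
    v_nuc = -50.0 / (1.0 + np.exp((r - (radius(a1) + radius(a2))) / 0.65))
    return v_coul + v_nuc

def pocket(z1, a1, z2, a2, qgg=0.0):
    """Locate the potential-well minimum R_m and return (R_m, U_dr, well depth):
    U_dr = Qgg + V(R_m) is one point of the driving potential, and the well depth
    (outer barrier top minus well minimum) plays the role of B_qf."""
    r0 = radius(a1) + radius(a2)
    r = np.linspace(0.75 * r0, r0 + 8.0, 800)
    v = nn_potential(z1, a1, z2, a2, r)
    idx = np.where((v[1:-1] < v[:-2]) & (v[1:-1] <= v[2:]))[0]  # interior local minima
    if idx.size == 0:
        return None                                # no capture pocket for this split
    i_min = idx[0] + 1
    i_bar = i_min + np.argmax(v[i_min:])           # outer (quasifission) barrier top
    return r[i_min], qgg + v[i_min], v[i_bar] - v[i_min]

# Entrance-channel configurations of the two reactions (Qgg set to zero here):
print(pocket(6, 12, 82, 204))    # 12C + 204Pb: deep pocket -> large B_qf
print(pocket(20, 48, 68, 168))   # 48Ca + 168Er: shallow pocket -> small B_qf
```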
The rotational energy of a DNS with a light fragment of small mass number increases faster due to the strong decrease of the moment of inertia. The moment of inertia used in the calculation of the rotational energy is J_DNS = µ R_m^2 + J_1 + J_2, and the rotational energy is E_rot(L) = ℏ^2 L(L + 1)/(2 J_DNS), where µ = m A A_c/(A + A_c) is the reduced mass of the DNS; J_1 and J_2 are the moments of inertia of the interacting nuclei, determined by their small and large semi-axes a_i and b_i; and m is the nucleon mass [15].
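For concreteness, the rigid-body estimate below evaluates E_rot for the two entrance channels at L = 40ℏ, treating both fragments as touching spheres. The (1/5) m A (a² + b²) rigid-rotor form, the radius constant and the spherical shapes are illustrative assumptions, not the actual deformed-fragment inertias used in the calculations, but they reproduce the trend that the light-fragment system acquires the larger rotational energy.

```python
M_N = 931.494    # nucleon mass in MeV/c^2
HBARC = 197.327  # MeV*fm

def frag_inertia(a_mass, a_axis, b_axis):
    """Rigid-body moment of inertia of an ellipsoidal fragment, (1/5) m A (a^2 + b^2)."""
    return 0.2 * M_N * a_mass * (a_axis**2 + b_axis**2)

def rot_energy(l, a1, a2, r_m, axes1, axes2):
    """E_rot = hbar^2 L(L+1) / (2 J_DNS), with J_DNS = mu*R_m^2 + J_1 + J_2 (result in MeV)."""
    mu = M_N * a1 * a2 / (a1 + a2)
    j_dns = mu * r_m**2 + frag_inertia(a1, *axes1) + frag_inertia(a2, *axes2)
    return HBARC**2 * l * (l + 1) / (2.0 * j_dns)

def sphere(a):
    """Spherical fragment: both semi-axes equal to 1.16*A^(1/3) fm."""
    r = 1.16 * a ** (1.0 / 3.0)
    return (r, r)

# Touching configuration, L = 40 hbar:
for a1, a2 in [(12, 204), (48, 168)]:
    r_m = sphere(a1)[0] + sphere(a2)[0] + 0.5
    print(a1, a2, round(rot_energy(40, a1, a2, r_m, sphere(a1), sphere(a2)), 2), "MeV")
```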
The fast increase of the rotational energy of a DNS with a light fragment is responsible for the incomplete fusion in reactions with light nuclei [15]. Therefore, in both reactions, when L increases, the probability of fusion decreases. Another important physical quantity of the model is the quasifission barrier B_qf (see Fig. 5), which determines the DNS lifetime. Its value is equal to the depth of the potential well of the nucleus-nucleus interaction between the DNS fragments. For a mass asymmetric system, the height of the intrinsic fusion barrier B*_fus is less than the height of the quasifission barrier B_qf; this condition is favorable for complete fusion. It is seen from Fig. 5 that the value of B_qf for the 12C+204Pb (Z = 6) system is considerably larger than that for the 48Ca+168Er (Z = 20) reaction. For the latter reaction, the height of the fusion barrier B*_fus is greater than the height of the quasifission barrier B_qf. Therefore, the probability of complete fusion for the 12C+204Pb reaction is larger than that for the 48Ca+168Er reaction.
The excitation energy E*_Z of the DNS, for a given beam energy, is found by taking into account the change of the intrinsic energy with the change of the number of nucleons: E*_Z(E_c.m., L) = E_c.m. − V_min(Z, A, R_m, L, {α_i}) + ΔQ_gg(Z, A), where ΔQ_gg(Z, A) is the change of the intrinsic energy of the DNS during its evolution from the initial configuration (Z = Z_P and A = A_P) to the considered state (Z, A), and V_min(Z, A, R_m, L, {α_i}) is the minimum value of the potential well of the nucleus-nucleus potential between the DNS fragments in the latter state [6,13]. Fig. 6 shows the dependence of the excitation energy E*_DNS of the entrance-channel configuration (Z = Z_P and A = A_P) on the collision energy E_c.m. and the angular momentum L, calculated for the 48Ca+168Er reaction.
B. Charge and mass distribution of the DNS fragments and binary yields
The full momentum transfer takes place at the capture of the projectile by the target nucleus, and the DNS is formed with the probability P_cap, which is calculated by the solution of the dynamical equations for the collision trajectory at the given values of E_c.m. and orbital angular momentum L [6,13]. The atomic nucleus consists of neutrons and protons; consequently, due to the nucleon exchange between the DNS nuclei, their mass and charge distributions change as a function of the time t after capture has occurred. The evolution of the DNS is found by solving the transport master equation [15]:

∂D_Z(t)/∂t = Δ^(+)_{Z−1} D_{Z−1}(t) + Δ^(−)_{Z+1} D_{Z+1}(t) − (Δ^(+)_Z + Δ^(−)_Z) D_Z(t),

where D_Z(t) represents the probability of finding the DNS, at the moment of time t and for the given E*_Z and L, in the state where the DNS fragments have the charge numbers Z and Z_CN − Z. The coefficients Δ^(+)_Z (Δ^(−)_Z) are the transport coefficients, calculated microscopically, for the case when one proton is added to (subtracted from) the fragment of the binary system with the charge number Z. The proton and neutron systems have their own energy schemes in the individual nuclei. In turn, these schemes depend on the neutron number N and the charge number Z of the nuclei. This means that the quantities Δ^(±)_Z are related to the energy schemes of the protons (they fill the energy states). The transport coefficients are calculated from the matrix elements g_PT, which represent the exchange of nucleons between the fragments "P" and "T", and from the occupation numbers n^Z_i(Θ_i) and energies ε^Z_i of the single-particle states in fragment "i" of the DNS with the charge number Z; Θ_i is the effective temperature of fragment i (i = P, T), obtained with the level-density parameter a = A_CN/12 MeV^(-1).

FIG. 8. The same as in Fig. 7, but for L = 40ℏ.

The transport master equation is solved up to the reaction time t = k_max Δt, with the time step Δt = 10^(-22) s. The time t is chosen in such a way that by this moment the DNS has passed to complete fusion or quasifission. The region Z ≥ 2 represents the contribution of D_Z to incomplete fusion. The yield of the fragments formed in the reaction is proportional to the width Λ^qf_Z of the decay through the quasifission barrier B_qf(Z, A, L), accumulated over the time during which the DNS occupies the charge configuration Z with probability D_Z. The width Λ^qf_Z is calculated from a Kramers-type expression in terms of the frequencies ω_m and ω_qf of the parabolas used to approximate the potential well and the Coulomb barrier of the nucleus-nucleus interaction, the friction parameter γ = 8 · 10^(-22) MeV s^(-1), and the effective temperature T_Z of the DNS with the charge asymmetry Z (see Refs. [11,20] for details).
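A minimal numerical sketch of this scheme is given below, assuming constant transport coefficients and a schematic B_qf(Z) and T_Z: the master equation is stepped forward with an explicit Euler rule on a Δt = 10^(-22) s grid, a Kramers-type width (built from B_qf, T_Z and illustrative ω_m, ω_qf and γ values) drains probability into quasifission, and the fragment yields are accumulated as Y_Z ∝ Λ^qf_Z D_Z(t) Δt. The microscopic calculation of the transport coefficients from single-particle states is not reproduced, so all numbers are illustrative only.

```python
import numpy as np

HBAR = 6.582e-22  # MeV*s

def kramers_width(b_qf, t_z, omega_m=1.0, omega_qf=1.0, gamma=2.0):
    """Kramers-type quasifission decay rate in 1/s (omegas and gamma in MeV, illustrative)."""
    prefac = (omega_m / (2.0 * np.pi * omega_qf)) * (np.sqrt(gamma**2 / 4.0 + omega_qf**2) - gamma / 2.0)
    return prefac * np.exp(-b_qf / t_z) / HBAR

def evolve(z_p, z_cn, d_plus, d_minus, b_qf, t_z, n_steps=3000, dt=1e-22):
    """Explicit Euler solution of the charge-transport master equation:
    dD_Z/dt = d+_{Z-1} D_{Z-1} + d-_{Z+1} D_{Z+1} - (d+_Z + d-_Z + Lqf_Z) D_Z,
    with quasifission yields accumulated as Y_Z += Lqf_Z * D_Z * dt."""
    nz = z_cn + 1
    d = np.zeros(nz)
    d[z_p] = 1.0                                   # initial condition D_{Z_P} = 1
    y = np.zeros(nz)
    lam = np.array([kramers_width(b_qf(z), t_z(z)) for z in range(nz)])
    for _ in range(n_steps):
        gain = np.zeros(nz)
        gain[1:] += d_plus[:-1] * d[:-1]           # proton transfer Z-1 -> Z
        gain[:-1] += d_minus[1:] * d[1:]           # proton transfer Z+1 -> Z
        loss = (d_plus + d_minus + lam) * d
        y += lam * d * dt                          # quasifission yield per charge split
        d += (gain - loss) * dt
    return d, y

# Constant placeholder transport coefficients and a schematic B_qf(Z), T_Z:
Z_CN = 88                         # 216Ra compound nucleus
dp = np.full(Z_CN + 1, 1.0e21)    # 1/s
dm = np.full(Z_CN + 1, 0.8e21)    # 1/s
d_final, y_qf = evolve(z_p=20, z_cn=Z_CN, d_plus=dp, d_minus=dm,
                       b_qf=lambda z: 2.0 + 0.05 * abs(z - 20), t_z=lambda z: 1.5)
print(int(np.argmax(y_qf)), float(y_qf.max()))
```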
Eq. (8) has been solved with the initial condition D_Z = 1 at Z = Z_P (Z_T) and A = A_P (A_T). The charge distributions D_Z of the light fragment of the DNS formed in the 12C+204Pb reaction, for collisions with L = 0ℏ and L = 40ℏ at the beam energy E_Lab = 73 MeV, are presented in Figs. 7 and 8, respectively. The presented results have been obtained for the orientation angles of the "P" and "T" nuclei α_1 = 45° and α_2 = 30°, respectively. It is seen that the intensity of the nucleon transfer from 12C to 204Pb decreases with the increase of L. The yields with Z < 2 represent the contribution leading to complete fusion. Complete fusion occurs faster, since the charge distribution D_Z in the region Z > 2 is very weak.
It is seen from Figs. 7 and 8 that at the beginning the charge distribution is concentrated around Z_P = 6 in the light fragment (around Z_T = 82 for the conjugate fragment, which is not shown). The fusion barrier B*_fus(Z = 6) is small for the entrance channel of the 12C+204Pb reaction (see Fig. 2 for the PES and the driving potential U_dr). Moreover, the quasifission barrier B_qf is large for the charge asymmetric configurations of the DNS (see Fig. 5). These favorable conditions cause the motion of the charge distribution towards Z = 2 over time. Consequently, complete fusion takes place with a larger probability than the quasifission process for this strongly charge asymmetric reaction. For the 48Ca+168Er reaction, the charge distribution is concentrated around Z_P = 20 (Z_T = 68 for the conjugate fragment) at the beginning of the DNS evolution, and it spreads more widely, including towards larger charge numbers Z > 20. The presence of a hindrance to complete fusion in the case of the 48Ca+168Er reaction, in comparison with the 12C+204Pb reaction, is clearly seen from these figures. Therefore, complete fusion occurs more slowly in the 48Ca+168Er reaction for both values of L. Another reason for this observation is that the quasifission barrier B_qf(Z = 20) in the entrance channel of the 48Ca+168Er reaction is smaller than that for the 12C+204Pb reaction (see Fig. 5). Small values of B_qf are favorable for quasifission.
The yield Y_Z depends on the DNS angular momentum L, which determines the heights of the intrinsic fusion barrier B*_fus and the quasifission barrier B_qf. It can be seen from Fig. 5 that the quasifission barrier changes as a function of L: at large values of L the DNS becomes less stable against breakup into two quasifission fragments. As mentioned above, B*_fus increases with L (see Fig. 2). Therefore, the quasifission yields increase with the increase of L.
Comparison of Figs. 11 and 13, which show the calculated yields of binary fragments in the 12C+204Pb reaction, reveals a strong increase of Y_Z for collisions with L = 40ℏ in comparison with the case of L = 0ℏ. A similar increase of the binary yields is seen from the comparison of Figs. 12 and 14, which show the yields of binary fragments calculated for 48Ca+168Er collisions with orbital angular momentum L = 0ℏ and L = 40ℏ, respectively. The scales of Y_Z presented on the right side of these figures show that the absolute values of the quasifission yields are small; this means that complete fusion is the main reaction channel for head-on collisions. The analysis of the yield products for the 48Ca+168Er reaction in Fig. 14 and the 12C+204Pb reaction in Fig. 13 shows that the main emitted products in collisions with large angular momentum are α particles. This process is observed as incomplete fusion according to the new mechanism verified in Ref. [15]. Therefore, the mechanism of incomplete fusion can be considered as the yield of very mass-asymmetric quasifission products, i.e. the breakup of the DNS on its way to complete fusion due to the increase of centrifugal forces when an α-particle-like light fragment is reached during multinucleon transfer from the projectile nucleus to the target nucleus.
II. DESCRIPTION OF THE EXPERIMENTAL DATA
Our calculations show that the charge distributions between the fragments of the DNS, and of the products formed at its breakup, depend strongly on the orbital angular momentum. The increase of the DNS rotational energy causes an increase of the intrinsic fusion barrier B*_fus and a decrease of the quasifission barrier B_qf and of the DNS excitation energy E*_Z. These quantities, together with the nuclear structure of the colliding nuclei, determine the intensity of nucleon exchange and the direction of the nucleon flow, since the transition coefficients depend on the single-particle energies of the nucleons. Therefore, in this work, the shell effects of the nuclear structure of the fragments are pronounced in the formation and yield of the reaction products. These conclusions have been obtained from the dependence of the evolution of the charge distributions between the DNS fragments in the 12C+204Pb and 48Ca+168Er reactions on the charge asymmetry and orbital angular momentum in the entrance channel of the collision.
It is important to verify the validity of this complete-fusion formalism by describing the experimental data related to the yield of the quasifission products.
In Figs. 15 and 16, the mass distributions of the quasifission products of the 48Ca+168Er (at E_Lab = 194 MeV) and 12C+204Pb (at E_Lab = 73 MeV) reactions, respectively, calculated in this work are compared with the corresponding experimental data obtained from Ref. [3]. The experimental results are the extracted asymmetric components obtained as the difference Y_QF = Y_exp − Y_FF, where Y_exp is the experimental yield of the fission-like binary products and Y_FF is the (Gaussian) fusion–fission yield. The curve of Y_FF has been extracted by describing the experimental yield Y_exp as a sum of three Gaussian functions: two small functions describing the mass-asymmetric parts and one, Y_FF, describing the mass-symmetric part [3]. This way of separating the quasifission and fusion–fission products assumes that there is no overlap between quasifission and fusion–fission products in the mass-symmetric region of the mass distribution of the binary fragments.
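A short sketch of this decomposition is given below: the measured yield is fitted by three Gaussians and the fusion–fission component is subtracted to obtain the asymmetric (quasifission) part. The input file name and the initial guesses are hypothetical placeholders, not data from Ref. [3].

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gauss(A, a1, m1, s1, a2, m2, s2, aff, sff):
    # Two mass-asymmetric Gaussians plus a symmetric fusion-fission component
    # centred at A_CN / 2 = 108 for the 216Ra compound nucleus.
    return (a1 * np.exp(-0.5 * ((A - m1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((A - m2) / s2) ** 2)
            + aff * np.exp(-0.5 * ((A - 108.0) / sff) ** 2))

A = np.arange(50, 167)                 # fragment mass numbers
Y_exp = np.loadtxt("yield_exp.dat")    # hypothetical file, one yield per mass in A

p0 = [1.0, 80.0, 8.0, 1.0, 136.0, 8.0, 1.0, 15.0]   # illustrative initial guesses
popt, _ = curve_fit(three_gauss, A, Y_exp, p0=p0)

Y_FF = popt[6] * np.exp(-0.5 * ((A - 108.0) / popt[7]) ** 2)
Y_quasifission = Y_exp - Y_FF          # asymmetric component, as in Eq. (14)
```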
However, our calculations show that the mass distribution of the quasifission products can reach the mass-symmetric region. Its contribution to the mass-symmetric region is very small in the case of the very mass-asymmetric 12C+204Pb reaction, even at large values of L. The yield of the quasifission products with mass numbers A > 96 is significant, as can be seen in Fig. 15. The theoretical results are in good agreement with the experimental data for the range of mass numbers A = 66–96 (120–150). The symmetric part, A = 97–119, of the mass distribution of the binary products has been removed from the experimental data by Eq. (14), while the curve of the theoretical results shows that a quasifission contribution is present in the mass-symmetric region.
III. CONCLUSIONS
In conclusion, we have theoretically studied the charge and mass distributions of the quasifission fragments for two reactions — 12C+204Pb and 48Ca+168Er — that lead to the same compound nucleus 216Ra*, as a function of the orbital angular momentum of the collision, at beam energies corresponding to a CN excitation energy of around 40 MeV. The experimentally observed yield of the asymmetric fission was 1.5% in the former reaction, whereas it was 30% in the latter case. This difference was interpreted as a large contribution of the quasifission products in the 48Ca+168Er reaction. The application of the DNS model has allowed us to establish the nature of the hindrance to complete fusion by comparing the partial capture cross sections, the charge (D_Z) and mass distributions of the DNS fragments before breakup, and the quasifission (Y_Z) products obtained for the 12C+204Pb and 48Ca+168Er reactions. The theoretical study of the evolution of the charge distributions D_Z of the DNS fragments and of the quasifission products Y_Z shows a strong influence of the orbital angular momentum of the collision (L) on the hindrance of the complete fusion process. The difference in the hindrance observed in these reactions is related to the intrinsic fusion barrier B*_fus determined by the driving potential calculated for the reactions leading to the formation of the compound nucleus 216Ra*.
The partial capture cross sections calculated for the 12 C+ 204 Pb and 48 Ca+ 168 Er reactions are very different and their maximum values are close to the critical angular momentum values presented in Ref. [3].
The comparisons of the partial capture cross sections and charge (mass) distributions of the quasifission fragments calculated in this work for these two 12 C+ 204 Pb and 48 Ca+ 168 Er reactions show that the role of the entrance channel characteristics is very strong.This result confirms the conclusion of the authors of Ref. [3].
APPENDIX
The Coulomb interaction of deformed nuclei can be calculated according to the following expression, where α′_1 = α_1 + θ, α′_2 = π − (α_2 + θ), sin θ = |L|/(µ Ṙ R); Z_i, β_2^(i) and α′_i are the charge number, the quadrupole deformation parameter, and the angle between the line connecting the centres of mass of the nuclei (see Fig. 17) and the symmetry axis of fragment i (i = 1, 2), respectively. Here, R_0i = r_0 A_i^{1/3}, r_0 = 1.16 fm, e² = 1.44 MeV·fm, and P_2(cos α′_i) is the second-order Legendre polynomial; µ = M_1 M_2/(M_1 + M_2) is the reduced mass of the colliding system consisting of the projectile and target with masses M_1 and M_2, respectively.
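A commonly used Wong-type expansion that is consistent with the quantities defined above — given here only as an indicative form, not necessarily the exact expression of this work — reads

V_C(R) = Z_1 Z_2 e²/R + (Z_1 Z_2 e²/R³) Σ_{i=1,2} [ sqrt(9/20π) R_0i² β_2^(i) P_2(cos α′_i) + (3/7π) R_0i² (β_2^(i) P_2(cos α′_i))² ],

i.e. the monopole Coulomb term plus the leading quadrupole-deformation corrections of both fragments.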
The nuclear part of the nucleus–nucleus potential is calculated using a folding procedure between the effective nucleon–nucleon forces f_eff[ρ(x)] suggested by Migdal and the nucleon densities of the projectile and target nuclei. The effective values of the constants f_in and f_ex were fixed from the description of the interaction of the Fermi system by the Green function method, and therefore the effect of the exchange term of the nucleon–nucleon interaction is taken into account. The densities of the interacting nuclei are taken in the Woods–Saxon form,

ρ_i(r) = ρ_0 {1 + exp[ (|r − R_i| − R_0i[1 + β_2^(i) Y_20(θ_i)]) / a_0 ]}^{-1},   (18)

where ρ_0 = 0.17 fm^-3 and a_0 = 0.54 fm, R_i are the centre-of-mass coordinates and R_0i are the half-density radii of the interacting nuclei.
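The two ingredients named here can be sketched directly from the quoted constants. The snippet below evaluates the deformed Woods–Saxon density and the Migdal effective force; the folding integral itself is not reproduced, and the β_2 value used in the example is only illustrative.

```python
import numpy as np

RHO0 = 0.17      # fm^-3, saturation (central) density
A0 = 0.54        # fm, diffuseness
R0_COEF = 1.16   # fm, radius parameter r0
C0 = 300.0       # MeV fm^3
F_IN, F_EX = 0.09, -2.59

def Y20(theta):
    """Spherical harmonic Y_20 (real form)."""
    return np.sqrt(5.0 / (16.0 * np.pi)) * (3.0 * np.cos(theta) ** 2 - 1.0)

def ws_density(r, theta, A, beta2):
    """Axially deformed Woods-Saxon density, r measured from the nuclear centre."""
    r_half = R0_COEF * A ** (1.0 / 3.0) * (1.0 + beta2 * Y20(theta))
    return RHO0 / (1.0 + np.exp((r - r_half) / A0))

def migdal_force(rho):
    """Density-dependent Migdal force f_eff(rho), with rho(0) taken as RHO0."""
    return C0 * (F_IN + (F_EX - F_IN) * (RHO0 - rho) / RHO0)

# Example: density of a 168Er-like fragment (illustrative beta2 = 0.3) on its symmetry axis
r = np.linspace(0.0, 12.0, 121)
rho = ws_density(r, 0.0, 168, 0.30)
print(rho[:5], migdal_force(rho[:5]))
```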
FIG. 1. (Color online) The sketch of the sequences of nuclear reaction channels which compete at different stages of the interaction processes of the dinuclear system fragments.
FIG. 2. Comparison of the capture cross sections calculated in this work for the 12C+204Pb and 48Ca+168Er reactions.
FIG. 3. (Color online) Potential energy surface calculated for the DNS formed in the 48Ca+168Er reaction as a function of the charge number (Z) of its fragment and the relative distance (R) between the centres of mass of the fragments: arrow (a) shows the capture trajectory corresponding to the decrease of the kinetic energy of the relative motion; arrow (b) is the direction of complete fusion due to nucleon transfer from the light fragment of the DNS to the heavy one; arrows (c) and (d) are the quasifission trajectories leading to the formation of products with different mass numbers (a). The driving potential of the DNS formed in the 48Ca+168Er reaction as a function of the charge number (Z) of its light fragment; the intrinsic fusion barrier B*_fus is determined for the entrance channel Z = 20 (b). The quasifission barrier B_qf of the entrance channel Z = 20, calculated as the depth of the potential well of the nucleus–nucleus interaction (c).
FIG. 4. The dependence of the driving potential on the angular momentum L. The results are obtained for the collision with the orientation angles α1 = 60° and α2 = 45°.
FIG. 5.
FIG. 6. The excitation energy E*_DNS(E_c.m., L) of the DNS formed in the 48Ca+168Er reaction for the entrance channel (Z = Z_P and A = A_P) as a function of the centre-of-mass energy E_c.m. and the orbital angular momentum L.
FIG. 7. Evolution of the charge distribution D_Z for the projectile-like fragments in the 12C+204Pb reaction at E_Lab = 73 MeV and L = 0ℏ. The results have been obtained for the orientation angles α1 = 45° and α2 = 30°.
FIG. 9. Evolution of the charge distribution D_Z for the projectile-like fragments in the 48Ca+168Er reaction at E_Lab = 194 MeV and L = 0ℏ. The results have been obtained for the orientation angles α1 = 45° and α2 = 30°.
FIG. 10.
FIG. 11. Mass distribution of the yield of quasifission products (Y_Z) for the 12C+204Pb reaction calculated for the collision with the orientation angles α1 = 45° and α2 = 30° at the beam energy E_Lab = 73 MeV and angular momentum L = 0.
FIG. 12.
FIG. 15. Comparison of the theoretical yield of the quasifission products of the 48Ca+168Er reaction (at E_Lab = 195 MeV) formed by the DNS mechanism with the corresponding measured experimental data obtained from Ref. [3].
FIG. 16. Comparison of the theoretical yield of the quasifission products of the 12C+204Pb reaction (at E_Lab = 73 MeV) formed by the DNS mechanism with the corresponding measured experimental data obtained from Ref. [3].
FIG. 17. The coordinate systems and angles used for the description of the initial orientations of the projectile and target nuclei. The beam direction is opposite to the OZ axis.

V_nucl(R) = ∫ ρ_1(r) f_eff[ρ(r)] ρ_2(R − r) d³r,   (16)

f_eff(ρ) = C_0 { f_in + (f_ex − f_in) [ρ(0) − ρ(r)]/ρ(0) },   (17)

where C_0 = 300 MeV·fm³ and f_in = 0.09, f_ex = −2.59 are the constants of the effective nucleon–nucleon interaction.

| 8,876 | 2024-03-13T00:00:00.000 | [ "Physics" ] |
Mitochondrial TrxR2 regulates metabolism and protects from metabolic disease through enhanced TCA and ETC function
Mitochondrial dysfunction is a key driver of diabetes and other metabolic diseases. Mitochondrial redox state is highly impactful to metabolic function but the mechanism driving this is unclear. We generated a transgenic mouse which overexpressed the redox enzyme Thioredoxin Reductase 2 (TrxR2), the rate limiting enzyme in the mitochondrial thioredoxin system. We found augmentation of TrxR2 to enhance metabolism in mice under a normal diet and to increase resistance to high-fat diet induced metabolic dysfunction by both increasing glucose tolerance and decreasing fat deposition. We show this to be caused by increased mitochondrial function which is driven at least in part by enhancements to the tricarboxylic acid cycle and electron transport chain function. Our findings demonstrate a role for TrxR2 and mitochondrial thioredoxin as metabolic regulators and show a critical role for redox enzymes in controlling functionality of key mitochondrial metabolic systems.
M etabolic syndrome is a prevalent public health issue affecting one in three adults in the United States 1 . Mitochondrial dysfunction is a prominent driver of metabolic deficits and has also been linked with Alzheimer's disease, Parkinson's disease, Huntington's disease, Amyotrophic Lateral Sclerosis, Friedreich's ataxia, cardiovascular disease, atherosclerosis, and diabetes 2 . Impaired mitochondrial function has been linked with metabolic deficits, elevated serum glucose levels, diminished glucose tolerance, and increased insulin resistance, all of which are driving factors for diabetes progression 3 . Declines in mitochondrial function and metabolic deficits are well-cataloged aspects of aging and are likely a key driver in agerelated increases in diabetes incidence rates 4 . The connection between mitochondrial function and diabetes is particularly apparent in studies of mitochondrial redox enzymes. Transgenic expression of mitochondrial-targeted catalase (mCAT) prevented age-associated declines in mitochondrial function and decreased insulin resistance in muscle 5 . Overexpression of the mitochondrial peroxidase PRX3 was shown to decrease fasting plasma glucose levels and protect against high-fat diet-induced glucose intolerance 6 . However, the mechanism through which increased mitochondrial redox scavenging regulates metabolic function remains unclear.
Thioredoxin is a redox protein whose primary function is the scavenging of oxidants and free radicals through reduction of protein disulfide bonds into thiolate anion 7 . There are two major forms of thioredoxin in the cell, thioredoxin-1 which is cytosolic and thioredoxin-2 which is mitochondrial 8 . Thioredoxin-2 represents the major redox scavenging system in mitochondria 9 with multiple functions in this organelle. It reduces oxidized proteins allowing the repair of oxidative damage in mitochondria 7 . It plays a key role in the peroxiredoxin (PRX) system. PRX is oxidized as part of its role in the reduction of hydrogen peroxide into water and is another substrate for thioredoxin-2 allowing restoration of its function 10 . Thioredoxin-2 also plays a key role in the regulation of cell death by repressing ASK-1-regulated cell death apoptotic signaling pathways 11 . Thioredoxin-2 acts as a redox sensor and becomes oxidized under oxidative stress conditions de-repressing ASK-1 and triggering the apoptotic pathway 11 .
The enzyme thioredoxin reductase 2 (TrxR2) is critical for the continued function of thioredoxin-2. TrxR2 reduces oxidized thioredoxin-2 in a reaction involving the conversion of NAD(P)H to NAD(P) + . This is a critical rate-limiting step in the function of Thioredoxin-2 12 . TrxR2 levels have been reported to decline in rat skeletal and cardiac muscle over the course of age, driving agerelated declines in mitochondrial redox capacity 13 . We have previously reported that augmentation of TrxR2 function can extend lifespan in Drosophila melanogaster and diminish agerelated declines in oxygen consumption 14 . All these previous findings suggested that TrxR2 may play an important role in the regulation of mitochondrial metabolic function and potentially protect against metabolic disease.
In this study, we created a TrxR2 overexpression mouse. We found TrxR2 overexpression to enhance metabolism and prevent high-fat diet-induced metabolic deficits in mice. We found TrxR2 overexpression induced metabolic improvement through alteration in the function of several enzymes in the Tricarboxylic acid (TCA) cycle and complexes of the electron transport chain. Our findings demonstrate a role for the thioredoxin system as a metabolic regulator impacting key systems of mitochondrial function.
Results
Thioredoxin reductase 2 overexpression mouse model. We designed a TrxR2 overexpression mouse through the insertion of a transgene containing a single copy of mouse TrxR2 cDNA preceded by a ubiquitous CAG promoter and two LoxP sequences sandwiching a premature stop site. Under these conditions, the TrxR2 gene is not translated and therefore not overexpressed. We then crossed this mouse with a ubiquitous homozygous Cre recombinant overexpressor (EIIa-CRE) which led to the deletion of the premature stop site generating a 'CAG-TrxR2' overexpression mouse (Fig. 1a). The mice produced with this transgene (TrxR2-Tg) were viable, fertile, and physiologically normal with no observed deficits. We validated overexpression through western blot and demonstrated a 75-100% increase in TrxR2 expression levels in the brain, muscle, and heart (Fig. 1b, c). We also observed similar increases in liver and lung tissues ( Supplementary Fig. 1a, b).
TrxR2 overexpression is mitochondrially localized, reduces mitochondrial redox state, and increases oxidative stress resistance. To assess cellular localization and functionality of TrxR2 we derived Mouse Embryonic Fibroblasts (MEFs) from TrxR2-Tg mice. TrxR2-Tg MEFs showed a 50% increase in TrxR2 protein levels. We did not detect any significant changes in the mitochondrial markers PGC-1α or VDAC (Fig. 1d) or in mtDNA copy number (Supplementary Fig. 1c). This suggested that both mitochondrial biogenesis and mitochondrial mass remained unchanged in MEFs or tissues derived from TrxR2 transgenic mice. We confirmed TrxR2 mitochondrial co-localization with immunofluorescence using Mitotracker. As confirmed by western blot, we saw a higher expression of TrxR2 in TrxR2-Tg MEFs (green, Fig. 1e) and demonstrated that the overexpressed TrxR2 protein was predominantly co-localized with mitochondria (Merge, Fig. 1e). In addition, we demonstrated that liver mitochondrial isolates derived from TrxR2-Tg mice exhibited lower oxidized PRX3 levels, suggesting higher functionality of thioredoxin-2 (Fig. 1f). We also found TrxR2-Tg MEFs were more resistant to H2O2 treatment than MEFs derived from littermate controls and displayed higher protection against the mitochondrial-specific oxidative stressor tert-butyl hydroperoxide (TBH) 15 (Fig. 1g, h). These findings indicate that the overexpressed TrxR2 is mitochondria-specific and increases TrxR2 functionality.
TrxR2-Tg mice show improved glucose tolerance. We hypothesized that overexpression of TrxR2 may influence metabolism. To assess this, we measured fasting blood glucose levels in TrxR2-Tg mice. Interestingly, we found TrxR2-Tg mice to have lower basal blood glucose levels than littermate controls (Fig. 2a). We hypothesized that this may be driven by the enhanced capacity to metabolize glucose. To test this, we monitored blood glucose clearance using glucose tolerance tests (1.5 g/kg glucose bolus injection). We were interested to find that both male and female TrxR2-Tg mice could metabolize glucose slightly faster than littermate controls showing improved glucose tolerance (Fig. 2b). In addition, both male and female TrxR2-Tg mice displayed slightly increased insulin sensitivity measured through insulin tolerance tests (0.8 U/Kg insulin injection) (Fig. 2c). These findings suggest TrxR2 overexpression to improve metabolic function in adult mice. These improvements made us hypothesize that TrxR2 may be protective against metabolic disease. To investigate this, we placed mice on a high-fat diet for 4 months (Fig. 2d). We found both male and female high-fat diet-fed TrxR2-TG mice displayed significantly improved glucose tolerance relative to high-fat diet-fed littermate controls (Fig. 2e). Surprisingly though no difference in insulin sensitivity was observed ( Supplementary Fig. 2).
We found mouse weight to be lower in male but not female TrxR2-Tg mice maintained on a normal diet when compared to littermate controls ( Supplementary Fig. 3a, b). However, we did not observe any significant changes in weight in mice maintained on a high-fat diet ( Supplementary Fig. 3c, d). Despite the lack of change in weight we observed decreased liver lipid content suggesting reduced liver steatosis in high-fat diet maintained TrxR2-Tg mice compared to littermate controls (Fig. 2f). Our findings suggest TrxR2 overexpression to increase basal glucose metabolism and improve glucose tolerance in both adult mice and mice fed with a high-fat diet. In addition, while we observed some improvements in insulin sensitivity under a normal diet, high-fat diet experiments suggested this response to be at least partly independent of insulin sensitivity.
TrxR2-Tg male mice exhibit improved whole-body metabolism. To gain a greater understanding of the changes produced by TrxR2 overexpression we examined its impact on whole-body composition and metabolism. In normal-fed male mice, we observed a significant decline in body weight as well as changes in body composition (Fig. 3a). We were interested to find a significant decline in fat mass in TrxR2-Tg mice; in contrast, we did not see any significant change in lean mass in these mice (Fig. 3a). These findings suggested that overexpression of TrxR2 increased metabolic processes and decreased fat mass. Importantly, we did not observe any changes in food consumption with TrxR2 overexpression, suggesting the changes observed in weight were not due to decreased food intake (Fig. 3b). We monitored other metabolic parameters using indirect calorimetry in individualized cages. Importantly, we observed that TrxR2-Tg mice show increased oxygen consumption, CO2 production, and energy expenditure (Heat) without changes in the Respiratory Exchange Ratio (RER) during the light cycle (Fig. 3c–f), while no changes were observed during the dark cycle (Supplementary Fig. 4a–d). These results suggest that basal metabolism is increased with TrxR2 overexpression; however, at night, when mice are more active, transgenic animals did not display a higher energy demand than littermate controls. Furthermore, we did not see any changes in RER, suggesting there was no metabolic switch occurring in the transgenic animals and no change in substrate preference between carbohydrates and fat. All these findings suggest that increasing TrxR2 expression increases basal metabolism in mice, leading to decreased fat mass.
TrxR2-Tg isolated liver mitochondria exhibit higher membrane potential and respiration. We sought to understand the mechanism through which TrxR2 overexpression was enhancing metabolic function. We found TrxR2-Tg MEFs to display increased mitochondrial membrane potential (MMP), measured with tetramethylrhodamine methyl ester (TMRM) in live cells. We found TrxR2-Tg MEFs to show increased MMP both under basal and acute stress conditions with hydrogen peroxide (Fig. 4a, b). Interestingly hydrogen peroxide did not decrease the MMP probably due to the acute nature of the exposure (30 min). We found TMRM staining to be specific since it faded after exposure to 10 µM FCCP as expected (Fig. 4a, b). We hypothesized that the TrxR2-induced increase in membrane potential might be driven by increases in mitochondrial respiration. To test this, we performed seahorse coupling respiratory assays in isolated mitochondria obtained from TrxR2-Tg and control mice livers. TrxR2-Tg seahorse traces appeared higher than control mouse traces throughout the assay (Fig. 4c). We noted that TrxR2-Tg mitochondria displayed a significant increase in basal oxygen consumption rate in the presence of pyruvate and malate, State 3-ADP stimulated respiration, ATP-coupled respiration, and maximal respiration when compared to littermate controls (Fig. 4c, d).
TrxR2 overexpression enhances the electron transport chain function. To understand the mechanism through which TrxR2 overexpression was enhancing respiratory function we first examined impacts on the electron transport chain (ETC) by electron flow seahorse assays. This approach evaluates ETC respiratory function uncoupled from ATP production by ATP Synthase. We used the same set of isolated mitochondria from TrxR2-Tg and control livers as those used in Fig. 4c, d. We observed significantly higher respiratory values from complexes II through IV (Fig. 4e, f) after the addition of Rotenone and Succinate and complex IV alone after the addition of Ascorbate and TMPD in TrxR2-Tg mitochondria compared to controls (Fig. 4g) (all in the presence of dinitrophenol and pyruvate and malate). We sought to understand whether these increases in ETC complexes respiratory values could be due to higher function of individual complexes activities. We found complex I activity to be elevated (p = 0.052), measured through kinetic activity assays (Fig. 5a). In addition, we observed significantly elevated complex V-ATP Synthase activity (Fig. 5d) with a trend for elevations in complex II and IV (Fig. 5b, c). Complex III was not evaluated. These results are in line with our seahorse electron flow assays results indicating that enhancing Thioredoxin-2 function targets multiple components of the respiratory chain.
TrxR2 overexpression enhances tricarboxylic acid cycle function. We hypothesized that TrxR2 overexpression may also impact other mitochondrial metabolic components that could lead to increased substrate availability feeding the ETC, possibly through the tricarboxylic acid (TCA) cycle (Fig. 5e). We investigated this through activity measures of multiple components of the TCA cycle. We found citrate synthase (the first enzyme in the TCA cycle) to display a trend for higher activity in TrxR2-Tg versus controls (Fig. 5f). To investigate whether there could be changes in TCA cycle function specific to NADH generation, we examined other components of the TCA cycle such as malate dehydrogenase (MDH2), which converts malate to oxaloacetate, and isocitrate dehydrogenase (IDH2), which converts D-isocitrate to α-ketoglutarate. These targets were chosen as both were previously described to be direct molecular targets of cytoplasmic Thioredoxin-1, which is highly homologous to Thioredoxin-2, the substrate of TrxR2 16 . We found the functionality of both MDH2 and IDH2 to be increased in isolated mitochondria from TrxR2-Tg livers (Fig. 5g). These changes in function did not appear to be due to increased protein levels: TrxR2-Tg mitochondria showed protein levels of IDH3α, MDH2, and ATP5α similar to those of controls. This strongly suggests that Thioredoxin-2 reduces oxidized thiol groups in multiple protein targets in mitochondria, impacting their functionality (Supplementary Fig. 5a, b). All these findings suggest that TrxR2 exerts a synergistic function to increase mitochondrial respiration, directly impacting several components of the electron transport chain and at the same time increasing substrate availability through the TCA cycle.
Discussion
Our findings show a key role for the mitochondrial thioredoxin system Thioredoxin-2/TrxR2 as a protector against metabolic dysfunction. Thioredoxin-2 acts upstream of the peroxiredoxin system; thus our results are consistent with previous findings of enhanced metabolism by PRX3 overexpression, such as improved glucose tolerance 6 . Metabolic function is closely linked to oxidative stress. Mitochondria are the major cellular source of free radical production and, as a consequence, are highly susceptible to oxidative damage 17 . A number of studies have reported that enhancement of oxidative stress resistance in mice can increase metabolism and protect against high-fat diet-induced metabolic dysfunction; for example, catalase artificially targeted to the mitochondria and PRX3 have both been reported to increase metabolism and protect against high-fat diet-induced insulin resistance and glucose intolerance, respectively. However, the mechanism by which these improvements were produced is unclear 5,6 . Furthermore, our findings suggest TrxR2 overexpression could alleviate age-related declines in metabolic function. In previous work, we found TrxR2 function to be elevated in species which are under selection to be longer lived, and that artificial overexpression of TrxR2 extended lifespan in fruit flies, whereas we did not see any improvement from overexpression of the cytosolic variant TrxR1. In addition, we found TrxR2 overexpression to decrease the age-related decline in muscle function and oxygen consumption in Drosophila, suggesting prevention of age-related metabolic declines 14 .
Here, we demonstrated that TrxR2 overexpression in mice enhances whole-body metabolism. We found this enhancement to improve glucose tolerance by increasing the rate of blood glucose clearance, increasing tolerance to a high-fat diet, and decreasing the degree of high-fat diet-induced liver steatosis. We also showed this to occur in tandem with an increase in wholebody O 2 consumption, CO 2 production, and energy expenditure during the daytime suggesting an increase in basal metabolic rate. The observation that TrxR2-Tg high-fat diet-fed mice did not show any weight improvements was unexpected, however, this could be easily explained by a reactive oxygen species (ROS) overload caused by the diet. It has been reported that a long period of exposure to a high-fat diet induces a large production of ROS 18 and the ability of overexpressed TrxR2 to clear them in tissues such as white fat might be affected in this condition.
Fig. 3 TrxR2 overexpression increases light-cycle whole-body metabolism in normal diet-fed male mice. a TrxR2-Tg male mice are leaner with lower weight and fat mass relative to littermate controls as measured by qMRI (see methods, n = 8–9). b Food consumption is unaltered. Food intake was recorded for 7 days (n = 8–9). c–f Indirect calorimetry was performed in normal-fed male mice. Parameters for each mouse were obtained and normalized by body weight. c Daytime O2 consumption (VO2). d CO2 production (VCO2). e RER (CO2/O2), and f energy expenditure (Heat). All experiments utilized 8-month-old normal-fed male mice, n = 8–9. Values are Mean ± SEM. *p < 0.05.

We were interested to find that the improvements in whole-body metabolism from TrxR2 overexpression were driven by changes in multiple mitochondrial metabolic processes. We observed all respiratory parameters to be enhanced in TrxR2-Tg isolated liver mitochondria. We found several components of the electron transport chain to show elevated activity, including Complexes I and V, by both respiratory and direct activity measures. Complex IV respiration showed significantly enhanced function through seahorse assays, with a positive trend in direct kinetic activity assays. We additionally found significant improvements in the output of several enzymes of the TCA cycle. This demonstrates a synergistic role for the TrxR2 protein in increasing mitochondrial function at different levels. Previous findings in plants have shown thioredoxin to be capable of modulating multiple components of the TCA cycle, including malate dehydrogenase (MDH2) and isocitrate dehydrogenase (IDH2) 19 . Indeed, we found the functionality of both of these components to be elevated with TrxR2 overexpression. In addition, previous proteomics reports have pointed to specific components of the electron transport chain and TCA cycle as targets of the cytoplasmic thioredoxin-1. Several subunits of Complex I and the ε chain of ATP Synthase contain cysteines with thiol groups that are reduced by thioredoxin-1 16 , which seem to impact protein functionality and also to be targets of thioredoxin-2.
All these results suggest metabolic improvements from augmentation of the thioredoxin system not to be through general declines in oxidative stress but to be through a targeted role of Thioredoxin-2 in reducing oxidized enzymatic components of both the electron transport chain and the TCA cycle. These results underscore the importance of the redox state as a key regulator of metabolic output and the sensitivity of its components to oxidative stress. These findings demonstrate the role of the mitochondrial thioredoxin system as a regulator of electron transport chain function, mitochondrial activity, and metabolism. Based on these findings and our rodent studies we can conclude that TrxR2 overexpression increases overall metabolism at least in part through increased electron transport chain and TCA function leading to increased glucose tolerance and metabolism.
Methods
Mouse model. We generated a CAG-Lp-STOP-Lp-TrxR2 construct which was microinjected by the University of Michigan transgenic core. Male mice containing the transgene in a CB6F1/C57Bl/6J background were crossed with a homozygous CRE EIIa female germline overexpressor mouse (Cat# 003724, Jackson Labs). The EIIa-Cre mice carry a Cre transgene under the control of a zygotically expressed (EIIa-Cre) promoter that activates the expression of Cre recombinase in early mouse embryos 20 . This resulted in a first generation with 50% transgene expression. We then crossed positive mice with wild-type C57BL/6J for two more generations in order to avoid mosaicism in the following generations. From the second generation, we used both transgenic and non-transgenic littermates for our experiments. We genotyped transgene-positive mice by PCR with a forward primer (gtcaagctgcacatctccaa) and a reverse primer (gcgatgcaatttcctcattt). We put groups of male and female mice under a normal diet or a high-fat diet (60% of fat, Research Diets cat#D12492) for 4 months. Animals were housed in ventilated cage racks with up to five animals per cage under 12 h light/dark cycles at 24°C.
Immunofluorescence. Cells were seeded on a coverslip then, after 24 h, incubated with 200 µM Mitotracker deep red (Thermofisher, M22426) for 15 min, fixed with 4% paraformaldehyde, washed with PBS, blocked with normal goat serum, and permeabilized with 0.3% Triton. Cells were incubated with the TrxR2 antibody (Lifespanbio, LS-C118624, 1/200), washed, and incubated with goat anti-rabbit Alexa Fluor 488 (Thermofisher A-11008, 1/200), washed again and mounted with vectashield.
Cell survival assays. Cells were seeded in 96-well plates using the manufacturer's instructions WST-1 (Sigma 5015944001). Cells were treated with 100 and 500 µM H 2 O 2 and tert-butyl hydroperoxide for 1 h, then cells were washed and incubated with WST-1 reagent for 30 min.
Mitochondrial isolation. Mitochondria were isolated following a modification of the method by Rasmussen et. al. 21 using a custom homogenizer. Freshly isolated mouse liver was immediately submerged in ice-cold B1 buffer (10 mM EGTA, 0.1 uM free calcium, 20 mM imidazole, 20 mM taurine, 50 mM K-MES, 6.56 mM MgCl 2 , 5.77 mM ATP, 15 mM phosphocreatine) containing protease inhibitor. The tissue was weighed and carefully cut into 1 mm cubes with a 9 mm razor blade while submerged in B1 buffer in a petri dish on ice. The sample was homogenized with a Craftsman drill press at 990 rpm using a customized Wheaton mortar and pestle, designed to standardize optimal and consistent yields and functionality in mitochondrial preparations 22 . The samples were maintained between 0-1°C in a custom-built clear water-jacketed chamber. Homogenization included 12 slow passes of the pestle at 30 s for each downstroke, 30 s stopping of rotation once to the bottom, and 30 s on the upward pass. Homogenates were centrifuged at 600 × g, 10 min, at 4°C. The supernatants were transferred to another ice-cold centrifuge tube and centrifuged at 10,000 × g, 10 min, at 4°C. The pellet was then rinsed with 50 µL ice-cold isolation buffer and the supernatant removed. The final pellet was gently resuspended in 100 µL isolation buffer and protein concentration (mg/mL) was determined by Lowry quantification.
Seahorse assays. The Seahorse XF96 sensor cartridge (Agilent Technologies, Seahorse Bioscience) was hydrated in sterile water in a non-CO2 37°C incubator overnight and then in a pre-warmed calibrant in a non-CO2 37°C incubator 1 h before the assay run. About 20 µg of mitochondria were resuspended in MiR03 buffer (20 mM sucrose, 10 mM KH2PO4, 3 mM MgCl2-6H2O, 20 mM HEPES, 0.5 mM EGTA, 0.1% (w/v) fatty acid-free BSA, 20 mM taurine) containing 10 mM pyruvate and 2 mM malate; this buffer was also used to dilute the effectors for both the coupling assay (CA) and the electron flow assay (EFA), with 150 µM 2,4-dinitrophenol (DNP) added to the EFA buffer. All effectors were made to 10× desired concentrations and loaded into the appropriate ports of the sensor cartridge at 20, 22, 25, and 27 µL in ports A, B, C, D respectively for the CA assay (port A: 40 mM ADP, port B: 25 µg/mL oligomycin, port C: 1500 µM DNP, port D: 40 µM antimycin A) and the EFA assay (port A: 20 µM rotenone, port B: 100 mM succinate, port C: 40 µM antimycin A, port D: 100 mM ascorbate and 1 mM TMPD). About 20 µg of mitochondria were seeded in 50 µL CA or EFA buffer in the designated wells of the XF96 tissue culture microplate and then centrifuged at 2000 × g for 20 min. About 130 µL of CA or EFA buffer was gently added to each well and the plate was loaded into the Seahorse XFe96 Analyzer (Agilent Technologies, Seahorse Bioscience). The Seahorse protocol was run as follows: calibrate (15 min), equilibrate (15 min), 3× basal reads (Mix: 3 min, Wait: 0 min, Measure: 3 min), then a port injection (A, B, C, D) each followed by 3× reads. Oxygen consumption rate (OCR) and extracellular acidification rate (ECAR) were obtained at baseline and in response to each effector injection.
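The respiratory parameters reported in the Results can be derived from phase-averaged OCR values, as sketched below. The phase layout follows the coupling-assay port injections described here (A: ADP, B: oligomycin, C: DNP, D: antimycin A); the numbers are placeholders rather than measured data, and the exact parameter definitions may differ slightly from those used by the authors.

```python
import numpy as np

ocr = {
    "basal": np.array([310.0, 305.0, 300.0]),       # pyruvate/malate only (State 2)
    "adp": np.array([820.0, 800.0, 790.0]),          # after ADP (State 3)
    "oligomycin": np.array([260.0, 250.0, 255.0]),   # after oligomycin (State 4o)
    "dnp": np.array([900.0, 880.0, 870.0]),          # after DNP (maximal, uncoupled)
    "antimycin": np.array([40.0, 42.0, 41.0]),       # after antimycin A (non-mitochondrial)
}

non_mito = ocr["antimycin"].mean()

basal_resp = ocr["basal"].mean() - non_mito
state3 = ocr["adp"].mean() - non_mito
atp_coupled = ocr["adp"].mean() - ocr["oligomycin"].mean()   # ATP-coupled respiration
maximal = ocr["dnp"].mean() - non_mito
rcr = state3 / (ocr["oligomycin"].mean() - non_mito)          # respiratory control ratio

print(f"basal={basal_resp:.0f}  state3={state3:.0f}  "
      f"ATP-coupled={atp_coupled:.0f}  maximal={maximal:.0f}  RCR={rcr:.1f}")
```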
Mitochondrial activity assays. All assays were performed using 30 µg of isolated mitochondria. Complex I activity was immediately measured on a DU800 spectrophotometer using 2,6-dichloroindophenol (DCIP) as the terminal electron acceptor at 600 nm, with the oxidation of NADH reducing the artificial substrate Coenzyme Q10, which then reduces DCIP. The reduction of DCIP is mostly dependent on complex I activity and has a very high rotenone-sensitive activity (1). Complex II activity was analyzed as the reduction of dichloroindophenol at 600 nm with succinate as the substrate, and complex II/III was measured as the reduction of cytochrome c at 550 nm, also with succinate as the substrate (2). Complex IV activity was measured by the oxidation of cytochrome c at 550 nm (3). Data were represented as the pseudo-first-order rate constant (k) divided by protein concentration. ATP synthase activity, measured in the direction of ATP hydrolysis (ATPase activity), was assayed by the continuous spectrophotometric monitoring of the oxidation of NADH (ε340 = 6180 M−1 cm−1) in an enzyme-linked ATP-regenerating assay using ATP, phosphoenolpyruvate, pyruvate kinase, and lactate dehydrogenase to determine the ATPase activity (NADH loss) in nmol/min/mg protein (4, 5). Citrate synthase was measured using the coupled reaction with oxaloacetate, acetyl-CoA, and 5,5′-dithiobis-(2-nitrobenzoic acid) (6).
f Citrate synthase activity kinetics were evaluated (n = 4). g The specific activities of the TCA cycle enzymes MDH2 and IDH2 were increased in TrxR2-Tg mitochondria relative to non-transgenic controls, evaluated by MDH2 and IDH2 activity assays (n = 3, assays run in triplicate). Values are Mean ± SEM. *p < 0.05, **p < 0.01.
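A minimal sketch of extracting a pseudo-first-order rate constant from such a spectrophotometric trace (e.g. oxidation of reduced cytochrome c at 550 nm for complex IV), normalised by protein as described above, is shown below. The trace is synthetic; a real analysis would read the instrument export instead.

```python
import numpy as np

t = np.arange(0.0, 120.0, 5.0)                 # seconds
A_inf = 0.10                                   # end-point absorbance (assumed)
A = A_inf + 0.50 * np.exp(-0.02 * t)           # synthetic first-order decay

# ln(A_t - A_inf) is linear in time for a first-order process; -slope = k
slope, _ = np.polyfit(t, np.log(A - A_inf), 1)
k = -slope                                     # s^-1

protein_mg_per_ml = 0.03                       # 30 µg in an assumed 1 mL reaction volume
activity = k / protein_mg_per_ml               # s^-1 per mg protein
print(f"k = {k:.3f} s^-1, activity = {activity:.1f} s^-1 mg^-1")
```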
Mitochondrial membrane potential. Mitochondrial membrane potential was monitored with a Zeiss LSM740 confocal microscope, cells were seeded in glassbottom dishes and incubated in 10 nM TMRM (non-quenching mode) for 15 min in Seahorse XF DMEM medium (Agilent part 103575) containing 10 mM glucose, 1 mM sodium pyruvate, and 2 mM L-glutamine without phenol red. For H 2 O 2 treatment cells were incubated with 100 μM H 2 O 2 30 min prior to the TMRM incubation. Cells were imaged immediately after TMRM incubation.
GTT and ITT assays. Glucose and Insulin Tolerance Tests were performed using an Aimstrip plus glucometer. A bolus of 1.5 g/kg bolus of glucose or a 0.8 U/kg bolus of insulin were injected intraperitoneally into mice that were starved overnight. Then glucose was monitored for a total period of 2 h. Procedures were approved under IACUC 20170040AR.
qMRI and indirect calorimetry and food intake. The whole-body composition was obtained with EchoMRI-100H and 130 (EchoMRI). Measures of total body fat, lean mass, and free water as grams of body weight were obtained in individual awake animals (anesthesia was not required). We ensured that mice were immobile and located at the bottom of the measurement tube. For calorimetry Mice were individualized in cages and left to acclimate in the room for two days. We used the MARS indirect calorimetry "pull mode" system (Sable Systems) to determine whole-body metabolic parameters such as O 2 consumption, CO 2 production, and heat production for a period of 48 h (three light and two dark phases). Data from the third hour is examined to detect potential equipment-related problems so that animals can be quickly retested.
Food Intake was recorded by weighing the food at the beginning of the week and then weighing the daily changes. Then the final sum of the daily changes was added for 7 days in order to plot the differences.
Statistics and reproducibility. All statistical tests applied were Student's t-tests unless otherwise specified. The number of samples or mice utilized is specified for each figure panel. Normally at least three biological samples were used, with three technical replicates each. For animal physiological measures, seven to nine animals were used. Significance was considered for p values ≤ 0.05.
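A minimal sketch of the two-sample Student's t-test used throughout, with the p ≤ 0.05 threshold stated above, is shown below; the values are placeholders.

```python
from scipy import stats

control = [92.0, 101.0, 97.0, 104.0, 99.0, 95.0, 103.0, 98.0]
trxr2_tg = [84.0, 88.0, 91.0, 86.0, 90.0, 83.0, 89.0, 87.0]

t_stat, p_value = stats.ttest_ind(trxr2_tg, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value <= 0.05}")
```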
"Biology"
] |
LIGAND-BASED PHARMACOPHORE MODEL AND QSAR STUDIES ON HERBICIDES TARGETING PHOTOSYSTEM II FROM CHLAMYDOMONAS REINHARDTII
The resistance of weeds is a problem which can be overcome by finding new herbicides. For this purpose, beyond the experimental methods, in silico approaches can be helpful, as a starting point. In this regard, pharmacophore mapping and 3D-QSAR studies were carried out on several series of herbicide, already known to act on the Photosystem II (PS II) D1 protein. Using PHASE software, three pharmacophore features, H-bond acceptor (A), hydrophobic (H) and aromatic ring (R) were taken into account to be the best hypothesis. For this hypothesis an atom-based 3D-QSAR model was generated with statistically significant parameters (the correlation coefficient of regression (R2) of 0.839, the standard error of estimates (SD) of 0.370, the Fisher test (F) of 53.7 for the training set, the external explained variance Q2 = 0.640, the Pearson-R = 0.916 and Root Mean Square Error (RMSE) = 0.572, for the test set). This hypothesis, validated by the 3D atom-based QSAR approach, assures the selection of novel scaffolds of herbicide derivatives and can be used for the design of new chemical entities active on the PS II D1 protein.
Table 1. The structure of the most active compounds (1 to 8), the unaligned ligands (9 and 10) and the less active compounds (11 and 12) and their herbicidal activity in logarithmic units
Pharmacophore modeling and validation
The "Develop Pharmacophore Model" module of Phase software [12][13][14] implemented in the Schrödinger suite was used in order to generate all possible pharmacophore hypothesis using four PLS factors. The number of PLS factors was increased, but the model statistics or predictive ability did not improve.
The pharmacophore validation was carried out by atom-based 3D-QSAR regression, including both internal and external validation. The training set included 80% of the molecules, selected randomly, whereas the remaining 20% were used to validate the model (test set). The external predictive ability was assessed by the Pearson-R for the test-set prediction, and only models with values greater than 0.6 were selected.
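The validation statistics reported for the model (R2 on the training set; Q2, Pearson-R and RMSE on the test set) can be computed as sketched below. The input files and the Q2 convention used here are assumptions for illustration; PHASE reports these quantities directly.

```python
import numpy as np
from scipy.stats import pearsonr

y_train, y_train_pred = np.loadtxt("train.dat", unpack=True)   # hypothetical files with
y_test, y_test_pred = np.loadtxt("test.dat", unpack=True)      # observed vs predicted activity

def r_squared(y, y_hat, y_ref_mean):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y_ref_mean) ** 2)

r2_train = r_squared(y_train, y_train_pred, y_train.mean())
q2_test = r_squared(y_test, y_test_pred, y_train.mean())   # one common Q2 convention
rmse = np.sqrt(np.mean((y_test - y_test_pred) ** 2))
pearson_r, _ = pearsonr(y_test, y_test_pred)

print(f"R2(train)={r2_train:.3f}  Q2(test)={q2_test:.3f}  "
      f"Pearson-R={pearson_r:.3f}  RMSE={rmse:.3f}")
```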
Taking into account this statistical parameter, as well as high values of Q2 (the correlation coefficient of prediction for the test set) and R2 (the correlation coefficient for the training set), we selected the best QSAR model. Ten pharmacophore hypotheses were generated (Table 2) for the compounds listed in Table 1. The pharmacophore-based 3D-QSAR study of PSII D1 inhibitors was carried out in order to explain the structural features of some herbicide derivatives (pyrimidine, pyridine, cinnoline, triazine and quinine) required for their inhibitory activity.
A graphical representation of the significant favourable and unfavourable features for the herbicidal activity of the compounds, obtained when the QSAR model is applied, is shown in Figures 3 to 6.
The selected 3D-QSAR model shows a significant correlation and a good predictive capacity. One hydrogen-bond acceptor (A), one lipophilic/hydrophobic group (H) and one aromatic ring (R), as pharmacophore features, are important for the PSII D1 herbicidal activity. The best hypothesis in this study, AHR.7, is characterized by the best value of the R2 regression coefficient (0.839) and the highest value of the Pearson-R coefficient (0.916).
In future studies this pharmacophore model will be used for screening molecular databases in order to find potential new herbicides.
Acknowledgements
This project was financially supported by Project 1.1 of the Institute of Chemistry of the Romanian Academy. The authors thank Dr. Ramona Curpăn (Institute of Chemistry Timisoara of the Romanian Academy) for providing access to the Schrödinger software acquired through the PN-II-RU-TE-2014-4-422 project funded by CNCS-UEFISCDI Romania.
| 1,082.2 | 2016-11-01T00:00:00.000 | [ "Chemistry", "Biology" ] |
Post-Tsunami Lifeline Restoration and Reconstruction
The 2004 Indian Ocean earthquake and tsunami caused severe damage to houses and infrastructure and resulted in massive human casualties in several countries. Although there have been several reports on the resulting damage to lifelines, processes for the restoration and reconstruction of the lifelines have not been reviewed well; these processes are important in view of the effects on people’s life, the community, and industrial conditions. A lifeline refers to a vital infrastructure in our lives. As a city becomes modernized and its population increases, a lifeline service covers a larger area with a complicated network system. After the 2004 tsunami, it took as long as a few weeks, and sometimes several months, before the process of restoration and reconstruction was begun. The victims of the tsunami faced many problems that differed from those that occur after an earthquake. This chapter attempts to elucidate the post-tsunami lifeline restoration and reconstruction process from several points of view by using case studies from Indonesia, Thailand, and Sri Lanka. Tsunami restoration evaluation modeling and its application are discussed. Moreover, methods for town reconstruction planning for lifeline reconstruction are discussed. Among the lifelines, adequate water supply is important to residential life. Most coastal residential areas, which are at the highest risk from a tsunami, use domestic water from shallow wells. After the 2004 tsunami, worldwide support facilitated the water-supply system to be reconstructed as part of the disaster reconstruction projects, and the residents in the affected areas changed their water-supply system from shallow wells to a pipeline network. The end of this chapter contains an analysis of the lifeline reconstruction and its long-term effects, with the focus on residential awareness of water use before and after the tsunami.
Introduction
How are lifeline systems damaged by a tsunami wave?
In a lifeline system, electrical poles and facility buildings can be damaged by a tsunami wave in the same way that houses can be damaged. In the 2004 tsunami, pipelines were also damaged despite the fact that they were installed underground. The mechanism of lifeline damage under the tsunami wave is explained through spatial analysis of the underground pipeline damage and inundation distribution in a case study of southern Thailand.
The residential pipe on the customer side of the meter was generally a fragile vinyl pipe (VP), whereas the service pipe on the PWWA side of the meter was a flexible polyethylene pipe (PE). Although the vulnerable water meter sustained severe damage, the repaired meter and piping maintained the previous standards. In addition to the physical damage to the facilities, it is believed that the water-supply facilities were broken by scrapers during the recovery work after the tsunami since the location of the pipeline beneath the ruins could not be identified. Fortunately, the water purification plant was located in the mountains as far as 18 km from the coast, and it was undamaged.
Photo 1. Exposed pipeline after scouring of embankment soil (Phuket, Thailand) Photo 2. Water meter above ground. Blackcolored polyethylene pipe is the service pipe of PWWA, and the blue-colored vinyl pipe is the residential pipe (Khao Lak, Thailand)
Electric power supply network damage
The electric power supply for the 40,000 customers of Phang Nga province is managed by the Provincial Electricity Authority (PEA) of Phang Nga. Out of the eight administrative districts, Khura Buri, Ta Kua Pa, and Thai Mueang sustained severe damage to their electric power facilities. The main facilities, including the aerial electric power line, service transformer, and electric power meter, were damaged. For instance, damaged lengths reached 36 km for the high-voltage line and 28 km for the low-voltage line. As shown in Fig. 2, the high-voltage line ran along the main roadway, the Petch Kasem road, parallel to the coastline. The interruption in the inland low-voltage line was caused by the tsunami striking the high-voltage trunk lines along the coast. The damage rate (damaged aerial electric line length per total line length) was 80 to 100% in the inundation zone, where the inundation height was estimated to be 6 to 7 m. The electric power service interruption was caused by the collapse of electric poles because of the tsunami (Thailand witnessed a weak earthquake, so most of the damage was caused by the tsunami). Some poles had cracks at the bottom and others had cracks at the center. The locations of the cracks varied because of the random nature of the colliding driftwood (see Photo 3). The other cause of damage was the scouring of the pole foundations (see Photo 4), which caused the poles to tilt or fall, resulting in the snapping of the power lines. The electric power supply through an underwater cable (400 m length) from Nam Kem village to Kho Khao Island was interrupted because the electric tower on the coast of Nam Kem village was flooded, although the underwater cable itself was not damaged. Similarly, the underground electric power cable along Karon Beach in Phuket was not damaged. Underground cables seem to fare better in a tsunami. Even so, the underground cable was used only at Karon Beach in Phuket, but not in Phang Nga province, because of its high cost. Underground cables contribute to the preservation of the landscape in tourist areas, as well as to disaster mitigation. Comprehensive regional planning of lifeline infrastructures focusing on land use is expected.
Business continuity management after a tsunami
Destructive natural disasters such as earthquakes, tsunamis, floods, and typhoons frequently occur all over the world and cause a great deal of property damage and loss of life. Many businesses are also damaged in these disasters. The recovery of business in the affected areas is a big issue for the local societies. Interest in "business continuity management (BCM)" after such disasters has recently been increasing among state governments, local governments, and business organizations. For example, the Cabinet Office of Japan (2005) created a guideline on business continuity. BCM involves the preparation of plans, the allocation of resources, and the implementation of processes such that an organization can recover quickly and safely from an interruption (crisis, emergency, event, etc.), with minimum negative impact to people, premises, assets, and operations. Understanding the process of lifeline restoration and lifeline-related industrial business recovery helps in planning and preparing for future disasters. A method is proposed to evaluate the functionality of a business after a tsunami, with a focus on lifeline function. This method has several modules, including damage estimation of business base (building, equipment, and lifeline) caused by the tsunami, a rate-to-time model to restore the business bases, and the functionality of the business introduced by facility restoration and its influence on the business (Kuwata et al., 2006). ATC-13 (Applied Technology Council, 1985) provided a methodology to evaluate the functionality of a facility, including lifeline effects after an earthquake in California. Referring to ATC-13, the present study proposes a new model of damage estimation and recovery curve for a tsunami. As a case study, the impact of a tsunami on industries and the subsequent restoration process were studied based on an interview survey done in southern Sri Lanka after the 2004 Indian Ocean tsunami, and the survey results were applied to the proposed model.
Post-tsunami business recovery in Galle, Sri Lanka
A survey on tsunami damage to business bases and the restoration process was carried out in Galle, southern Sri Lanka, in late September of 2005, 9 months after the 2004 tsunami. Interview respondents included company owners and other relevant people who have businesses from several industries around the coastal areas. The main industries in Galle are fishing and tourism, and each company is relatively small, with only a few employees (see Photos 5 and 6). The total number of responses in this survey is only 52, because the survey period was limited and the interviews were done face-to-face. The questions were on tsunami inundation levels, physical damage to buildings and equipment and recovery time in days, tsunami damage to lifeline services and recovery time in days, and business restoration processes with respect to the time since the tsunami. Fig. 3 shows the business recovery process of the local industries, which is the average of responses in each industry, shown weekly for the first three months and every couple of weeks after that. Here the rate of business recovery is defined as the sales of products compared to pre-tsunami sales, based on the owners' interviews. As is readily seen, lifeline and financial (banking) businesses recovered remarkably quickly. The reason for the rapid recoveries is that office buildings did not have extensive damage and their staff could respond to the damage promptly. They could also receive emergency relief and repair workers from the unaffected offices thanks to mutual cooperation. The reason the banks recovered rapidly is that their main offices are in Colombo, which served as a backup system for customer information. Communications and relationships in non-crisis times are thus important and effective in cases of emergency. The recovery of agriculture was relatively quick because the farmers were far from the sea and did not have major damage to their business base. However, it took a long time for the agricultural sectors to be completely restored because it was necessary to purify the soil, which contained salts deposited by the tsunami. Nine months after the tsunami, tourism, manufacturing, and wholesale and retail trade businesses had slowly recovered to a level of about 60%. In Galle, a large number of collapsed houses remained untouched at the time of the survey, and the local society was still in the process of recovery. Several hotels had resumed business, but most visitors were domestic tourists, restoration relief teams, or research groups. Foreign vacationing tourists had not yet returned. Some fishermen lost their boats in the tsunami and could not buy new boats. The other fishermen said that they could not sell fish during the few months after the tsunami because the local victims of the tsunami did not want to eat them. The effects of tsunami damage vary depending on the industry, and they go beyond the physical damage to the facilities.
Photo 5. Rope production factory (Galle, Sri Lanka) Photo 6. Ice production factory (Galle, Sri Lanka)
Post-tsunami business base restoration modeling
An evaluation method for tsunami damage and the restoration curve of the business base is proposed herein. The business base considered in this study comprises buildings, equipment, and lifeline services. Fig. 4 shows the schematic evaluation model, including damage estimation of the business base caused by the tsunami, a rate-to-time model for restoration of the business bases, and the business recovery rate resulting from facility restoration and its influence on the business. The restoration rate is expressed probabilistically by the damage state of a facility and its conditional restoration rate. Business restoration is not determined solely by the facilities and lifeline services, because their effects vary depending on the type of business; this model considers the business base only from a physical point of view, and these effects are accounted for through the importance factors. First, the tsunami intensity level is defined. When the hazard is an earthquake, a ground-motion measure such as peak ground acceleration or seismic intensity would be an index of intensity level. Shuto (1992) defined tsunami intensity by the square value of the tsunami inundation height and categorized damage based on previous tsunami damage records in terms of this index. This study does not use the square value of the inundation height as the tsunami intensity, because of the limited inundation heights in the case study area. The tsunami intensity levels are determined as five discrete levels of inundation height; levels 1 to 5 correspond to no inundation, less than 1, 2, and 3 m, and over 3 m, respectively. The restoration of a facility such as a building and its equipment is defined such that the damaged facility is restored to the same state it was in before the tsunami. Since the damage state affects the expense and restoration time, the damage state (DS) is classified into five categories ranging from A (no damage) to E (severe damage). When the probability of the damage state of a facility under a given tsunami intensity level and the conditional restoration function under the damage state of a facility are given, the restoration rate of the facility under a given tsunami intensity level is obtained by

R_Fn(t | x) = Σ_DS R_Fn(t | DS) · P_Fn(DS | x),    (1)

where R_Fn(t | x) denotes the restoration rate of the facility Fn at time t (in days) for a given tsunami intensity level x, R_Fn(t | DS) denotes the conditional restoration rate of the facility Fn at time t for a given damage state DS, and P_Fn(DS | x) denotes the probability of the damage state DS of the facility Fn for a given tsunami intensity x. The DS of a facility is classified into five discrete categories. Each of these categories corresponds to a damage rate of the facility, y, which is treated as a random variable with a corresponding probability distribution, f(y | x), at every tsunami intensity level, x, as shown in Fig. 5. Each DS has a representative value of the damage rate as listed in Table 1, and the probability of the damage state is obtained by integrating the distribution f(y | x) from the lowest damage rate y1 to the highest damage rate y2 of that state:

P_Fn(DS | x) = ∫ from y1 to y2 of f(y | x) dy.    (2)
The distribution of the damage rate is modeled as a beta distribution on the interval between 0 and 1; its parameters, q and r, are estimated by statistical inference.
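For illustration, the damage-state probabilities implied by a fitted beta distribution can be computed as in the following Python sketch; the method-of-moments formulas are standard, while the damage-rate class bounds and the example moments are hypothetical placeholders rather than values from the chapter.

```python
# Illustrative sketch: fitting the beta damage-rate model f(y|x) by the method of
# moments and computing damage-state probabilities by integration over class bounds.
from scipy import stats

def beta_params_from_moments(mean, var):
    """Method-of-moments estimates of the beta parameters q and r (mean, var on a 0..1 scale)."""
    common = mean * (1.0 - mean) / var - 1.0
    q = mean * common
    r = (1.0 - mean) * common
    return q, r

def damage_state_probabilities(mean, var, bounds):
    """P(DS|x): integral of the fitted beta density between the lower and upper damage rate of each state."""
    q, r = beta_params_from_moments(mean, var)
    dist = stats.beta(q, r)
    return [dist.cdf(y2) - dist.cdf(y1) for (y1, y2) in bounds]

# Hypothetical damage-rate intervals for damage states A (no damage) to E (severe damage).
bounds = [(0.0, 0.1), (0.1, 0.3), (0.3, 0.6), (0.6, 0.9), (0.9, 1.0)]
# Example: observed mean 0.35 and variance 0.08 of the damage rate at one intensity level.
print(damage_state_probabilities(0.35, 0.08, bounds))
```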
Regarding the restoration rate model, the concept is based on the method of Nojima and Sugito (2005). The conditional restoration rate of a facility for a given damage state, R_Fn(t | DS), can be expressed in terms of the central (representative) damage rate of that damage state, k_DS, and the cumulative distribution function of the conditional restoration time, obtained by integrating the density r_Fn(t | DS), as shown in Eq. (4). The probability density of the restoration time, r_Fn(t | DS), is assumed to follow a gamma distribution, as shown in Eq. (5); its parameters, v and k, are also estimated by statistical inference.
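The following Python sketch shows how the pieces could be combined numerically. It assumes one plausible reading of Eq. (4), namely that the facility functionality starts at 1 - k_DS and approaches 1 as the gamma cumulative distribution of the restoration time approaches 1; the chapter's exact Eq. (4) may differ, and all numerical inputs below are illustrative only.

```python
# Hedged sketch of the facility restoration model:
#   R(t|DS) = 1 - k_DS * (1 - F(t|DS))   (assumed form of Eq. (4))
#   R(t|x)  = sum over DS of R(t|DS) * P(DS|x)   (Eq. (1))
from scipy import stats

# Representative (central) damage rates k_DS for states A..E -- placeholder values.
k_DS = {"A": 0.0, "B": 0.1, "C": 0.3, "D": 0.6, "E": 1.0}

def gamma_params_from_moments(mean_days, var_days):
    """Method-of-moments estimates: gamma shape and scale of the restoration-time model."""
    return mean_days ** 2 / var_days, var_days / mean_days

def restoration_rate(t, p_ds, restoration_moments):
    """Overall restoration rate at day t for one tsunami intensity level."""
    total = 0.0
    for ds, p in p_ds.items():
        mean_days, var_days = restoration_moments[ds]
        shape, scale = gamma_params_from_moments(mean_days, var_days)
        f_t = stats.gamma(shape, scale=scale).cdf(t)
        total += p * (1.0 - k_DS[ds] * (1.0 - f_t))
    return total

# Hypothetical inputs: P(DS|x) for one intensity level and (mean, variance) of restoration days.
p_ds = {"A": 0.2, "B": 0.3, "C": 0.3, "D": 0.15, "E": 0.05}
moments = {"A": (1, 1), "B": (10, 25), "C": (30, 200), "D": (90, 2000), "E": (250, 8000)}
print([round(restoration_rate(t, p_ds, moments), 3) for t in (7, 30, 90, 360)])
```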
Three kinds of lifelines, all related to business activity, are considered: water supply, electric power, and telecommunication. In contrast to a business facility, a lifeline system covers a wide area, and therefore the damage to a lifeline is also widespread. If the service to end users of a lifeline keeps functioning even after a natural disaster, the lifeline is not an obstacle to business recovery. Hence, lifeline damage should be taken into account not in terms of physical damage but rather in terms of functional damage. This study thus considers the functionality of lifelines on the user side.
The concept of the evaluation model for lifeline restoration is similar to that for the business facilities. Here, the damage state of a lifeline system is categorized into two states according to whether or not the lifeline is functioning. When the lifeline system is functional, the representative value k_DS of the DS becomes 0, and the restoration rate R_Ln(t | DS = functional) becomes 1. In contrast, when the lifeline is not functional, k_DS becomes 1, and R_Ln(t | DS = not functional) is given by the cumulative distribution function of the conditional restoration rate.
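A minimal sketch of this two-state lifeline model, with illustrative parameter values, could look as follows.

```python
# Minimal sketch of the two-state lifeline model: with probability P(functional|x) the
# service is unaffected (restoration rate 1); otherwise (k_DS = 1) the rate equals the
# gamma CDF of the restoration time. Parameter values below are illustrative only.
from scipy import stats

def lifeline_restoration_rate(t, p_functional, mean_days, var_days):
    shape = mean_days ** 2 / var_days
    scale = var_days / mean_days
    f_t = stats.gamma(shape, scale=scale).cdf(t)
    return p_functional * 1.0 + (1.0 - p_functional) * f_t

# Example: 40% chance the service kept functioning; 39 mean days to restore otherwise.
print([round(lifeline_restoration_rate(t, 0.4, 39, 900), 3) for t in (7, 30, 90)])
```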
Application of post-tsunami restoration model

Facility restoration
The proposed restoration model was applied to estimate the restoration curves according to the tsunami intensity, based on the data collected in southern Sri Lanka. First, the damage state parameters and restoration rate parameters were calculated based on the equations presented above. The restoration process and the resistance to tsunami forces vary depending on the type of building, the condition of the facilities, and so on. In the present study, we do not have enough responses to analyze these factors separately. All the buildings of the different industries are assumed to have the same resistance and are analyzed without distinguishing between industrial classifications. Only the answers from the fishing industry, which depends on fishing boats and is quite different from the others, were removed from this analysis. Tables 2 and 3 list the mean and variance of the observed building and equipment damage rate, y, and the beta distribution parameters for the different levels of tsunami intensity, obtained by the method of moments. If the tsunami level is 0 (no inundation), it can be concluded that there was no physical damage to the buildings and facilities. In addition, as the number of respondents at tsunami intensity levels 2 and 3 is limited, their number is modified by adding weighted respondents from the previous and the next intensity level. As shown by the results in Tables 2 and 3, the mean damage rate increases with the tsunami intensity level, and the variance at levels 2 and 3 shows high values. Thus, at these levels the probability distribution of the damage rate has peaks near both ends of the interval. Table 3. Parameters of facility damage rate, y (%), due to the tsunami intensity level, x. Tables 4 and 5 list the probabilities of the building and equipment damage states using the beta distribution parameters. For the damage states A and B, the probability increases as the tsunami intensity level becomes large. Comparing the probabilities between building and equipment, high probability appears at the severe damage state for the equipment rather than the building. Then the parameters of the probability density distribution of the restoration rate, r_Fn(t | DS), are estimated for every damage state of the buildings and equipment. Even 9 months after the tsunami (at the end of September 2005), most fisheries were still in the process of restoration or had not been restored yet; thus, respondents from the fishing industry were removed from the analyses. A total of 5 out of 29 respondents had not recovered their building, their equipment, or both at the time of the survey. The government announced that the area within a 100 m buffer zone from the shore would not be supported, and this regulation would also hamper a quick restoration. Here, the number of days until restoration completion for the buildings and equipment not yet restored is taken to be 360 days.
In Tables 6 and 7, the parameters of the probability density distribution of the restoration rate in each damage state are presented for buildings and equipment, respectively, using the gamma distribution fitted by the method of moments. The damage state "D" in Table 7 is assigned from its relation to the recovery days of the other damage states because of a lack of records. As the damage state becomes more severe, the mean number of restoration days gets longer.
Lifeline restoration
Here, we estimate the restoration process of lifeline systems such as electricity, water supply, and telecommunication. In this regard, the actual probability of the lifeline damage state and the restoration days indicated by the respondents are employed, because there is not enough information on the lifeline network system and the inundation area to be analyzed. Furthermore, the number of restoration days for the users is much higher than that of the main network reported by lifeline companies. Fig. 7 shows the supply interruption rate of the lifeline companies due to the tsunami intensity level. It shows a 20 to 50% interruption of lifelines even in areas that had not been hit by the tsunami wave. In addition, the electricity and water-supply services stopped completely when the inundation level was more than 2 m. Thus, it can be concluded that above-ground lifeline facilities such as electric power and water meters are easily destroyed by tsunami waves. The interruption in telecommunication was less than that for the other lifelines because mobile phones kept functioning during and after the tsunami. Similar to the facilities, the probability density distribution of the restoration rate was estimated using the gamma distribution by the method of moments, as shown in Table 8. Some respondents' lifeline services had not been recovered at the time of the survey; because the buildings themselves were still under construction, it was not possible to install the indoor facilities. In this study, we have removed those answers from the analyses. The mean number of days for water-supply restoration was 58 days, and that for electric power supply and telecommunication was 39 days.
Business base restoration under inundation height
The restoration curves of business facilities and lifelines under the same tsunami intensity level are compared in Fig. 9. As can be seen in this figure, business facilities such as buildings and equipment are restored sooner than the lifelines if the tsunami intensity level is either 1 or 2. However, business facilities are restored more slowly than lifelines if the tsunami intensity level is 3 or higher. When the tsunami inundation height is higher than 1 m, business recovery depends more strongly on the business facilities than on the lifelines. It should be noted that lifelines are damaged even in places where the tsunami does not reach, as shown in Fig. 9(a). This is because a lifeline is a system that works over a wide area. Moreover, since lifelines are managed as public services, their restoration is relatively rapid. The restoration of buildings and equipment, which are mostly private property, is very slow relative to the lifelines, especially in the first three months after a disaster. Fig. 10 shows the temporal change of the business base restoration rate for three selected types of industry (fishing, manufacturing, and tourism). The observed temporal changes of the entire business recovery for each industry are also shown (the same as in Fig. 3). The entire business recovery is the same as or smaller than the restoration of the business base. For the fishing industry, sales depend strongly on equipment (fishing boat) restoration. While the lifelines recovered soon in the manufacturing industry, its business restoration is connected to facility restoration. For tourism (hotels), the restorations of the entire business and of the business base are almost the same. This indicates that in the first few months, lifeline services were restarted at slightly damaged hotels, and extensively damaged hotels were restored together with the lifeline repair a few months later. In other words, the hotel industry cannot run without lifeline services. The results of the comparison of restoration processes show that the influence of the business base on the entire business functionality differs among the industries. The failure of lifeline systems affects the residual function of societies in a variety of ways. ATC-13 (Applied Technology Council, 1985) provided a methodology for evaluating the impact of lifeline failures on the loss of function of particular facilities, and it also established an index called the importance factor. Importance factors were developed based on the judgment of experts, and they were prescribed for California conditions only. Thus, the importance factors of lifeline systems are examined based on the results of the survey in Sri Lanka, with reference to the methodology of ATC-13. The importance factors examined in this study cover only three lifelines: water supply, electric power, and telecommunications. Each industry is thus given three importance factors. A multiple regression model with three explanatory variables representing the functionality of the lifelines is used. Observations of each variable are taken from the data as the mean values every few weeks for each industry, as in Fig. 3. The results show that the estimated importance factors are mostly close to 1.0, which indicates that lifelines have a severe effect on business activity (Kuwata et al., 2006). In particular, all the factors for financial institutions, hotels, and lifeline businesses are 1.0. These factors are much larger than those in ATC-13.
When a business facility and several lifelines lose functionality at the same time, it is considered that the interrelation of the business bases increases and the importance factor becomes large.
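One plausible way to reproduce such a regression-based estimate is sketched below; the no-intercept linear model and the data values are assumptions made for illustration, not the actual formulation or survey data of Kuwata et al. (2006).

```python
# Hedged sketch: regress observed business functionality on the functionality of the three
# lifelines (no intercept) and read the coefficients as importance-factor-like weights.
# All numbers below are made up for illustration.
import numpy as np

# Columns: functionality of water supply, electric power, telecommunication (0..1).
lifelines = np.array([
    [0.1, 0.2, 0.4],
    [0.3, 0.5, 0.6],
    [0.6, 0.8, 0.8],
    [0.9, 1.0, 1.0],
])
business = np.array([0.15, 0.45, 0.75, 0.95])  # observed business recovery rate

coeffs, *_ = np.linalg.lstsq(lifelines, business, rcond=None)
print(dict(zip(["water", "power", "telecom"], np.round(coeffs, 2))))
```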
Remarks on business restoration model
The evaluation model of restoration curves for the business base was applied using the results of the interview survey of businesses in southern Sri Lanka. When buildings and equipment are extensively damaged or flooded completely, their restoration starts slowly in the first few months. Business restoration depends more strongly on business facility restoration than on lifeline restoration if the tsunami inundation is higher than 1 m. The lifeline interruption caused by the tsunami affected business continuity more than reported in previous studies.
The concept of the restoration model is applicable to businesses that are flooded by a tsunami. The damage rates and restoration curves estimated in this study are based on a limited number of responses, so the parameters shown herein may have to be re-examined using additional responses. Although business recovery also seems to be related to several social factors, such as regional disaster recovery policy, regulation, culture, and the psychological issues of customers, this model deals only with the physical aspect of facilities. These social factors should be clarified in future work.
Community-based lifeline reconstruction planning
Disaster reconstruction planning is generally necessary to make the affected area stronger than it was before the disaster. It provides the opportunity to review the vulnerability of the town to earthquakes and tsunamis, and to create a vision for development shared between the government and the community. If the planning vision or procedure fails, the community might not survive; it is therefore important that the affected community can heal and persevere after the disaster. The disaster reconstruction planning discussed herein targets the area affected by the tsunami. Houses collapsed and were swept away by the wave, leaving an area of land with a cleared surface. Drastic town planning is easier to implement in such an area than in an earthquake-affected area, where the damaged houses are unevenly distributed. Lifeline reconstruction planning follows the land readjustment of town lots. Through reviews of disaster reconstruction planning at two tsunami-affected areas - Nam Kem village, Thailand and Aonae district, Japan - the implementation procedure between the local government and the community is discussed. As part of disaster restoration including land readjustment, the lifeline network can be completely renovated and strengthened as a network system, whereas the general procedure of lifeline restoration after an earthquake is to replace only the broken or leaking pipes with new pipes and to keep the former layout of the pipeline network. Therefore, post-tsunami lifeline reconstruction has the character of new construction.
Town reconstruction planning at Nam Kem village, Thailand
The tsunami damage to the underground water-supply pipeline at Nam Kem village, Thailand was shown above. In addition to the water-supply pipeline, most houses near the coast also collapsed. To secure the safety of residents and coastal property, the Department of Public Works and Town Planning (hereafter, DPT), Ministry of the Interior, Thailand proposed a town reconstruction plan for Nam Kem village shortly after the tsunami, as shown in Fig. 11. This plan divides the village into four types of land-use areas (public, fishery, living, and monument & sightseeing), and it provides new roadways, parts of which are suitable evacuation routes. It looks like an ideal land readjustment project from the point of view of land-use planning. On the other hand, it would force local fishery residents to move far from the coast. In fact, the destructive tsunami damage at Nam Kem village had received a lot of attention from domestic and international organizations, which sent much disaster relief and many donations to help restore the houses of the victims. This support helped the village to recover earlier. Under military management, permanent houses built within a couple of weeks were provided to those who had lived in the same place as before. Within three months after the tsunami, the affected residents started coming back to their rebuilt houses. This process was so quick that the affected community, the local government, and the central government, including the DPT, could not participate in the town reconstruction. The community relationships were strong because the residents had lived there for a long time, spanning many generations. They did not accept the changes to their livelihoods and residences proposed in the DPT town readjustment plan, and they insisted on restoring the village to its pre-tsunami state.
Since the land readjustment planning was not accepted in the disaster restoration planning and the house rebuilding was fast, the lifeline facilities in the village were also reinstalled in the same places as before. According to the DPT planning, the cost of the planned water-supply pipeline is estimated from the network map, as shown in Fig. 13 (2). When the repair costs of the damaged pipelines and the construction costs of the planned pipeline are compared, as shown in Fig. 12, the costs are 117 thousand USD and 105 thousand USD, respectively, which are not very different. Similarly, the costs for the electric power line are also not very different from each other. It is observed that the cost of the lifeline facilities does not differ much between the new plan and the repair scenario. Whether residents move to live in a safer place or remain in the community depends on the communication between the local community and the government. The roadway administrator and the lifeline companies may join the discussion, providing information for safer town development. Even if residents reject the idea of moving, the town designers and infrastructure organizations could still agree to rebuild better facilities. Three years after the tsunami, most houses were rebuilt within 500 m of the coast, as before. The population did not increase very much, but water users increased from 300 customers to 600 customers. The vulnerable parts, including the water meters mentioned above, have not been improved compared with the former equipment. The shallow wells filled with seawater cannot be restored.
Town reconstruction planning of Aonae district, Okushiri
The Hokkaido southwest-off earthquake (M7.8) that hit Okushiri Island at 22:17 (local time) on July 12, 1993 caused extensive damage and resulted in 172 deaths and 27 missing people; the population of the island was 3700. Aonae district at the south cape of the island, where fishing was the main industry, was the most severely affected part of the island. Tsunami and post-earthquake fires were the main causes of death and destruction. In total, 107 people were killed or missing in Aonae district alone. Those who had escaped to the hill survived. In inundated or burned areas, very few wooden houses remained. Incidentally, half of the direct damage from the tsunami was inflicted on infrastructure and port facilities.
The victims left the shelters after one and a half months, and a total of 900 people stayed in temporary houses for three and a half years. Okushiri town established a disaster restoration office three months after the tsunami and aimed to complete the disaster restoration planning in five years. Its disaster reconstruction planning was supported by the national government and Hokkaido prefecture. The reconstruction project presented to the Aonae district consisted of four parts: the fishery village environmental renovation project, the roadway reconstruction project, the disaster recovery project (construction of a tide wall), and the group relocation project for disaster prevention. The tide wall, which had been built to a height of 6 m after the tsunami in 1983, was reinforced with 5 additional meters of height. The resulting wall height of 11 m is the same as the wave height of the last tsunami. The fishery village environmental renovation project was responsible for the reclaimed land development behind the raised wall. New roadways, water supply, and waste water drains were constructed over the reclaimed land, as shown in Fig. 13. When constructing the reclaimed land, the local government bought all the lots from the residents and readjusted the roadway and the lots. After the lot readjustment, residents bought land back from the government. This process requires land renovation for disaster prevention, financial contribution from the local government, financial support from the national government, and the patience of the residents during reconstruction. In parallel to the land development in the coastal area, the group relocation project led the victims (except the fishery people) to live on the hill, as shown in Fig. 14.
Following the reclaimed land development and new roadway construction, the water-supply pipeline used before the tsunami was left in place underground, and the new pipeline was constructed above it, as shown in Fig. 15. Polyvinyl chloride pipe (PVC) was adopted in the reconstruction work instead of the ductile iron pipe (DIP) that had been used before the tsunami. As shown in the plan of the pipeline networks in Fig. 12, the pipeline in Aonae district was completely reconstructed. In contrast to the previous pipeline, the new pipeline has a streamlined layout. The completion of reconstruction was declared on Okushiri Island in March 1998, 4.5 years after the disaster. The reconstruction of houses and infrastructure took a long time; in exchange, the fishery residents could live in the coastal area under safer conditions. Fig. 13. High-raised tidal wall and reclaimed land development at Aonae district
Comparison of town reconstruction planning between two districts
In Aonae district, the victims were evacuated to shelters at first and then moved to temporary houses for about three years. They were finally settled in permanent houses on reclaimed land. It took such a long time for Aonae district to complete reconstruction because the residents and administrative people worked together to put in place measures that would safeguard them against future large earthquakes and tsunamis. On the other hand, the victims in Nam Kem village needed only three months to move from the temporary shelter to permanent houses because they wanted to rebuild the village as before.
The reconstruction in Nam Kem village was focused on going back to the previous state before the tsunami without any improvement of facilities in terms of seismic safety. Infrastructure, including lifeline facilities, cannot be reconstructed in a short amount of time.
Extensive discussions between roadway authorities and other lifeline companies are required to develop a detailed disaster reconstruction plan. Of course, the residents' opinions should be reflected in the plan to maintain the original community. The disaster reconstruction concept in Japan seeks to give more priority to community opinions, even if it takes a long time to complete the reconstruction. For instance, the temporary houses were closed five years after the Kobe earthquake disaster in 1995. The tsunami-affected community in Nam Kem village did not have the opportunity to have detailed discussions with the government, and they wanted to retain their traditional lifestyle.
In the reconstruction process, the reconstruction speed, environmental condition after the tsunami, financial support of the government, organization acting as the interface between the government and the community, and traditional and cultural living style are closely related. From the point of view of lifeline reconstruction, it is also important to foster a close relationship among the local government, lifeline companies, and the community and to establish a strong foundation for people-to-people links in order to prepare for an emergency.
After the 2004 Indian Ocean earthquake and tsunami, villages and towns in Indonesia faced many problems during the reconstruction process. A lifeline reconstruction process based on community planning has not been reviewed in detail so far. This kind of study would be necessary for an effective lifeline reconstruction strategy.
Residential awareness of water use after tsunami
In the areas affected by the 2004 tsunami, residents had mostly used domestic water from shallow wells. This domestic water became unavailable after the wells were covered with salty water, and the salt damage to the shallow wells seems to affect the quality of residential life for a long time. The water-supply system in Banda Aceh, Indonesia was reconstructed, and the residents changed their water source from shallow wells to a pipeline network. In this section, the lifeline reconstruction and its long-term effects are analyzed, with a focus on residential awareness of water use before and after the tsunami.
Reconstruction of water supply system in Banda Aceh
Banda Aceh is the big city nearest to the epicenter and suffered severely from the 2004 tsunami. Institutions from around the world helped with its reconstruction projects. One of these projects concerned the water supply system of Perusahaan Daerah Air Minum (PDAM, meaning provincial drinking water supply authority). The water purification plant of the PDAM, located about 10 km from the coast, was fortunately not flooded by the tsunami but suffered physical damage to its facilities due to the seismic ground motion. The Swiss government supported the rehabilitation of the water purification plant, whereas the Japanese government planned and installed a 198 km water-supply pipeline network. The repaired purification plant can produce water for 50,000 customers. By October 2010 (almost 6 years after the tsunami), the PDAM supplied water to 32,000 customers and was extending the service to an additional 8,000 customers, who can use PDAM water free of charge until the installation of the pipeline and other accessories is completed. The population of Banda Aceh increased from 170 thousand to 220 thousand as people moved in from the tsunami-affected suburbs. The number of water-supply users also increased after the 2004 tsunami. Even before the tsunami, the share of PDAM users in Banda Aceh was large compared with other cities, because the groundwater level is naturally high and the groundwater contains salt.
Interview survey on residential awareness of water use after tsunami
An interview survey on residential awareness of water use before and after the tsunami was carried out in four districts of Banda Aceh at the beginning of October 2010. The interviews were held at residents' homes one by one in the Indonesian language, and the questionnaire sheets were collected on the spot. The questions concerned water use at home, residential satisfaction ratings of water quality and supply stability, emergency use and so on, and the acceptable suspension time of the water-supply service during a disaster. Fig. 16 shows the interview districts of Banda Aceh, and Table 9 lists the details of the responses in each district. A total of 143 answers were obtained. District A is a resettlement housing district donated by the Tsuchi religious body, in which all the residents moved from tsunami-affected areas inside and outside Banda Aceh, as shown in Fig. 17. The water-supply system there was installed at the same time as the house construction after the tsunami. Districts B, C, and D suffered different levels of tsunami damage. According to the inundation damage reports, Districts C (Kuta Raja) and D (Meuraxa) were inundated with more than 50% of houses damaged, and District B (Syiah Kuala) was inundated with less than 50% of houses damaged. In District B, 80% of the residents lived there before the tsunami, whereas in Districts C and D about half of the residents moved in from other districts or the suburbs. Table 9. Responses of interview survey on water use after tsunami. The number of answers is not statistically sufficient, but the rate of PDAM customers in each district, as shown in Fig. 18(a), is confirmed to be similar to the customer statistics reported by the PDAM. In the resettlement district, A, all residents use the PDAM water supply, and 70 to 80% of residents do so in the other districts. The PDAM customers without charge in Districts C and D are those living in areas where the pipeline is still under construction. Those who do not use PDAM water use domestic water from shallow wells. As a whole, 20% do not use PDAM water and 50% use both PDAM water and domestic water, as shown in Fig. 18(b); the share of well users increases in the inland Districts A and B. Among the PDAM users, 10 to 50% drink the water after boiling, as shown in Fig. 19(a); the severely affected districts, C and D, show a low rate of drinking customers. Less than 10% drink domestic water after boiling, as shown in Fig. 19. The residential awareness of water use is examined with respect to disaster experience and the difference between piped water and domestic water. Here, residential satisfaction was asked about in terms of water quality, water supply stability, emergency stability, and water cost, with five satisfaction options: (1) very satisfied, (2) satisfied, (3) reasonable, (4) a little dissatisfied, and (5) dissatisfied. Each response is converted into a rating point from 5 (very satisfied) to 1 (dissatisfied). The mean satisfaction rating was analyzed by one-way analysis of variance among three groups. Group I comprises the residents who used PDAM water both before and after the tsunami, Group II those who started using PDAM water after the tsunami, and Group III those who have not used PDAM water. The ratings by Groups I and II refer to the PDAM water, and those by Group III refer to the domestic water. Table 10 summarizes the results of the variance analysis for the four questions. The mean rating differs significantly between the PDAM users (Groups I and II) and the domestic well users (Group III) for water quality and supply stability.
The PDAM users rate the water quality at around 3.7 points, but the supply stability at only 2.56 to 2.92 points. No significant difference is seen between PDAM users who started before and after the tsunami. During the interviews, respondents pointed out the problem of water for washing clothes: since the domestic water contains more salt than before, even 6 years after the tsunami, they wash clothes with PDAM water or with water filtered by a household strainer. The residents who are not satisfied with the supply stability replied that they cannot receive an adequate volume of water in the daytime, so they install household pumps to draw water. The number of household pumps seen in the interview survey is not small, and such uncontrolled water pressures may impair the functionality of the whole water supply system. For the emergency supply stability after disasters, there is a significant difference between the groups. Those who continued the same water use as before the tsunami (Groups I and III) give 2.1 rating points, while the new PDAM users give 2.76. The new PDAM users are thought to be people who evacuated to Banda Aceh city. Emergency water delivery by tanks and the new pipelines being installed by the PDAM may contribute to this higher residential satisfaction. The water charge is not a very effective factor for characterizing residential awareness in terms of water use and disaster experience, as shown in Table 10(d). Incidentally, the reason why the PDAM contract rate does not reach 100% of residents seems to be a financial one. Fig. 20 shows the relation between the ratio of water cost to income and the residential satisfaction with the water charge for PDAM water users. Income is classified into 7 classes, as shown in the figure. The water cost comprises the PDAM water charge and the monthly cost of purchasing gallon-sized water bottles. The residential satisfaction with the water charge is defined as the share, among all answers, of those who consider the water charge inexpensive plus half of those who consider it reasonable, when asked to rate the charge as expensive, reasonable, or inexpensive. As can be seen, the residential satisfaction with the water charge increases as income increases. Half of the residents earning over 1.25 million Indonesian Rupiah are satisfied with the PDAM water charge; their PDAM water charge is less than 10% of their income. When the PDAM water charge exceeds 10% of income, it interferes with daily life. When the amount of PDAM water use is compared, there is little difference in personal daily water demand among the income classes. The lifestyle related to water demand does not differ by income, and the satisfaction depends on the income. It should be noted that the PDAM water charge is almost the same as the bottled-water purchasing cost. The respondents buy a monthly average of 11.7 gallon-sized water bottles. One bottle (about 3.785 liters) costs about 5,000 Rupiah, so the bottled water costs 1,321 Rupiah per liter, whereas the PDAM water costs about 3 Rupiah per liter. If the PDAM can satisfy the residents with respect to water quality, so that they can drink the water after boiling, the total residential water cost would decrease to less than 10% of income and the residents would also be satisfied with the water charge. It should be discussed how much can be invested in completing the pipeline network for better water quality on the customer side, considering the water charge; such assessments of water pipeline installation would be necessary future work. In Banda Aceh City, the installation of the water supply pipeline network is almost completed.
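The unit costs quoted above can be checked with a few lines of arithmetic (the monthly bottled-water spending is derived here from the reported figures, not stated in the chapter).

```python
# Quick check of the unit costs quoted above (input values as reported in the interview survey).
bottle_volume_l = 3.785          # one gallon-sized bottle, in liters
bottle_price_rp = 5000           # Rupiah per bottle
bottles_per_month = 11.7
pdam_price_rp_per_l = 3          # reported PDAM tariff per liter

print(round(bottle_price_rp / bottle_volume_l))       # ~1321 Rupiah per liter of bottled water
print(round(bottles_per_month * bottle_price_rp))     # ~58500 Rupiah per month spent on bottles
print(round((bottle_price_rp / bottle_volume_l) / pdam_price_rp_per_l))  # bottled water ~440x the PDAM price
```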
There are many small cities and towns along the coast of Sumatra Island that were covered by the 2004 tsunami wave but where a water supply system has not been installed yet. Even if residents do not drink the domestic water directly, the salty groundwater affects water for daily use for a long time. A water pipeline network, even though it may be damaged by earthquakes and tsunamis, would be necessary as part of the reconstruction work in these coastal areas.
Conclusions
This chapter discussed lifeline restoration and reconstruction after tsunamis, focusing especially on the 2004 Indian Ocean earthquake and tsunami. The findings can be summarized as follows. Business restoration depends more strongly on business facility restoration than on lifeline restoration if the tsunami inundation is higher than 1 m. The lifeline interruption caused by the tsunami affected business continuity more than reported in previous studies.
In the reconstruction process, the reconstruction speed, the environmental conditions after the tsunami, the financial support of the government, the organization acting as the interface between the government and the community, and the traditional and cultural living style are closely related. From the point of view of lifeline reconstruction, it is also important to foster a close relationship among the local government, the lifeline companies, and the community, and to establish a strong foundation for people-to-people links in order to prepare for an emergency. The salty groundwater affects water for daily use for a long time. The water supplied from the pipeline gives high satisfaction with respect to water quality, and the construction of the pipeline network is important to the affected people. During the writing of this chapter, east Japan was hit by strong seismic motion and covered by a huge tsunami, and many developed areas were turned into catastrophic ruins. What we learned from Indonesia, Thailand, and Sri Lanka, as introduced herein, may not be directly applicable to Japan because of the different living environment and water use, but the findings on business recovery and reconstruction planning would contribute to reconstructing the affected areas.
Acknowledgment
The author expresses special thanks to Emeritus Professor of Kobe University, Shiro Takada, Mr. Arun Pinta, Department of Disaster Prevention and Mitigation (DDPM), Ministry of Interior, Thailand, and Dr. Nimal P. D. Gamage, Department of Civil Engineering, | 11,229.4 | 2011-12-16T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
Proteomics data for characterizing Microbacterium oleivorans A9, an uranium-tolerant actinobacterium isolated near the Chernobyl nuclear power plant
Microbacterium oleivorans A9 cells were exposed or not to 10 µM uranyl nitrate as resting cells in sodium chloride solution. Bacteria exposed to U(VI) and controls were harvested after 0.5, 4, and 24 h of toxicant exposure. Bacteria were subjected to high-throughput proteomics analysis using a Q-Exactive HF high resolution tandem mass spectrometer incorporating an ultra-high-field orbitrap analyzer. MS/MS spectra were assigned with a protein sequence database derived from a draft genome obtained by Illumina sequencing and systematic six-reading frame translation of all the contigs. Proteins identified in bacteria exposed to U(VI) and controls at the three time points allow defining the proteome dynamics upon uranium stress. The data reported here are related to a published study regarding the proteome dynamics of M. oleivorans A9 upon uranium stress by Gallois et al. (in press) entitled “Proteogenomic insights into uranium tolerance of a Chernobyl׳s Microbacterium bacterial isolate”. The data accompanying the manuscript describing the database searches and comparative analysis have been deposited to the ProteomeXchange with identifier PXD005794.
Subject area: Environmental microbiology
More specific subject area: Actinobacteria comparative proteogenomics
Type of data: Figure, mass spectrometry raw files, Excel tables
How data was acquired: Data-dependent acquisition of tandem mass spectra using a Q-Exactive HF tandem mass spectrometer (Thermo).
Data format: Raw and processed
Experimental factors: Cells, at the exponential growth phase, were harvested and the resulting cell pellets were resuspended in NaCl solution with 0 (control) or 10 µM uranyl nitrate. For each of the three sampling time points, 0.5, 4 and 24 h, four biological replicates were performed.
Experimental features: The 24 proteomes were briefly run on SDS-PAGE, followed by trypsin proteolysis. Tryptic peptides were analyzed by nanoLC-MS/MS and spectra were assigned with a draft genome-derived protein sequence database.
Data source location: CEA-Marcoule, DRF-Li2D, Laboratory "Innovative technologies for Detection and Diagnostics", BP 17171, F-30200 Bagnols-sur-Cèze, France
Data accessibility: Data are within this article and deposited to the ProteomeXchange via the PRIDE repository with identifier PRIDE: PXD005794.
Value of the data
The data are an interesting resource regarding the proteome content of a soil bacterium with extreme tolerance to heavy metals.
The proteogenomics strategy described here allows a quick identification of proteins based on a draft genome of this high GC content organism.
The data have been used to define the proteome changes in response to uranium stress in an Actinobacteria isolate from the trench T22 located near the Chernobyl nuclear power plant. As described in detail in the accompanying manuscript [1], the uranyl stress perturbed the phosphate and iron metabolic pathways.
Data
This report contains the complete list of peptide-to-spectrum assignments for the control (not treated) samples and uranium-treated samples of Microbacterium oleivorans A9 in the first round of the proteogenomics cascade search (Supplementary Table S1) and in the second round of the cascade search (Supplementary Table S2). A total of 746,092 MS/MS spectra were assigned in the first search round and 747,621 MS/MS spectra were interpreted in the second search round. All the tandem mass spectrometry characteristics (measured mass, charge, protease cleavage, post-translational modifications, and retention time) are indicated. Fig. 1 shows the schematic flowchart of the experiments, data processing, and results that were obtained and formatted in .xls tables. The proteome data from four independent biological replicates per time point (0.5, 4 and 24 h) upon uranium exposure or control, i.e. from 24 samples, were assigned against the M. oleivorans A9 ORF database described by Gallois et al. [1] following a proteogenomic approach [2][3][4]. The deposited data comprise the 24 raw files and the interpreted files. Supplementary Tables S1 and S2 list the peptide-to-spectrum matches with all the tandem mass spectrometry characteristics in the first query round against the large proteogenomics database and the second query round against the reduced ORF database, respectively.
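As a rough illustration of how such peptide-to-spectrum matches translate into per-protein spectral counts and a between-condition comparison, a simplified Python sketch is given below; it is not the T-Fold procedure of PatternLab used in the study, and the accessions and counts are hypothetical.

```python
# Simplified illustration of spectral counting and a per-protein comparison between
# uranium-exposed and control replicates at one time point. Accessions and counts are
# invented for the example; the actual analysis used PatternLab 2.0 (T-Fold).
from collections import Counter
import numpy as np
from scipy import stats

# Spectral counting: one count per assigned MS/MS spectrum, keyed by protein accession.
psm_accessions = ["prot_0001", "prot_0001", "prot_0042", "prot_0001", "prot_0042"]  # hypothetical PSM list
spectral_counts = Counter(psm_accessions)
print(spectral_counts)

def compare_protein(counts_uranium, counts_control, pseudocount=1.0):
    """Return (fold change, t-test p-value) from spectral counts of four replicates each."""
    u = np.asarray(counts_uranium, dtype=float) + pseudocount
    c = np.asarray(counts_control, dtype=float) + pseudocount
    fold_change = u.mean() / c.mean()
    _, p_value = stats.ttest_ind(u, c)
    return fold_change, p_value

# Hypothetical spectral counts for one protein (4 biological replicates per condition).
print(compare_protein([35, 41, 38, 44], [18, 22, 20, 16]))
```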
Preparation of Microbacterium oleivorans A9 samples
M. oleivorans A9 cells were isolated from Chernobyl trench T22 soil [5] and exposed as described [1]. Briefly, cells were harvested and the cell pellets were resuspended in 0.1 M NaCl pH 5.0 with 0 (control) or 10 µM uranyl nitrate. Four independent biological replicates were performed for statistical purposes. Fractions of 1 ml of cell suspension were taken after 0.5, 4, and 24 h for each condition. After centrifugation, the resulting supernatants were removed and the pellets were stored at -80°C until proteomic analysis. Proteogenomic analyses were carried out as previously described [1,6], taking into account the recent draft genome of the strain.
Protein extracts and tandem mass spectrometry
The 24 peptide mixtures were analyzed using a Q-Exactive HF mass spectrometer (Thermo-Fisher) coupled to an UltiMate 3000 LC system (Dionex-LC Packings) under conditions similar to those previously described. Peptide mixtures (10 μl) were loaded and desalted on-line on a reverse-phase precolumn (Acclaim PepMap 100 C18) from LC Packings. Peptides were then resolved on a reverse-phase Acclaim PepMap 100 C18 column and injected into the Q-Exactive HF mass spectrometer. The Q-Exactive HF instrument was operated according to a Top20 data-dependent acquisition method as previously described [7].
Protein sequence database for proteogenomic MS/MS assignment
The recorded MS/MS spectra for the 24 samples were searched against our home-made ORF database with the parameters described previously [1]. This database contains 30,853 polypeptide sequences, for a total of 4,903,573 amino acids with an average of 159 amino acids per polypeptide. The number of MS/MS spectra per protein (spectral counts) was determined for the four replicates of each of the three time points for both conditions. The statistical protein variation was compared for each time point between the uranyl exposure and the control conditions using the T-Fold option of PatternLab 2.0 software [8]. | 1,552.6 | 2018-10-30T00:00:00.000 | [
"Biology"
] |
On Tanaka's Prolongation Procedure for Filtered Structures of Constant Type
We present Tanaka's prolongation procedure for filtered structures on manifolds, discovered in [Tanaka N., J. Math. Kyoto Univ. 10 (1970), 1-82], in the spirit of Singer-Sternberg's description of the prolongation of usual G-structures [Singer I.M., Sternberg S., J. Analyse Math. 15 (1965), 1-114; Sternberg S., Prentice-Hall, Inc., Englewood Cliffs, N.J., 1964]. This approach gives a transparent point of view on the Tanaka constructions, avoiding many technicalities of the original Tanaka paper.
Introduction
This note is based on a series of lectures given by the author in the Working Geometry Seminar at the Department of Mathematics at Texas A&M University in Spring 2009. The topic is the prolongation procedure for filtered structures on manifolds discovered by Noboru Tanaka in the paper [10] published in 1970. The Tanaka prolongation procedure is an ingenious refinement of Cartan's method of equivalence. It provides an effective algorithm for the construction of canonical frames for filtered structures, and for the calculation of the sharp upper bound of the dimension of their algebras of infinitesimal symmetries. This note is by no means a complete survey of the Tanaka theory; for such a survey we refer the reader to [5]. Our goal here is to describe the geometric aspects of Tanaka's prolongation procedure using a language similar to the one used by Singer and Sternberg in [7] and [9] for the description of the prolongation of the usual G-structures. We found that it gives a quite natural and transparent point of view on Tanaka's constructions, avoiding many formal definitions and technicalities of the original Tanaka paper. We believe this point of view will be useful to anyone who is interested in studying both the main ideas and the details of this fundamental Tanaka construction. We hope that the material of Sections 3 and 4 will be of interest to experts as well. Our language also allows us to generalize the Tanaka procedure in several directions, including filtered structures with non-constant and non-fundamental symbols. These generalizations, with applications to the local geometry of distributions, will be given in a separate paper.
Statement of the problem
Let D be a rank l distribution on a manifold M; that is, a rank l subbundle of the tangent bundle TM. Two vector distributions D_1 and D_2 are called equivalent if there exists a diffeomorphism F : M → M such that F_* D_1(x) = D_2(F(x)) for any x ∈ M. Two germs of vector distributions D_1 and D_2 at the point x_0 ∈ M are called equivalent if there exist neighborhoods U and Ũ of x_0 and a diffeomorphism F : U → Ũ such that F_* D_1(x) = D_2(F(x)) for any x ∈ U. The general question is: When are two germs of distributions equivalent?

Weak derived flags and symbols of distributions

To answer this question, one first assigns to D its weak derived flag and its symbol. Set D^{-1} := D and D^{-j-1} := D^{-j} + [D, D^{-j}], and assume that the dimensions of the subspaces D^{-j}(x) do not depend on x. Let g_{-1}(x) := D^{-1}(x), g_{-j}(x) := D^{-j}(x)/D^{-j+1}(x) for j > 1, and m(x) := ⊕_{j>0} g_{-j}(x). This space is endowed naturally with the structure of a graded nilpotent Lie algebra, generated by g_{-1}(x). Indeed, let p_j : D^j(x) → g_j(x) be the canonical projection to the factor space. Take Y_1 ∈ g_i(x) and Y_2 ∈ g_j(x). To define the Lie bracket [Y_1, Y_2], take a local section Ỹ_1 of the distribution D^i and a local section Ỹ_2 of the distribution D^j such that p_i(Ỹ_1(x)) = Y_1 and p_j(Ỹ_2(x)) = Y_2, and set

[Y_1, Y_2] := p_{i+j}([Ỹ_1, Ỹ_2](x)).    (1.1)
It is easy to see that the right-hand side of (1.1) does not depend on the choice of sections Y 1 and Y 2 . Besides, g −1 (x) generates the whole algebra m(x). A graded Lie algebra satisfying the last property is called fundamental. The graded nilpotent Lie algebra m(x) is called the symbol of the distribution D at the point x.
Fix a fundamental graded nilpotent Lie algebra m = ⊕_{i=-µ}^{-1} g_i. A distribution D is said to be of constant symbol m or of constant type m if for any x the symbol m(x) is isomorphic to m as a nilpotent graded Lie algebra. In general this assumption is quite restrictive. For example, in the case of rank two distributions on manifolds with dim M ≥ 9, symbol algebras depend on continuous parameters, which implies that generic rank 2 distributions in these dimensions do not have a constant symbol. For rank 3 distributions with dim D^{-2} = 6 the same holds in the case dim M = 7, as was shown in [3]. Following Tanaka, and for simplicity of presentation, we consider here distributions of constant type m only. One can construct the flat distribution D_m of constant type m. For this, let M(m) be the simply connected Lie group with the Lie algebra m and let e be its identity. Then D_m is the left invariant distribution on M(m) such that D_m(e) = g_{-1}.
The bundle P_0(m) and its reductions

To a distribution of type m one can assign a principal bundle in the following way. Let G_0(m) be the group of automorphisms of the graded Lie algebra m; that is, the group of all automorphisms A of the linear space m preserving both the Lie brackets (A([v, w]) = [A(v), A(w)] for any v, w ∈ m) and the grading (A(g_i) = g_i for any i < 0). Let P_0(m) be the set of all pairs (x, ϕ), where x ∈ M and ϕ : m → m(x) is an isomorphism of the graded Lie algebras m and m(x). Then P_0(m) is a principal G_0(m)-bundle over M. The right action R_A of an automorphism A ∈ G_0(m) is given by R_A(x, ϕ) = (x, ϕ ∘ A). Note that since g_{-1} generates m, the group G_0(m) can be identified with a subgroup of GL(g_{-1}). For the same reason a point (x, ϕ) ∈ P_0(m) of a fiber of P_0(m) is uniquely defined by ϕ|_{g_{-1}}. So one can identify P_0(m) with the set of pairs (x, ψ), where x ∈ M and ψ : g_{-1} → D(x) can be extended to an isomorphism of the graded Lie algebras m and m(x). Speaking informally, P_0(m) can be seen as a G_0(m)-reduction of the bundle of all frames of the distribution D. Besides, the Lie algebra g_0(m) of G_0(m) is the algebra of all derivations a of m preserving the grading (i.e. a g_i ⊂ g_i for all i < 0). Additional structures on distributions can be encoded by reductions of the bundle P_0(m). More precisely, let G_0 be a Lie subgroup of G_0(m) and let P_0 be a principal G_0-bundle which is a reduction of the bundle P_0(m). Since the Lie algebra g_0 of G_0 is a subalgebra of the algebra of derivations of m preserving the grading, the subspace m ⊕ g_0 is endowed with the natural structure of a graded Lie algebra. For this we only need to define the brackets [f, v] for f ∈ g_0 and v ∈ m, because m and g_0 are already Lie algebras; these are defined by [f, v] := f(v).
Denote by L x the left translation on M (m) by an element x. Finally, let P 0 (m, g 0 ) be the set of all pairs (x, ϕ), where x ∈ M (m) and ϕ : m → m(x) is an isomorphism of the graded Lie algebras m and m(x) such that (L x −1 ) * ϕ ∈ G 0 . The bundle P 0 (m, g 0 ) is called the flat structure of constant type (m, g 0 ). Let us give some examples.
Example 1. G-structures. Assume that D = T M . So m = g −1 is abelian, G 0 (m) = GL(m), and P 0 (m) coincides with the bundle F(M ) of all frames on M . In this case P 0 is nothing but a usual G 0 -structure.
Example 2. Contact distributions. Let D be the contact distribution in R 2n+1 . Its symbol m cont,n is isomorphic to the Heisenberg algebra η 2n+1 with grading g −1 ⊕ g −2 , where g −2 is the center of η 2n+1 . Obviously, a skew-symmetric form Ω is well defined on g −1 , up to a multiplication by a nonzero constant. The group G 0 (m cont,n ) of automorphisms of m cont,n is isomorphic to the group CSP(g −1 ) of conformal symplectic transformations of g −1 , i.e. transformations preserving the form Ω, up to a multiplication by a nonzero constant.
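As a concrete illustration (with the coordinates and vector fields chosen here for convenience), the case n = 1 can be computed directly:

```latex
% The contact distribution on R^3 and its Heisenberg symbol (a standard computation).
\begin{aligned}
D &= \ker(dz - y\,dx) = \operatorname{span}\{X_1, X_2\}, \qquad
X_1 = \partial_x + y\,\partial_z, \quad X_2 = \partial_y, \\
[X_2, X_1] &= \partial_z \notin D, \qquad D^{-2} = T\mathbb{R}^3, \\
g_{-1}(x) &= D(x), \qquad g_{-2}(x) = T_x\mathbb{R}^3 / D(x) \cong \langle \partial_z \rangle, \\
m(x) &= g_{-1}(x) \oplus g_{-2}(x) \cong \eta_3 .
\end{aligned}
```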
Example 3. Maximally nonholonomic rank 2 distributions in R^5. Let D be a rank 2 distribution in R^5 with degree of nonholonomy equal to 3 at every point. Such distributions were treated by É. Cartan in his famous work [1]. In this case dim D^{-2} ≡ 3 and dim D^{-3} ≡ 5. The symbol at any point is isomorphic to the Lie algebra m_(2,5) generated by X_1, X_2, X_3, X_4, and X_5 with the following nonzero products: [X_1, X_2] = X_3, [X_1, X_3] = X_4, and [X_2, X_3] = X_5. The grading is given by g_{-1} = ⟨X_1, X_2⟩, g_{-2} = ⟨X_3⟩, and g_{-3} = ⟨X_4, X_5⟩, where ⟨Y_1, . . . , Y_k⟩ denotes the linear span of the vectors Y_1, . . . , Y_k. Since m_(2,5) is a free nilpotent Lie algebra with two generators X_1 and X_2, its group of grading-preserving automorphisms can be identified with GL(g_{-1}). Example 4. Sub-Riemannian structures of constant type (see also [6]). Assume that each space D(x) is endowed with an Euclidean structure Q_x depending smoothly on x. In this situation the pair (D, Q) defines a sub-Riemannian structure on a manifold M. Recall that g_{-1}(x) = D(x). This motivates the following definition: A pair (m, Q), where m = ⊕_{j=-µ}^{-1} g_j is a fundamental graded Lie algebra and Q is an Euclidean structure on g_{-1}, is called a sub-Riemannian symbol. Two sub-Riemannian symbols (m, Q) and (m̃, Q̃) are isomorphic if there exists a map ϕ : m → m̃ which is an isomorphism of the graded Lie algebras m and m̃ preserving the Euclidean structures Q and Q̃ (i.e. such that Q̃(ϕ(v_1), ϕ(v_2)) = Q(v_1, v_2) for any v_1 and v_2 in g_{-1}). Fix a sub-Riemannian symbol (m, Q). A sub-Riemannian structure (D, Q) is said to be of constant type (m, Q) if for every x the sub-Riemannian symbol (m(x), Q_x) is isomorphic to (m, Q).
It may happen that a sub-Riemannian structure does not have a constant symbol even if the distribution does. Such a situation occurs already in the case of the contact distribution on R 2n+1 for n > 1 (see Example 2 above). As was mentioned above, in this case a skew-symmetric form Ω is well defined on g −1 , up to a multiplication by a nonzero constant. If in addition a Euclidean structure Q is given on g −1 , then a skew-symmetric endomorphism J of g −1 is well defined, up to a multiplication by a nonzero constant, by Ω(v 1 , v 2 ) = Q(Jv 1 , v 2 ). Take 0 < β 1 ≤ · · · ≤ β n so that {±β 1 i, . . . , ±β n i} is the set of the eigenvalues of J. Then a sub-Riemannian symbol with m = m cont,n is determined uniquely (up to an isomorphism) by a point [β 1 : β 2 : . . . : β n ] of the projective space RP n−1 .
Let (D, Q) be a sub-Riemannian structure of constant type (m, Q) and let G_0(m, Q) ⊂ G_0(m) be the group of automorphisms of the sub-Riemannian symbol (m, Q). Let P_0(m, Q) be the set of all pairs (x, ϕ), where x ∈ M and ϕ : m → m(x) is an isomorphism of the sub-Riemannian symbols (m, Q) and (m(x), Q_x). Obviously, the bundle P_0(m, Q) is a reduction of P_0(m) with the structure group G_0(m, Q).
Example 5. Second order ordinary dif ferential equations up to point transformations. Assume that D is a contact distribution on a 3-dimensional manifold endowed with two distinguished transversal line sub-distributions L 1 and L 2 . Such structures appear in the study of second order ordinary differential equations y ′′ = F (t, y, y ′ ) modulo point transformations. Indeed, let J i (R, R) be the space of i-jets of mappings from R to R. As the distribution D we take the standard contact distribution on J 1 (R, R). In the standard coordinates (t, y, p) on J 1 (R, R) this distribution is given by the Pfaffian equation dy − pdt = 0. The natural lifts to J 1 of solutions of the differential equation form the 1-foliation tangent to D. The tangent lines to this foliation define the sub-distribution L 1 . In the coordinates (t, y, p) the sub-distribution L 1 is generated by the vector field ∂ ∂t + p ∂ ∂y + F (t, y, p) ∂ ∂p . Finally, consider the natural bundle J 1 (R, R) → J 0 (R, R) and let L 2 be the distribution of the tangent lines to the fibers. The sub-distribution L 2 is generated by the vector field ∂ ∂p . The triple (D, L 1 , L 2 ) is called the pseudo-product structure associated with the second order ordinary differential equation. Two second order differential equations are equivalent with respect to the group of point transformations if and only if there is a diffeomorphism of J 1 (R, R) sending the pseudo-product structure associated with one of them to the pseudo-product structure associated with the other one. This equivalence problem was treated byÉ. Cartan in [2] and earlier by A. Tresse in [12] and [13]. The symbol of the distribution is m cont,1 ∼ η 3 (see Example 2 above) and the plane g −1 is endowed with two distinguished transversal lines. This additional structure is encoded by the subgroup G 0 of the group G 0 (m cont,1 ) preserving each of these lines.
Another important class of geometric structures that can be encoded in this way is that of CR-structures (see § 10 of [10] for more details).
Algebraic and geometric Tanaka prolongations
In [10] Tanaka solves the equivalence problem for structures of constant type (m, g 0 ). Two of Tanaka's main constructions are the algebraic prolongation of the algebra m ⊕ g 0 and the geometric prolongation of structures of type (m, g 0 ), which imitates the algebraic prolongation.
First he defines a graded Lie algebra which is, in essence, the maximal (nondegenerate) graded Lie algebra containing the graded Lie algebra ⊕ i≤0 g i as its non-positive part. More precisely, Tanaka constructs a graded Lie algebra g(m, g 0 ) = ⊕ i∈Z g i (m, g 0 ) satisfying the following three conditions: 1. g i (m, g 0 ) = g i for all i ≤ 0; 2. if X ∈ g i (m, g 0 ) with i > 0 satisfies [X, g −1 ] = 0, then X = 0; 3. g(m, g 0 ) is the maximal graded Lie algebra satisfying Properties 1 and 2.
This graded Lie algebra g(m, g 0 ) is called the algebraic universal prolongation of the graded Lie algebra m ⊕ g 0 . An explicit realization of the algebra g(m, g 0 ) will be described later in Section 4. It turns out ([10, § 6], [14, § 2]) that the Lie algebra of infinitesimal symmetries of the flat structure of type (m, g 0 ) can be described in terms of g(m, g 0 ). If dim g(m, g 0 ) is finite (which is equivalent to the existence of l > 0 such that g l (m, g 0 ) = 0), then the algebra of infinitesimal symmetries is isomorphic to g(m, g 0 ). The analogous formulation in the case when g(m, g 0 ) is infinite dimensional may be found in [10, § 6].
Furthermore, for a structure P 0 of type (m, g 0 ), Tanaka constructs a sequence of bundles {P i } i∈N , where P i is a principal bundle over P i−1 with an abelian structure group of dimension equal to dim g i (m, g 0 ). In general P i is not a frame bundle. This is the case only for m = g −1 ; that is, for G-structures. But if dim g(m, g 0 ) is finite or, equivalently, if there exists l ≥ 0 such that g l+1 (m, g 0 ) = 0, then the bundle P l+µ is an e-structure over P l+µ−1 , i.e. P l+µ−1 is endowed with a canonical frame (a structure of absolute parallelism). Note that all P i with i ≥ l are identified with one another by the canonical projections (which are diffeomorphisms in that case). Hence, P l is endowed with a canonical frame. Once a canonical frame is constructed, the equivalence problem for structures of type (m, g 0 ) is in essence solved. Moreover, dim g(m, g 0 ) gives the sharp upper bound for the dimension of the algebra of infinitesimal symmetries of such structures.
By Tanaka's geometric prolongation we mean his construction of the sequence of bundles {P i } i∈N . In this note we mainly concentrate on a description of this geometric prolongation using a language different from Tanaka's original one. In Section 2 we review the prolongation of usual G-structures in the language of Singer and Sternberg. We do this in order to prepare the reader for the next section, where the first Tanaka geometric prolongation is given in a completely analogous way. We believe that after reading Section 3 the reader will already have an idea how to proceed with the higher order Tanaka prolongations so that technicalities of Section 4 can be easily overcome.
Review of prolongation of G-structures
Before treating the general case we review the prolongation procedure for structures with m = g −1 , i.e. for usual G-structures. We follow [7] and [9]. Let Π 0 : P 0 → M be the canonical projection and V (λ) ⊂ T λ P 0 the tangent space at λ to the fiber of P 0 over the point Π 0 (λ). The subspace V (λ) is also called the vertical subspace of T λ P 0 ; in fact V (λ) = ker (Π 0 ) * (λ). Recall that the space V (λ) can be identified with the Lie algebra g 0 of G 0 : the identification sends X ∈ g 0 to d/dt (λ · e tX )| t=0 , where e tX is the one-parametric subgroup generated by X. Recall also that an Ehresmann connection on the bundle P 0 is a distribution H on P 0 such that T λ P 0 = V (λ) ⊕ H(λ) for every λ ∈ P 0 . Once an Ehresmann connection H and a basis in the space g −1 ⊕ g 0 are fixed, the bundle P 0 is endowed with a frame in a canonical way. Indeed, let λ = (x, ϕ) ∈ P 0 . Then ϕ : g −1 → T x M is an isomorphism, (Π 0 ) * maps H(λ) isomorphically onto T x M , and V (λ) is identified with g 0 , so one obtains an isomorphism ϕ H(λ) : g −1 ⊕ g 0 → T λ P 0 (the map (2.3)). If one fixes a basis in g −1 ⊕ g 0 , then the images of this basis under the maps ϕ H(λ) define the frame (the structure of absolute parallelism) on P 0 . The question is whether an Ehresmann connection can be chosen canonically. To answer this question, first one introduces a special g −1 -valued 1-form ω on P 0 (the soldering form), ω λ (Y ) = ϕ −1 ((Π 0 ) * Y ) for Y ∈ T λ P 0 , and, for a horizontal distribution H, its structure function (torsion) C H with values in Hom(g −1 ∧ g −1 , g −1 ), expressed through ω and the map ϕ H defined by (2.3). If H and H̃ are two horizontal subspaces of T λ P 0 , their difference defines a map f H H̃ ∈ Hom(g −1 , g 0 ). In the opposite direction, it is clear that for any f ∈ Hom(g −1 , g 0 ) there exists a horizontal subspace H̃ such that f = f H H̃ . The map ∂ : Hom(g −1 , g 0 ) → Hom(g −1 ∧ g −1 , g −1 ) defined by (∂f )(v 1 , v 2 ) = f (v 1 )v 2 − f (v 2 )v 1 (formula (2.4)) is called the Spencer operator 1 . By direct computations ([7, p. 42], [9, p. 317], or the proof of the more general statement in Proposition 3.1 below) one obtains the identity C H̃ = C H + ∂ f H H̃ (formula (2.5)).
Now fix a subspace N ⊂ Hom(g −1 ∧ g −1 , g −1 ) complementary to Im ∂, i.e. Hom(g −1 ∧ g −1 , g −1 ) = Im ∂ ⊕ N .
Speaking informally, the subspace N defines the normalization conditions for the first prolongation. The first prolongation of P 0 is the bundle (P 0 ) (1) over P 0 whose fiber over a point λ ∈ P 0 is the set of all horizontal subspaces H of T λ P 0 such that their structure functions satisfy the chosen normalization condition, i.e. C H ∈ N . Obviously, the fibers of (P 0 ) (1) are not empty, and if two horizontal subspaces H and H̃ belong to the same fiber, then f H H̃ ∈ ker ∂. The subspace g 1 = ker ∂ of Hom(g −1 , g 0 ) is called the first algebraic prolongation of g 0 ⊂ gl(g −1 ). Note that it is not important that g 0 be a subalgebra of gl(g −1 ): the first algebraic prolongation can be defined for any subspace of gl(g −1 ) (see the further generalization below). If g 1 = 0 then the choice of the "normalization conditions" N determines an Ehresmann connection on P 0 , and P 0 is endowed with a canonical frame. As an example consider a Riemannian structure. In this case g 0 = so(n), where n = dim g −1 , and it is easy to show that g 1 = 0. Moreover, dim Hom(g −1 ∧ g −1 , g −1 ) = dim Hom(g −1 , g 0 ) = n 2 (n − 1)/2. Hence, Im ∂ = Hom(g −1 ∧ g −1 , g −1 ) and the complementary subspace N must be equal to 0. So, in this case one gets the canonical Ehresmann connection with zero structure function (torsion), which is nothing but the Levi-Civita connection.
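As a sanity check on the claim that g 1 = 0 for g 0 = so(n), the classical three-line argument can be written out as follows (this is the standard computation, supplied here for convenience rather than taken from the text):

```latex
% Verification that the first prolongation of so(n) vanishes.
% Let f \in g_1, i.e. f : g_{-1} \to so(n) with f(u)v = f(v)u, and put
% S(u,v,w) := \langle f(u)v, w \rangle. Then
\[
S(u,v,w)=S(v,u,w)\quad(\text{symmetry }f(u)v=f(v)u),\qquad
S(u,v,w)=-S(u,w,v)\quad(\text{skew-symmetry of }f(u)),
\]
\[
S(u,v,w)=S(v,u,w)=-S(v,w,u)=-S(w,v,u)=S(w,u,v)=S(u,w,v)=-S(u,v,w),
\]
% hence S is identically zero, therefore f = 0 and g_1 = 0.
```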
If g 1 ≠ 0, we continue the prolongation procedure by induction. Given a linear space W denote by Id W the identity map on W . The bundle (P 0 ) (1) is a frame bundle with the abelian structure group G 1 of all maps A ∈ GL(g −1 ⊕ g 0 ) such that A(v) = v + T (v) for v ∈ g −1 and A(v) = v for v ∈ g 0 , where T ∈ g 1 . The right action R A of A ∈ G 1 on a fiber of (P 0 ) (1) is defined by the following rule: R A (ϕ) = ϕ • A. Observe that g 1 is isomorphic to the Lie algebra of G 1 .
Footnote 1. In [9] this operator is called the antisymmetrization operator, but we prefer to call it the Spencer operator, because, after a certain interpretation of the spaces Hom(g −1 , g 0 ) and Hom(g −1 ∧ g −1 , g −1 ), this operator can be identified with an appropriate δ-operator introduced by Spencer in [8] for the study of overdetermined systems of partial differential equations. Indeed, since g 0 is a subspace of gl(g −1 ), the space Hom(g −1 , g 0 ) can be seen as a subspace of the space of g −1 -valued one-forms on g −1 with linear coefficients, while Hom(g −1 ∧ g −1 , g −1 ) can be seen as the space of g −1 -valued two-forms on g −1 with constant coefficients. Then the operator ∂ defined by (2.4) coincides with the restriction to Hom(g −1 , g 0 ) of the exterior differential acting between the above-mentioned spaces of one-forms and two-forms, i.e. with the corresponding Spencer δ-operator.
Set P 1 = (P 0 ) (1) . The second prolongation P 2 of P 0 is by definition the first prolongation of the frame bundle P 1 , P 2 def = (P 1 ) (1) and so on by induction: the i-th prolongation P i is the first prolongation of the frame bundle P i−1 .
Let us describe the structure group G i of the frame bundle P i over P i−1 in more detail. For this one can define the Spencer operator and the first algebraic prolongation also for a subspace W of Hom(g −1 , V ), where V is a linear space which does not necessarily coincide with g −1 as before. In this case the Spencer operator is the operator from Hom(g −1 , W ) to Hom(g −1 ∧ g −1 , V ) defined by the same formula as in (2.4). The first prolongation W (1) of W is the kernel of this Spencer operator. Note that by definition g 1 = (g 0 ) (1) . Then the i-th prolongation g i of g 0 is defined by the following recursive formula: g i = (g i−1 ) (1) . Note that g i ⊂ Hom(g −1 , g i−1 ). By (2.5) and the definition of the Spencer operator, the bundle P i is a frame bundle with the abelian structure group G i of all maps A ∈ GL(g −1 ⊕ g 0 ⊕ · · · ⊕ g i−1 ) such that A(v) = v + T (v) for v ∈ g −1 and A(v) = v for v ∈ g 0 ⊕ · · · ⊕ g i−1 , where T ∈ g i . In particular, if g l+1 = 0 for some l ≥ 0, then the bundle P l is endowed with the canonical frame and we are done.
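The recursion g i = (g i−1 ) (1) used above can be written compactly; the display below is a standard formulation of the Spencer operator and of the first prolongation of a subspace W ⊂ Hom(g −1 , V ), restating the definition just given:

```latex
% Spencer operator and first prolongation of a subspace W \subset Hom(g_{-1}, V)
\[
\partial:\operatorname{Hom}(\mathfrak{g}_{-1},W)\to\operatorname{Hom}(\mathfrak{g}_{-1}\wedge\mathfrak{g}_{-1},V),
\qquad
(\partial f)(v_1,v_2)=f(v_1)v_2-f(v_2)v_1,
\]
\[
W^{(1)}:=\ker\partial=\{\,f\in\operatorname{Hom}(\mathfrak{g}_{-1},W)\;:\;f(v_1)v_2=f(v_2)v_1\ \ \forall\,v_1,v_2\in\mathfrak{g}_{-1}\,\},
\qquad
\mathfrak{g}_i=(\mathfrak{g}_{i-1})^{(1)}.
\]
```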
Tanaka's first prolongation
Now consider the general case. As before, P 0 is a structure of constant type (m, g 0 ). Let Π 0 : P 0 → M be the canonical projection. The filtration {D i } i<0 of T M induces a filtration {D i 0 } i≤0 of T P 0 as follows: D i 0 (λ) = {v ∈ T λ P 0 : (Π 0 ) * v ∈ D i (Π 0 (λ))} for i < 0, and D 0 0 (λ) is the tangent space at λ to the fiber of P 0 . We also set D i 0 = 0 for all i > 0. Note that D 0 0 (λ), being the tangent space to the fiber, can be identified with g 0 . Denote by I λ : g 0 → D 0 0 (λ) the identifying isomorphism. Fix a point λ ∈ P 0 and let π i 0 : D i 0 (λ) → D i 0 (λ)/D i+1 0 (λ) be the canonical projection to the factor space. Note that Π 0 * induces an isomorphism between the space D i 0 (λ)/D i+1 0 (λ) and the space D i (Π 0 (λ))/D i+1 (Π 0 (λ)) for any i < 0. We denote this isomorphism by Π i 0 . The fiber of the bundle P 0 over a point x ∈ M is a subset of the set of all maps from m to ⊕ i<0 D i (x)/D i+1 (x) which are isomorphisms of the graded Lie algebras m = ⊕ i<0 g i and ⊕ i<0 D i (x)/D i+1 (x). We are going to construct a new bundle P 1 over the bundle P 0 such that the fiber of P 1 over a point λ = (x, ϕ) ∈ P 0 will be a certain subset of the set of all maps from m ⊕ g 0 into an appropriate space associated with T λ P 0 , described below. For this fix again a point λ = (x, ϕ) ∈ P 0 . For any i < 0 choose a subspace H i ⊂ D i 0 (λ) such that D i 0 (λ) = H i ⊕ D i+1 0 (λ) (the splitting (3.2)). Then the map Π i 0 • π i 0 | H i defines an isomorphism between H i and D i (Π 0 (λ))/D i+1 (Π 0 (λ)). So, once a tuple of subspaces H = {H i } i<0 is chosen, one can define a map ϕ H on m ⊕ g 0 by means of these isomorphisms and of I λ . Tuples of subspaces satisfying (3.2) play here the same role as horizontal subspaces in the prolongation of the usual G-structures. Can we choose a tuple {H i } i<0 in a canonical way? For this, by analogy with the prolongation of G-structures, we introduce a "partial soldering form" of the bundle P 0 and the structure function of a tuple H,
defined as follows. Let pr H i be the projection of D i 0 (λ)/D i+2 0 (λ) to D i+1 0 (λ)/D i+2 0 (λ) parallel to H i (or corresponding to the splitting (3.2)). Given vectors v 1 ∈ g −1 and v 2 ∈ g i , take two vector fields Y 1 and Y 2 in a neighborhood of λ in P 0 such that Y 1 is a section of D −1 0 , Y 2 is a section of D i 0 , and Then set In the above formula we take the equivalence class of the vector [Y 1 , Y 2 ](λ) in D i−1 0 (λ)/D i+1 0 (λ) and then apply pr H i−1 .
One must show that C 0 H (v 1 , v 2 ) does not depend on the choice of vector fields Y 1 and Y 2 satisfying (3.4). Indeed, assume that Ỹ 1 and Ỹ 2 are another pair of vector fields in a neighborhood of λ in P 0 such that Ỹ 1 is a section of D −1 0 , Ỹ 2 is a section of D i 0 , and they satisfy (3.4) with Y 1 , Y 2 replaced by Ỹ 1 , Ỹ 2 . Then Ỹ 1 = Y 1 + Z 1 and Ỹ 2 = Y 2 + Z 2 (relation (3.5)), where Z 1 is a section of the distribution D 0 0 such that Z 1 (λ) = 0 and Z 2 is a section of the distribution D i+1 0 . From (3.5) we see that the structure function is independent of the choice of vector fields Y 1 and Y 2 . We now take another tuple H̃ = {H̃ i } i<0 and consider how the structure functions C 0 H and C 0 H̃ are related. By construction, for any i < 0 the subspaces H i and H̃ i are both complementary to D i+1 0 (λ) in D i 0 (λ), and their difference defines a map f H H̃ ∈ ⊕ i<0 Hom(g i , g i+1 ). Conversely, it is clear that for any f ∈ ⊕ i<0 Hom(g i , g i+1 ) there exists a tuple H̃ such that f = f H H̃ . Further, define the map ∂ 0 : ⊕ i<0 Hom(g i , g i+1 ) → ⊕ i<0 Hom(g −1 ⊗ g i , g i ) by (∂ 0 f )(v 1 , v 2 ) = [f (v 1 ), v 2 ] + [v 1 , f (v 2 )] − f ([v 1 , v 2 ]), where the brackets [ , ] are as in the Lie algebra m ⊕ g 0 . The map ∂ 0 coincides with the Spencer operator (2.4) in the case of G-structures; therefore it is called the generalized Spencer operator for the first prolongation. Proposition 3.1. The structure functions of two tuples H and H̃ are related by C 0 H̃ = C 0 H + ∂ 0 f H H̃ (identity (3.8)). Proof . Fix vectors v 1 ∈ g −1 and v 2 ∈ g i and let Y 1 and Y 2 be two vector fields in a neighborhood of λ satisfying (3.4). Take two vector fields Ỹ 1 and Ỹ 2 in a neighborhood of λ in P 0 such that Ỹ 1 is a section of D −1 0 , Ỹ 2 a section of D i 0 , and they satisfy the analogue of (3.4) for the tuple H̃. Further, assume that the vector fields Z 1 and Z 2 are defined as in (3.6). Then Z 1 is a section of D 0 0 and Z 2 is a section of D i+1 0 such that (3.10) holds. Taking into account (3.4) we get (3.14). Finally, from (3.9) and (3.10) for i = −1, and the definition of the action of G 0 on P 0 , it follows that identity (3.13) holds also for i = −1, and the proposition follows. Now we proceed as in the case of G-structures. Fix a subspace N 0 complementary to Im ∂ 0 , i.e. ⊕ i<0 Hom(g −1 ⊗ g i , g i ) = Im ∂ 0 ⊕ N 0 (the splitting (3.16)). As for G-structures, the subspace N 0 defines the normalization conditions for the first prolongation. Then from the splitting (3.16) it follows trivially that there exists a tuple H = {H i } i<0 such that C 0 H ∈ N 0 . Set g 1 = ker ∂ 0 . The space g 1 is called the first algebraic prolongation of the algebra m ⊕ g 0 . Here we consider g 1 as an abelian Lie algebra. Note that the fact that the symbol m is fundamental (that is, g −1 generates the whole m) implies that an element f ∈ g 1 with f | g −1 = 0 must vanish. The first (geometric) prolongation of the bundle P 0 is the bundle P 1 over P 0 whose fiber over λ ∈ P 0 consists of the maps ϕ H corresponding to tuples H satisfying (3.2) with C 0 H ∈ N 0 . It is a principal bundle with the abelian structure group G 1 of all maps A ∈ ⊕ i<1 Hom(g i , g i ⊕ g i+1 ) of the form A(v) = v + T (v) for v ∈ g i with i < 0, and A(v) = v for v ∈ g 0 , where T ∈ g 1 . Note that G 1 is an abelian group of dimension equal to dim g 1 .
Higher order Tanaka's prolongations
More generally, define the k-th algebraic prolongation g k of the algebra m ⊕ g 0 by induction for any k ∈ N. Assume that the spaces g l ⊂ ⊕ i<0 Hom(g i , g i+l ) are defined for all 0 < l < k. For f ∈ g l with 0 ≤ l < k and v ∈ m set [f, v] = −[v, f ] = f (v); this defines the brackets of elements of non-negative degree less than k with elements of m (relation (4.1)). Then let g k be the space of all f ∈ ⊕ i<0 Hom(g i , g i+k ) such that f ([v 1 , v 2 ]) = [f (v 1 ), v 2 ] + [v 1 , f (v 2 )] for all v 1 , v 2 ∈ m (condition (4.2)). Directly from this definition and the fact that m is fundamental (that is, it is generated by g −1 ) it follows that if f ∈ g k satisfies f | g −1 = 0, then f = 0. The space ⊕ i∈Z g i can be naturally endowed with the structure of a graded Lie algebra. The brackets of two elements from m are as in m. The brackets of an element of non-negative weight and an element from m are already defined by (4.1). It only remains to define the brackets [f 1 , f 2 ] for f 1 ∈ g k , f 2 ∈ g l with k, l ≥ 0. The definition is inductive with respect to k and l: if k = l = 0 then the bracket [f 1 , f 2 ] is as in g 0 . Assume that [f 1 , f 2 ] is defined for all f 1 ∈ g k , f 2 ∈ g l such that the pair (k, l) belongs to the set {(k, l) : 0 ≤ k ≤ k̄, 0 ≤ l ≤ l̄} \ {(k̄, l̄)}.
Then define [f 1 , f 2 ] for f 1 ∈ g k̄ , f 2 ∈ g l̄ to be the element of ⊕ i<0 Hom(g i , g i+k̄+l̄ ) given by [f 1 , f 2 ]v = [f 1 (v), f 2 ] + [f 1 , f 2 (v)] for all v ∈ m, where the brackets on the right-hand side are already defined by the inductive assumption and by (4.1). It is easy to see that [f 1 , f 2 ] ∈ g k̄+l̄ and that ⊕ i∈Z g i with the bracket product defined as above is a graded Lie algebra. As a matter of fact [10, § 5], this graded Lie algebra satisfies Properties 1-3 from Subsection 1.4; that is, it is a realization of the algebraic universal prolongation g(m, g 0 ) of the algebra m ⊕ g 0 . Now we are ready to construct the higher order geometric prolongations of the bundle P 0 by induction. Assume that all l-th order prolongations P l are constructed for 0 ≤ l ≤ k. We also set P −1 = M . We will not specify what the bundles P l are exactly. As in the case of the first prolongation P 1 , their construction depends on the choice of normalization conditions at each step. But we will point out those properties of these bundles that we need in order to construct the (k + 1)-st order prolongation P k+1 . Here are these properties: 1. P l is a principal bundle over P l−1 with an abelian structure group G l of dimension equal to dim g l and with the canonical projection Π l .
1. for i < 0 the space H i k+1 is a complement of D i+k+1 Here the maps π i k and Π i k are defined as in (4.4) and (4.5) with l = k.
defines an isomorphism between H i k+1 and H i k for i < 0. Additionally, by (4.7) the map (Π l ) * | H i k+1 defines an isomorphism between H i k+1 and H i k for 0 ≤ i < k. So, once a tuple of subspaces H k+1 = {H i k+1 } i<k , satisfying (4.6) and (4.7), is chosen, one can define a map as follows Can we choose a tuple or a subset of tuples H k in a canonical way? To answer this question, by analogy with Sections 2 and 3, we introduce a "partial soldering form" of the bundle P k and the structure function of a tuple H k+1 . The soldering form of P k is a tuple Here (Π k ) * (Y ) i is the equivalence class of (Π k ) * (Y ) in D i k−1 (λ k−1 )/D i+k+1 k−1 (λ k−1 ). By construction it follows immediately that D i+1 k (λ k ) = ker ω i k . So, the form ω i k induces the g i -valued formω i k on D i k (λ k )/D i+1 k (λ k ). The structure function C k H k+1 of a tuple H k+1 is the element of the space defined as follows: Let π i,s l : D i l (λ l )/D i+l+2 l (λ l ) → D i l (λ l )/D i+l+2−s l (λ l ) be the canonical projection to a factor space, where −1 ≤ l ≤ k, i ≤ l. Here, as before, we assume that D i l = 0 for i > l. Note that the previously defined π i l coincides with π i,1 l . By construction, one has the following two relations corresponding to the splitting (4.9) if i < 0 or the projection of D i k (λ k ) to H k−1 k+1 corresponding to the splitting (4.10) if 0 ≤ i < k. Given vectors v 1 ∈ g −1 and v 2 ∈ g i take two vector fields Y 1 and Y 2 in a neighborhood U k of λ k in P k such that for anyλ k = (λ k−1 ,φ k ) ∈ U k , wherẽ Then set As in the case of the first prolongation, C k H (v 1 , v 2 ) does not depend on the choice of vector fields Y 1 and Y 2 , satisfying (4.11). Indeed, assume that Y 1 and Y 2 is another pair of vector fields in a neighborhood of λ k in P k such that Y 1 is a section of D −1 k , Y 2 is a section of D i k , and they satisfy (4.11) Hom(g i , g k ).
In the opposite direction, it is clear that for any f ∈ i<0 (4.13) and (4.14) and such that f = f H k+1 e H k+1 . Further, let A k be as in (4.8) and define a map where the brackets [ , ] are as in the algebraic universal prolongation g(m, g 0 ). For k = 0 this definition coincides with the definition of the generalized Spencer operator for the first prolongation given in the previous section. The reason for introducing the operator ∂ k is that the following generalization of identity (3.8) holds: A verification of this identity for pairs (v 1 , v 2 ), where v 1 ∈ g −1 and v 2 ∈ g i with i < 0, is completely analogous to the proof of Proposition 3.1. For i ≥ 0 one has to use the inductive assumption that the restrictions ϕ l | g i are the same for all λ l from the same fiber (see item 3 from the list of properties satisfied by P l in the beginning of this section) and the splitting (4.10). Now we proceed as in Sections 2 and 3. Fix a subspace ∈ ker ∂ k . Note also that In other words, Indeed, if f ∈ ker ∂ k , then by (4.15) for any v 1 ∈ g −1 and v 2 ∈ g i with 0 ≤ i ≤ k − 1 one has In other words, Hom(g i , g i+k )). Since g −1 generates the whole symbol m we see that f (v 2 ) = 0 holds for any v 2 ∈ g i with 0 ≤ i ≤ k − 1. This proves that (4.17). Further, comparing (4.15) and (4.18) with (4.2) and using again the fact that g −1 generates the whole symbol m we obtain ker ∂ k = g k+1 .
The (k + 1)-st (geometric) prolongation of the bundle P 0 is the bundle P k+1 over P k whose fiber over a point λ k ∈ P k consists of the maps ϕ H k+1 corresponding to tuples H k+1 whose structure functions lie in N k . It is a principal bundle with the abelian structure group G k+1 of all maps A ∈ ⊕ i≤k Hom(g i , g i ⊕ g i+k+1 ) of the form A(v) = v + T i (v) for v ∈ g i , where T i ∈ Hom(g i , g i+k+1 ) and (T −µ , . . . , T −1 ) ∈ g k+1 . The right action R k+1 A of A ∈ G k+1 on a fiber of P k+1 is defined by R k+1 A (ϕ H k+1 ) = ϕ H k+1 • A. Obviously, G k+1 is an abelian group of dimension equal to dim g k+1 . It is easy to see that the bundle P k+1 is constructed so that Properties 1-4, formulated at the beginning of the present section, hold for l = k + 1 as well.
Finally, assume that there exists l̄ ≥ 0 such that g l̄ ≠ 0 but g l̄+1 = 0. Since the symbol m is fundamental, it follows that g l = 0 for all l > l̄. Hence, for all l > l̄ the fiber of P l over a point λ l−1 ∈ P l−1 is a single point, where, as before, µ is the degree of nonholonomy of the distribution D. Moreover, by our assumption, D i l = 0 if l ≥ l̄ and i ≥ l̄. Therefore, if l = l̄ + µ, then i + l + 1 > l̄ for i ≥ −µ and the fiber of P l over a point of P l−1 consists of a single canonically defined map. In other words, P l̄+µ defines a canonical frame on P l̄+µ−1 . But all bundles P l with l ≥ l̄ are identified with one another by the canonical projections (which are diffeomorphisms in that case). As a conclusion we get an alternative proof of the main result of the Tanaka paper [10]: Theorem 4.1. If the (l̄ + 1)-st algebraic prolongation of the graded Lie algebra m ⊕ g 0 is equal to zero, then for any structure P 0 of constant type (m, g 0 ) there exists a canonical frame on the l̄-th geometric prolongation P l̄ of P 0 .
The power of Theorem 4.1 is that it reduces the question of existence of a canonical frame for a structure of constant type (m, g 0 ) to the calculation of the universal algebraic prolongation of the algebra m⊕g 0 . But the latter is pure Linear Algebra: each consecutive algebraic prolongation is determined by solving the system of linear equations given by (4.2). Let us demonstrate this algebraic prolongation procedure in the case of the equivalence of second order ordinary differential equations with respect to the group of point transformations (see Example 5 in Subsection 1.3). The result of this prolongation is very well known using the structure theory of simple Lie algebras (see discussions below), but this is one of the few nontrivial examples, where explicit calculations of algebraic prolongation can be written down in detail within one and a half pages.
Finally note that the construction of the bundles P k (and therefore of the canonical frame) depends on the choice of the normalization conditions given by the spaces N k , as in (4.16). Under additional assumptions on the algebra g(m, g 0 ) (for example, semisimplicity or the existence of a special bilinear form) the spaces N k can themselves be chosen in a canonical way at each step of the prolongation procedure. This allows one to construct canonical frames satisfying additional nice properties.
In particular, in another fundamental paper of Tanaka [11], it was shown that if the algebraic universal prolongation g(m, g 0 ) is a semisimple Lie algebra, then the so-called g(m, g 0 )-valued normal Cartan connection can be associated with a structure of type (m, g 0 ). Roughly speaking, a Cartan connection gives the canonical frame which is compatible in a natural way with the whole algebra g(m, g 0 ). This is a generalization of Cartan's results [1] on maximally nonholonomic rank 2 distributions in R 5 . Further, T. Morimoto [4] gave a general criterion (in terms of the algebra g(m, g 0 )) for the existence of the normal Cartan connection for structures of type (m, g 0 ).
All these developments are far beyond the goals of the present note, so we do not address them in more detail here, referring the reader to the original papers.
"Mathematics"
] |
Fucoidans Disrupt Adherence of Helicobacter pylori to AGS Cells In Vitro
Fucoidans are complex sulphated polysaccharides derived from abundant and edible marine algae. Helicobacter pylori is a stomach pathogen that persists in the hostile milieu of the human stomach unless treated with antibiotics. This study aims to provide preliminary data to determine, in vitro, if fucoidans can inhibit the growth of H. pylori and its ability to adhere to gastric epithelial cells (AGS). We analysed the activity of three different fucoidan preparations (Fucus A, Fucus B, and Undaria extracts). Bacterial growth was not arrested or inhibited by the fucoidan preparations supplemented into culture media. All fucoidans, when supplemented into tissue culture media at 1000 µg mL−1, were toxic to AGS cells and reduced the viable cell count significantly. Fucoidan preparations at 100 µg mL−1 were shown to significantly reduce the number of adherent H. pylori. These in vitro findings provide the basis for further studies on the clinical use of sulphated polysaccharides as complementary therapeutic agents.
Introduction
Fucoidan, derived from marine edible brown algae, is a complex sulphated polysaccharide [1]. The structures and compositions of fucoidan vary across different brown algae species, but they consist primarily of L-fucose and sulphate, which form polymers with small quantities of D-galactose, D-mannose, D-xylose, and uronic acid [1,2]. Studies on fucoidans' propensity to alter biological processes associated with disease have included analysing their ability to inhibit tumour growth, modulate the immune system, interfere with viral mechanisms, and inhibit coagulation and their use as a reducing agent or antioxidant [2][3][4][5][6][7][8][9][10].
Many algae-derived concoctions used in traditional medicine have been recorded in pharmacopoeias as agents used to treat bacterial infection, either by inhibiting growth or by preventing colonisation. Indeed, the antibiotic nature of fucoidan has been scientifically investigated in vitro for its ability to inhibit the growth of bacteria that commonly develop multidrug resistance, such as Staphylococcus aureus and Escherichia coli [11][12][13]. Various sulphated polysaccharides including heparin, heparin oligosaccharides, and fucoidan were also reported to competitively inhibit the colonisation of the gastric pathogen Helicobacter pylori (H. pylori) [14,15]. Furthermore, the adherence of Helicobacter species to macrophages was inhibited by fucoidan [16].
H. pylori is a Gram-negative bacterium that colonises the stomach of half of the world's human population. It causes chronic active gastritis, which can progress to peptic ulcers, gastric cancer, and gastric MALT lymphoma [17]. The host's immune system is unable to clear the infection and it persists unless treated. Current standard H. pylori infection therapy consists of administration of a proton pump inhibitor and two antibiotics, amoxicillin and clarithromycin or metronidazole [18]. It was, however, recently shown that the efficacy of this empiric triple therapy has declined to an unacceptably low level of about 70%, well below the eradication rate of at least 90% generally expected of a therapeutic regimen for infectious diseases [19]. This problem is due to the increased prevalence of clarithromycin resistance in H. pylori isolates worldwide, and such resistance is expected to continue to rise with the use of eradication therapy [20][21][22]. Treatment regimens with higher success rates can involve using expensive and often restricted antibiotics [23]. Alternative and complementary strategies may improve the success rate of current treatment regimens by providing an alternative mechanism to reduce the bacterial burden. In this study, we investigated the antimicrobial activity of three fucoidan extracts, two of which were isolated from Fucus vesiculosus and one from Undaria pinnatifida, against H. pylori, and how these fucoidans can alter the adherence of H. pylori to human gastric epithelial cells in vitro.
Materials and Methods
2.1. Bacterial Culture Conditions. Helicobacter pylori strain NCTC 11637 was routinely cultured on Columbia agar supplemented with 7% (v/v) defibrinated horse blood (PathWest) at 37 ∘ C in a 10% CO 2 environment.
Biochemical Composition Analysis.
Three different fucoidans used in this study were kindly provided to us by Marinova Pty. Ltd. (Australia), two of which were derived from Fucus vesiculosus and one from Undaria pinnatifida.
Total carbohydrate content was determined by spectrophotometric analysis of the hydrolysed compound in the presence of phenol, based on a method described by Dubois et al. [24].
Sulfate content was analysed spectrophotometrically using a BaSO 4 precipitation method (BaCl 2 in gelatin), based on the work of Dodgson [25,26]. In brief, samples were heated in 1 M hydrochloric acid solution at 105-110 ∘ C for 3 hours to adequately cleave all sulfate groups from the molecule. These were then precipitated using a BaCl 2 /gelatin mixture, and concentration was determined by UV-Vis.
The monosaccharide composition was determined using a GC-based method for the accurate determination of individual monosaccharide ratios in a sample. This method relies on the preparation of acetylated alditol derivatives of the hydrolysed samples [30].
Molecular weight profiles were determined by Gel Permeation Chromatography, with the aid of a Size-Exclusion Column, and were reported relative to Dextran standards.
Cell Toxicity Assay and Viable Cell Count.
Cells were seeded at 5 × 10 4 cells per well in a 24-well tissue culture plate (Corning CellBIND Surface) and incubated at 37 ∘ C in a 5% CO 2 atmosphere until confluent monolayers formed. Three different fucoidans, each prepared in RPMI medium at concentrations of 1, 10, 100, and 1000 µg mL −1 , were added to the cell monolayers in triplicate to test for cell cytotoxicity. Cells were incubated for 3 hours and cell viability was determined by measuring lactate dehydrogenase (LDH) release using the CytoTox 96 nonradioactive cytotoxicity assay (Promega) and by using the trypan blue (Sigma) exclusion method, in which viable cells exclude the dye and only non-viable cells are stained.
2.6. Helicobacter pylori Adherence Assay. AGS cells were grown to confluence in 24-well plates. H. pylori cells (grown for 24 hours on blood agar plates at 10% CO 2 and 37 ∘ C) were harvested and resuspended in RPMI medium to form the inoculum. The monolayers were inoculated with the bacterial suspension at a multiplicity of infection (MOI) of 10:1. After 3 hours of incubation, the cells were washed and incubated with fucoidans at 100 µg mL −1 , in triplicate, for 3 hours under the same conditions. Cells were washed 3 times with phosphate-buffered saline (PBS) (Gibco, Invitrogen) and lysed with 1% (w/v) saponin (Sigma) in PBS for 15 minutes. Dilutions of the cell lysates were plated on blood agar plates and incubated at 37 ∘ C in a 10% CO 2 environment for 2-3 days. Visible bacterial colonies were counted (the arithmetic of this read-out is sketched below). The experiment was repeated twice on different occasions. Results are presented as the mean of three assays with standard deviation.
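The sketch below illustrates how colony counts from the dilution plating could be converted to CFU per well and expressed relative to the untreated control. The numbers, the dilution scheme, and the function names are illustrative placeholders, not values from this study.

```python
def cfu_per_well(colonies, dilution_factor, plated_volume_ml, lysate_volume_ml=1.0):
    """Back-calculate colony-forming units recovered from one well.

    colonies: colonies counted on the plate
    dilution_factor: e.g. 1e3 for a 10^-3 dilution
    plated_volume_ml: volume of diluted lysate spread on the plate
    lysate_volume_ml: total volume of saponin lysate per well (assumed)
    """
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return cfu_per_ml * lysate_volume_ml

# Hypothetical example: 150 colonies from a 10^-3 dilution, 0.1 mL plated
control = cfu_per_well(150, 1e3, 0.1)
treated = cfu_per_well(90, 1e3, 0.1)   # e.g. a fucoidan-treated well
percent_adherent = 100.0 * treated / control
print(f"adherent H. pylori relative to control: {percent_adherent:.1f}%")
```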
Statistical Analysis.
Statistical analysis was carried out using Student's t-test (Microsoft Excel software). P values of <0.05 were considered statistically significant.
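The comparison described above was done with Student's t-test in Excel; the snippet below shows an equivalent calculation in Python. The CFU arrays are hypothetical placeholders for triplicate counts, not the study's data.

```python
from scipy import stats

# Hypothetical triplicate CFU counts (placeholders, not the published data)
control_cfu = [1.9e5, 2.1e5, 2.0e5]    # untreated wells
fucoidan_cfu = [1.1e5, 1.3e5, 1.2e5]   # wells treated with 100 ug/mL fucoidan

# Two-sample Student's t-test (equal variances assumed)
t_stat, p_value = stats.ttest_ind(control_cfu, fucoidan_cfu, equal_var=True)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("difference considered statistically significant (P < 0.05)")
```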
Results and Discussion
3.1. Biochemical Content Analysis. Fucoidan, derived primarily from marine brown algae, is a complex sulphated polysaccharide and it has been extensively studied over the past three decades for its wide range of potentially beneficial biological properties. Depending on the source of algae, there is considerable diversity in the basic composition and structure of fucoidans and such structural composition variation confers different or unique biological functions to these diverse sulphated compounds [31].
Three fucoidan extracts, two of which were fucoidan fractions isolated from Fucus vesiculosus and one from Undaria pinnatifida, were examined for their total contents of carbohydrate, sulfate, and polyphenol. These data were supplied to us by Marinova and the relative percentages are given in Table 1. Biochemical composition analysis demonstrated that both Fucus A and Fucus B are highly sulfated fucose polymers. Fucus A contained mainly fucose (59.4%) and sulphate (25.3%) and much smaller amounts of galactose and polyphenol (3.3% and approximately 3-4%, respectively). Fucus B, however, contained nearly 50% less fucose (31%) and an approximately 8-fold higher polyphenolic antioxidant content than Fucus A. The Undaria fucoidan was composed mainly of fucose, galactose, and sulphate (42.4%, 22.5%, and 26.3%, respectively), as well as a small amount of polyphenol (2.5%).
Fucoidans Do Not Inhibit the Growth of H. pylori in Culture
Fucoidan derived from Laminaria japonica was reported to be bacteriostatic against several microorganisms including S. aureus. Furthermore, in a study conducted by Lee et al. [32], the combination of antibiotics and fucoidan demonstrated a synergistic effect on the killing of oral pathogenic bacteria. In light of these studies, the potential of the Fucus A, Fucus B, and Undaria extracts was investigated in the context of H. pylori eradication.
To determine if the fucoidan extracts are bacteriostatic or bactericidal, H. pylori strain NCTC 11637 was cultured in growth medium supplemented with Fucus A, Fucus B, or Undaria extract at different concentrations up to 1000 µg mL −1 . The growth of H. pylori did not differ in the presence of the fucoidans compared with the control (growth medium only) (Figure 1), indicating that no bacteriostatic or bactericidal activity was observed for any of the fucoidan preparations against H. pylori.
Fucoidans Are Cytotoxic to AGS Cancer Cells at Higher Concentrations
Since inhibition of bacterial growth was not observed, we then considered whether the fucoidans are capable of inhibiting H. pylori adherence in vitro to a commonly used gastric epithelial cell line, AGS cells. Bacterial attachment to host cells is a key process in bacterial pathogenesis, and inhibiting this interaction would reduce the bacterial load significantly, as H. pylori is incapable of colonising alternative niches in the body. Initially, we investigated the cytotoxicity of the fucoidans on AGS cells, as it has been shown in numerous studies that fucoidans inhibit carcinoma cell proliferation [33][34][35][36][37][38][39][40]. We therefore identified the safe range of fucoidan concentrations for this assay to ensure maximal cell viability and thus consistency in the number of adherent bacteria recovered after treatment with fucoidans.
To determine if the fucoidan preparations induce cell death, their cytotoxicity towards AGS cells was tested by measuring the presence of lactate dehydrogenase (LDH) in the supernatants after exposure (Figure 2(a)) [41]. The levels of LDH were significantly higher for all fucoidan extracts at the concentration of 1000 µg mL −1 . Viability counting using the trypan blue exclusion approach indicated that 85.5% of cells were viable when the Fucus A extract was added to the AGS cells and that 66.5% were viable in the presence of the Undaria preparation (Figure 2(b)). Fewer viable cells were counted for the Fucus B extract (48.4%) than for the Fucus A and Undaria extracts.
All 3 fucoidan preparations were found to be toxic to AGS cells when tested at the concentration of 1000 µg mL −1 . This is in line with previous studies showing that fucoidan induces cell death in a dose-dependent manner in several carcinoma cell lines, including AGS cells. Changes in apoptotic regulation are thought to be the mechanism behind this fucoidan-mediated cell death [36,38,40,42]. It is also important to mention that the Fucus B extract exerted the greatest cytotoxicity towards AGS cells, which is likely due to its polyphenol content being at least 8-fold greater than that of Fucus A and Undaria. Polyphenol antioxidants have been extensively reported for their anticancer properties, and such activity is thought to be mediated through inhibition of kinase activity [43].
Fucoidans Disrupt H. pylori Adherence to AGS Cells.
We inoculated AGS cells with H. pylori, allowing the bacteria to bind to the cells, and then washed the confluent monolayers so that only the H. pylori adhered to the AGS cells remained. We then supplemented the replacement growth media with each type of fucoidan at 100 µg mL −1 , incubated the cells, and subsequently washed the cells to remove nonadherent bacteria. The numbers of colony forming units (CFUs) were counted and compared to the control's CFUs for each experiment to determine if the fucoidan dislodged the adherent bacteria. All fucoidan extracts were shown to significantly remove adherent bacteria from the AGS cell surface when tested at 100 µg mL −1 (P < 0.05 for Fucus A and Undaria and P < 0.01 for Fucus B) (Figure 3). This finding indicates that fucoidans bind either to H. pylori or to AGS cells with stronger affinity, thus putatively dislodging the bacteria from the host cell surface. It is, however, strongly believed to be the former, as it has been reported that fucoidan extracted from the brown alga Cladosiphon okamuranus TOKIDA, which similarly inhibited H. pylori binding to the human gastric cell lines MKN28 and KATO III, exerted a more potent inhibitory effect when preincubated with the bacteria but failed to reduce bacterial adhesion when used to pretreat the gastric cells [44]. Furthermore, in the same study, several fucoidan-binding H. pylori outer membrane proteins were detected by immunoblotting analysis. In another study, which investigated the effect of Fucus vesiculosus-derived fucoidan and sulphated polysaccharides including heparin and dextran on the adherence of enterohepatic Helicobacter species to the murine macrophage cell line J774A.1, it was shown that only fucoidan achieved a substantial reduction of bacterial binding to host cells [16]. The results above provide further support that the anti-H. pylori adhesion activity demonstrated in our study is fucoidan-specific and not due to a nonspecific colloidal effect.
[Figure caption fragments: Viable AGS cells after treatment with fucoidan preparations at 1000 µg mL −1 were counted using the trypan blue exclusion method; untreated AGS cells were defined as 100% relative growth. The symbols * and ** represent statistical significance at P < 0.05 and P < 0.01, respectively, with respect to the untreated control.]
Conclusions
Commerce and Department of Health, and the University of Malaya supported this work.
"Biology",
"Environmental Science",
"Medicine"
] |
Evaluating wind farm wakes in large eddy simulations and engineering models
We study wind farm wakes with large eddy simulations (LES) and use these results for the evaluation of engineering models such as the Jensen model, the coupled wake boundary layer model (CWBL), the Turbulence Optimized Park model (TurbOPark), and the wind farm model developed by Niayifar and Porté-Agel (Energies 9, 741 (2016)). We study how well these models capture the wake effects between two aligned wind farms with 72 turbines separated by 10 kilometers in a neutral boundary layer. We find that all considered models over-predict the wind farm wake recovery compared to what is observed in LES. The TurbOPark model predictions on the wind farm wake effect are closest to the LES results for the scenario considered here.
Introduction
Due to the increasing number of offshore wind farms, studies of low-velocity zones far downstream of wind farms, also known as wind farm wakes, are of utmost importance [23]. Measurements of these wakes are performed with diverse methods such as Synthetic Aperture Radar (SAR) [7,18,10,9,2], Doppler radar [28,2], Laser Imaging Detection And Ranging (LIDAR) [33], research aircraft [31,35,30,21], and supervisory control and data acquisition (SCADA) power data [17]. Long-distance wind farm wakes have been observed up to 55 km in stable atmospheric conditions, up to 35 km in neutral conditions, and up to 10 km in unstable conditions [33]. Additionally, Reynolds-averaged Navier-Stokes (RANS) [27], large eddy simulation (LES) [11], and Weather Research and Forecasting (WRF) simulations [33,30] have shown that wind farms can influence each other. As a consequence, long-distance wakes behind wind farms have to be considered when planning new farms in the vicinity of existing ones. Engineering models are widely used in this planning process. Their main advantage is that they are computationally efficient and can evaluate a wide range of possible scenarios. Due to these advantages, various wake models are continuously being developed [39,32]. The simplest wake model goes back to Jensen [19], who assumed a linear wake growth. More recent wake models consider more details; e.g. the model of Bastankhah and Porté-Agel [5] relies on the conservation of mass and momentum and predicts a more realistic Gaussian wake shape. Furthermore, different superposition methods have been proposed to account for wake interactions. These interactions are modeled by considering either a linear superposition of the velocity deficits [22,26] or a linear superposition of the energy deficits [20,41]. Besides, one can consider the wake deficit with respect to the incoming upstream wind speed [22,20], or with respect to the incoming flow speed for that turbine [41,26]. While these models are extensively tested for wind farms, the ability of these models to accurately capture the interaction between different wind farms is still relatively unknown. Hansen et al. [17] compared the wind farm wake effects predicted by engineering models with SCADA data, the WRF mesoscale model, and computational fluid dynamics models and highlighted the necessity of further investigations to achieve more robust predictions. Recently, Nygaard et al. [29] introduced the Turbulence Optimized Park (TurbOPark) model, which is designed to model the interaction between wind farms. The TurbOPark model combines the Jensen model and the Katić et al. [20] superposition method, often referred to together as the Park model, and improves on this modeling approach by accounting for the turbulence intensity in the wind farm as described by the model of Frandsen et al. [13]. Nygaard et al. [29] found that the wind farm power output predicted by TurbOPark agrees favorably with SCADA data for the neighboring offshore wind farms Humber Gateway and Westernmost Rough. In this work, we compare the predictions of the Jensen model, the TurbOPark model, the coupled wake boundary layer (CWBL) model, and the model developed by Niayifar and Porté-Agel [26] for the wind farm wake development against results from a reference LES. In the LES we consider two aligned wind farms with 72 turbines that are separated by 10 km. Due to the high computational costs of the LES we only consider one wind farm layout in this study.
In the remainder of the paper, we first summarize the different engineering models and the LES. Subsequently, we present the comparison between the LES results and model predictions before discussing the conclusions.
Jensen Model
The Jensen model [19] is a simple, classical wake model. The fundamental assumption of the model is that the width of the wake behind a wind turbine increases linearly with the downstream distance. Following Jensen [19] the velocity u w in the wake of a turbine is expressed by u w (x ) = u ∞ [1 − (1 − √(1 − C T )) (D/D w (x )) 2 ] (equation (1)). Here u ∞ is the incoming free stream velocity, and x is the downstream distance with respect to the turbine. The turbines are assumed to be actuator disks with a rotor diameter D and the thrust coefficient C T = 4a(1 − a) with the induction factor a. The wake diameter D w is assumed to grow linearly, D w (x ) = D + 2 k w x (equation (2)), where k w = κ/ln(z hub /z 0 ) (equation (3)) is the wake expansion coefficient estimated based on the logarithmic wind profile. Here z 0 is the surface roughness, z hub the turbine hub-height and κ the von Kármán constant [12].
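A minimal numerical sketch of the single-wake relations just described (wake deficit, linear diameter growth, and the log-law estimate of the expansion coefficient) is given below. The function names and the example parameter values (taken from the LES setup later in the paper) are for illustration; the formulas are the standard Jensen/Park expressions assumed to correspond to equations (1)-(3).

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def wake_expansion_coefficient(z_hub, z0, kappa=KAPPA):
    """k_w estimated from the logarithmic wind profile, k_w = kappa / ln(z_hub / z0)."""
    return kappa / np.log(z_hub / z0)

def jensen_wake_velocity(x, u_inf, ct, D, k_w):
    """Jensen wake velocity a distance x downstream of a turbine (x > 0)."""
    D_w = D + 2.0 * k_w * x                       # linear wake diameter growth
    return u_inf * (1.0 - (1.0 - np.sqrt(1.0 - ct)) * (D / D_w) ** 2)

# Illustrative values: D = 120 m, z_hub = 100 m, z0 = 0.002 m, C_T = 0.75
k_w = wake_expansion_coefficient(z_hub=100.0, z0=0.002)
print(f"k_w = {k_w:.3f}")
print(f"u_w / u_inf at 7D: {jensen_wake_velocity(7 * 120.0, 1.0, 0.75, 120.0, k_w):.3f}")
```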
To account for the wake interactions the Jensen model [19] is combined with the Katić et al. [20] superposition model, which sums up the squared velocity deficits [20], i.e. (1 − u w (x )/u ∞ ) 2 = Σ i (1 − u w,i (x )/u ∞ ) 2 (equation (4)).
Here the summation i is over all turbine wakes at that location. The power of each turbine P T , normalized by the power of the turbines in the first row P 1 , is estimated as P T /P 1 = (⟨u w (x T )⟩/u ∞ ) 3 , where the brackets stand for an average over the turbine disk and x T is the turbine position.
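To illustrate how the squared-deficit superposition and the row-power estimate combine for a column of aligned turbines, here is a small sketch. It assumes the Katić et al. quadratic summation for equation (4) and a cubic velocity-power relation for the normalized power; the single hub-height evaluation point (rather than a disk average) is a simplification of this illustration.

```python
import numpy as np

def combined_deficit(deficits):
    """Katic et al. superposition: total deficit = sqrt(sum of squared single-wake deficits)."""
    return np.sqrt(np.sum(np.asarray(deficits) ** 2))

def column_velocities(n_rows, s_x_D, ct, D, k_w):
    """Hub-height velocity (normalized by u_inf) in front of each turbine in an aligned column."""
    x_t = np.arange(n_rows) * s_x_D * D            # streamwise turbine positions
    u = np.ones(n_rows)
    for j in range(n_rows):
        deficits = []
        for i in range(j):                          # wakes of all upstream turbines
            dx = x_t[j] - x_t[i]
            D_w = D + 2.0 * k_w * dx
            deficits.append((1.0 - np.sqrt(1.0 - ct)) * (D / D_w) ** 2)
        u[j] = (1.0 - combined_deficit(deficits)) if deficits else 1.0
    return u

u = column_velocities(n_rows=12, s_x_D=7.0, ct=0.75, D=120.0, k_w=0.037)
normalized_power = u ** 3                           # P_T / P_1, assuming P ~ u^3 at the rotor
print(np.round(normalized_power, 3))
```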
Coupled wake boundary layer model
The Jensen model is one of the so-called bottom-up models, in which wake deficits are combined using some superposition model to account for the wake interactions. The CWBL model [36,37] improves on the Jensen model by coupling it to the Calaf et al. [6] top-down model such that the predictions from the Jensen and the top-down model are consistent in the fully developed regime of the wind farm. The top-down model parameterizes the wind farm, instead of the individual turbines, using an increased surface roughness z 0,hi . This model is based on the assumption of two constant momentum flux layers, one above the turbine hub-height and one below. Each has a characteristic friction velocity and surface roughness, such that the velocity at hub-height is given by equation (6), which involves δ IBL , the height of the internal boundary layer in the fully developed regime of the wind farm. Here w f indicates the effective wake area coverage in the fully developed regime of the wind farm, which enters the definition of the roughness length z 0,hi of the wind farm (equation (7)). The value of w f and the value of the wake expansion coefficient in the fully developed regime of the wind farm, k w,∞ , are determined and updated iteratively until the turbine velocity in the fully developed regime of the wind farm is the same (up to a tolerance of 0.1%) in the Jensen (equation (4)) and the top-down model (equation (6)). The effects of the entrance region of the wind farm are accounted for by assigning a wake expansion coefficient to the wake originating from each individual turbine (equation (8)), where m is the number of turbine wakes that overlap with the turbine of interest and ζ = 1 is determined empirically. k w is the wake expansion coefficient in the entrance region of the wind farm, see equation (3). For further details we refer to Ref. [37].
Turbulence Optimized Park Model
Nygaard et al. [29] extended the Park model based on the idea that the wake expansion rate depends on the turbulence intensity. To estimate the turbulence intensity behind a wind turbine, the sum of the ambient turbulence intensity I ∞ and the wake added turbulence I w is considered (equation (9)). The wake added turbulence is described empirically as I w (x ) = 1/(c 1 + c 2 x /(D √C T )) (equation (10)) with the constants c 1 = 1.5 and c 2 = 0.8 [14].
Assuming that the wake diameter growth rate increases linearly with the turbulence intensity, dD w (x )/dx = A I(x ), results in a closed-form wake expansion (equation (11)) with α = c 1 I ∞ , β = c 2 I ∞ /√C T and the model calibration constant A = 0.6 [29]. TurbOPark incorporates the wake diameter from equation (11), instead of the linear growth of equation (2), into the calculation of the wake deficit in equation (1). Further, a prefactor u 0 /u ∞ is added in front of the √(1 − C T ) term in equation (1) to take the normalized rotor-averaged inflow wind speed at each turbine position into account. Nygaard et al. [29] set the turbine thrust coefficient based on the velocity at the turbine, which is calculated iteratively based on the wakes induced by upstream turbines and the induced wind farm blockage. This effect is neglected here as the reference LES considers a constant value for C T . We note that Nygaard et al. [29] couple TurbOPark with a wind farm blockage model, which is not considered here.
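Because the closed-form expression for the TurbOPark wake diameter (equation (11)) did not survive extraction, the sketch below simply integrates dD_w/dx = A·I(x) numerically, using the Frandsen-type wake-added turbulence with c1 = 1.5 and c2 = 0.8 as described above. The way ambient and wake-added turbulence are combined (a simple sum, following the wording of the text) and the function names are assumptions of this illustration; the original TurbOPark reference should be consulted for the exact closed form.

```python
import numpy as np

def turbopark_wake_diameter(x, D, ct, I_inf, A=0.6, c1=1.5, c2=0.8, n_steps=2000):
    """Numerically integrate dD_w/dx = A * I(x) from 0 to x (x in metres).

    I(x) combines the ambient turbulence intensity with the Frandsen-type
    wake-added turbulence I_w(x) = 1 / (c1 + c2 * (x/D) / sqrt(ct)).
    """
    xs = np.linspace(0.0, x, n_steps)
    dx = xs[1] - xs[0]
    D_w = D
    for xi in xs[1:]:
        I_w = 1.0 / (c1 + c2 * (xi / D) / np.sqrt(ct))
        I_total = I_inf + I_w          # simple sum, as described in the text
        D_w += A * I_total * dx
    return D_w

# Illustrative values from the LES setup: D = 120 m, C_T = 0.75, I_inf ~ 0.09
print(f"D_w at 7D: {turbopark_wake_diameter(7 * 120.0, 120.0, 0.75, 0.09):.1f} m")
```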
Niayifar and Porté-Agel (2016) Model
Based on the assumption of a wake velocity deficit with an axisymmetric, self-similar Gaussian shape and using conservation of mass and momentum, Bastankhah and Porté-Agel [5] describe the velocity in a wake as Here u 0 is the local average inflow velocity in front of each turbine [43], which is different from u ∞ in equations (1) and (4). y is the spanwise distance with respect to the turbine center. The standard deviation of the Gaussian velocity deficit is: The growth rate k * w (x ) is linked to the turbulence intensity by the empirical expression [26]: k * w (x ) = 0.3837 · I(x ) + 0.003678.
The turbulence intensity is determined using equation (9) in combination with the empirical model for the added turbulence intensity proposed by Crespo and Hernández [8] (equation (15)). Note that equation (15) is only valid in the range 0.065 < I ∞ < 0.14, 5 < x /D < 15 and 0.1 < a < 0.4 [8]. Further, Niayifar and Porté-Agel [26] only consider the turbulence intensity from the closest upstream turbine, i.e. the wake added streamwise turbulence intensity at turbine j is given by the maximum of the added streamwise turbulence intensity induced by the upstream turbines k at turbine j: I w,j = max k (I w,kj · 4A w /(πD 2 )). Here A w indicates the area of intersection between the wake and the rotor. We note that the wake velocity (equation (12)) is only defined starting from approximately two rotor diameters downstream of each turbine [5]. Considering that the inter-turbine distance in a wind farm is typically much larger, this does not affect the model applicability. In contrast to the previous models, the wake interaction is modeled as a linear superposition of velocity deficits (equation (16)) [26]. Different from Katić et al. [20] (equation (4)), Niayifar and Porté-Agel [26] consider the local mean inflow velocity u 0 in front of each turbine instead of the velocity in front of the farm u ∞ . Furthermore, this method considers a linear superposition of the velocity deficits instead of a linear superposition of the energy deficits [20].
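A compact sketch of the Gaussian-wake ingredients of this model is given below. The deficit amplitude and the linear growth of the wake standard deviation follow the standard Bastankhah and Porté-Agel expressions, and the wake-growth/turbulence relation k*(I) = 0.3837·I + 0.003678 is the one quoted in the text; the initial wake width eps and the exact Crespo-Hernández exponents are not reproduced in the extracted text, so they are treated as assumptions here.

```python
import numpy as np

def sigma_over_D(x_over_D, k_star, eps=0.25):
    """Wake standard deviation normalized by D; eps is an assumed initial width sigma(0)/D."""
    return k_star * x_over_D + eps

def gaussian_wake_velocity(x_over_D, y_over_D, u0, ct, I_local):
    """Velocity in the wake, normalized by the local inflow u0, at (x, y) in rotor diameters."""
    k_star = 0.3837 * I_local + 0.003678           # empirical relation quoted in the text
    s = sigma_over_D(x_over_D, k_star)
    arg = 1.0 - ct / (8.0 * s ** 2)
    if arg <= 0.0:                                  # too close to the rotor for this model
        raise ValueError("Gaussian wake model not valid this close to the turbine")
    amplitude = 1.0 - np.sqrt(arg)                  # centreline velocity deficit
    return u0 * (1.0 - amplitude * np.exp(-y_over_D ** 2 / (2.0 * s ** 2)))

# Illustrative call: 7D downstream, on the wake centreline, C_T = 0.75, I = 0.09
print(f"u_w / u0 = {gaussian_wake_velocity(7.0, 0.0, 1.0, 0.75, 0.09):.3f}")
```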
Large Eddy Simulations
We perform LES of a neutral atmospheric boundary layer (ABL) flow driven by a geostrophic wind. The LES data are used as a reference to evaluate the ability of the various engineering models to capture the development of the wind farm wake. The LES code used is an updated version of the code developed by Albertson and Parlange [3]. The governing equations are the filtered, incompressible continuity and Navier-Stokes equations. Here the tilde represents spatial filtering with a spectral cut-off filter at the LES grid scale ∆ and ũ i represents the filtered velocity field components. τ ij , the trace-less part of the sub-grid scale (SGS) stress tensor (the difference between the filtered product of the velocities and the product of the filtered velocities), is modeled with the anisotropic minimum dissipation model [1] with a Poincaré constant of C i = 1/12 in the horizontal and C i = 1/3 in the vertical direction. The trace of the SGS stress tensor is absorbed into the filtered modified pressure p̃ + = p̃/ρ − p ∞ /ρ − τ kk /3. The Coriolis parameter is given by f c = (0, 2Ω cos(Φ), 2Ω sin(Φ)) with the Earth's rotation angular speed Ω = 7.3 · 10 −5 rad/s and the latitude Φ = 52 • . G is the geostrophic wind velocity with a magnitude of |G| = 11 m/s. The effect of the wind turbines is added as a body force f i using an actuator disk approach [6]. Effects of resolved viscous stresses are neglected, since a very high Reynolds number limit is assumed. The wall shear stress at the ground is modeled using Monin-Obukhov similarity theory [24]. The boundary conditions at the top of the domain are zero vertical velocity and zero shear stress. Time integration is performed using a second-order accurate Adams-Bashforth scheme. Derivatives in the vertical direction are calculated using a second-order central finite difference scheme. In the horizontal directions a pseudo-spectral method is applied, and therefore a concurrent precursor inflow method is used to remove periodicity and generate the inlet ABL flow [38]. To ensure that the incoming wind direction at hub-height is aligned with the x-axis, we apply a proportional-integral controller [4,34]. This simulation approach has recently been validated for various benchmark cases, see Refs. [16,15,40].
The computational domain has a size of L x = 54 km, L y = 7.2 km, and L z = 4 km in the streamwise, spanwise, and vertical direction, respectively. The domain is resolved on a grid with 1800×480×480 nodes. This results in a resolution of ∆ x = 30 m and ∆ y = 15 m in the streamwise and spanwise directions. A stretched grid with a constant ∆ z = 5 m up to z = 1.5 km and larger grid cells above is employed. The fringe region in the concurrent precursor method is ∆x Fringe = 3 km in the streamwise direction and ∆y Fringe = 1 km in the spanwise direction. We use a surface roughness of z 0 = 0.002 m, which is a typical value for offshore conditions. We consider two aligned wind farms with 12 × 6 turbines. The inter-turbine spacing is s x = 7D in the streamwise and s y = 5D in the spanwise direction. The first wind farm is positioned 7 km downstream of the inflow region, and the distance between the wind farms is 10 km. The turbines have a diameter of D = 120 m and a hub-height z hub = 100 m. The thrust coefficient is C T = 3/4 and a = 1/4. The boundary layer in the precursor domain of the LES reaches a quasi-steady state with a fully developed turbulent flow at the end of the 9th hour. Subsequently, the simulation in both domains is continued concurrently for two more hours before the statistics are collected over 3.5 additional hours. From the LES we obtain that the ambient turbulence intensity at hub-height is I ∞ (z hub ) = σ u /u h = 9.02%. The internal boundary layer height (δ IBL ), based on the height where the time-averaged velocity is 99% of the incoming flow speed at that height, is about 700 meters at the end of the second wind farm. This height is used in the CWBL calculations. To conclude, we mention that all engineering model calculations are only performed at hub-height.
Results
Figure 1 shows the time-averaged velocity at hub-height obtained from the LES and the four engineering models under consideration. Figure 1(a) shows that the wakes of the individual turbines are distinguishable up to 6 km downstream of the wind farm. Further downstream, the individual wakes merge into a more homogeneously mixed wind farm wake. Similar observations have been made by synthetic aperture and dual-Doppler radar measurements [33,28]. Further, these observations [33,28] and the LES show that the wind farm wake does not expand much beyond the spanwise extent of the wind farm. We note that the spanwise variations in the wind farm wakes are caused by the streamwise elongated rolls in the ABL [25]. Figures 1(b-e) show that the various models qualitatively capture the trend that individual wakes can be observed up to a certain distance behind the farm, after which a more homogeneous wind farm wake is formed. Additionally, the figure reveals clear differences between the analytical models and the LES. A comparison of the different panels shows that the Jensen, CWBL, and Niayifar and Porté-Agel [26] models strongly overpredict the recovery rate of the wind farm wake. The Jensen model prediction gives a stronger wake deficit directly behind the wind farm, due to which the stronger wind farm wake recovery only becomes apparent further downstream of the wind farm. The TurbOPark model captures the wind farm wake recovery observed in the LES most accurately. In figure 2 we show the velocity at hub-height, averaged over the spanwise locations where the turbines are located. In agreement with the studies by Christiansen and Hasager [7] and Wu and Porté-Agel [42], the velocity deficit at 10 km downwind of the farm, i.e. where the second farm starts, is about 3% of the velocity in front of the first farm. Interestingly, the LES and the analytical models only show slightly lower velocities behind the second farm than in the wake of the first farm. Additionally, this figure shows that all considered models overestimate the recovery of the wind farm wake behind both farms when compared to the LES. In the Jensen model, the wake deficits inside the wind farm are strongly overestimated. As a result, the Jensen model overestimates the wind farm wake up to 2 km downstream of the wind farm, while it underestimates the wind farm wake further downstream. The other models capture the wake deficit inside the wind farm reasonably accurately, but these models also overestimate the wind farm wake recovery. The TurbOPark model predictions for the wind farm wake recovery are closest to the LES results for the scenario considered here. Figure 3 compares the wind turbine power output as a function of downstream position obtained from the models and LES for the first and second wind farm.
[Figure 3 caption: Comparison of the power output as a function of downstream direction obtained from the models for a) the first and b) the second wind farm. The results are normalized by the power production of the first row of the first farm. Error bars are the standard deviation of the power per row over time.]
The LES results in this figure reveal that the power output of the second farm's first row is about 11% lower than the power production of the first row of the first farm. In agreement with the wind farm wake recovery discussed above, all models significantly overestimate the production of the first row of the second farm. In particular, the reduction in power output of the second farm's first row compared to the power output of the first row of the first farm is about 9% for the TurbOPark model, 8% for the Niayifar and Porté-Agel [26] model, 7% for the Jensen model and 4% for the CWBL model. Consequently, the TurbOPark model captures the effect of the wind farm wake most accurately. Furthermore, the TurbOPark model most accurately represents the power production as a function of the downstream direction. However, in contrast to the CWBL and Niayifar and Porté-Agel [26] models, TurbOPark overestimates the production in the entrance region of the wind farm. Overall, the TurbOPark, CWBL, and Niayifar and Porté-Agel [26] model predictions are closer to the LES results than those of the Jensen model.
Conclusions
In this study, we performed LES of a wind farm cluster with two wind farms separated by 10 km and evaluated the performance of four engineering models, taking the LES results as a reference. An important finding is that all considered models overestimate the wind farm wake recovery compared to what is observed in the LES. Of the models considered here, the TurbOPark model provides the best estimate for the recovery of the wind farm wake. However, it is essential to emphasize that more work is required to assess the performance and the capability of various engineering models to capture wind farm wake effects, as only one scenario is considered in this study.
"Physics",
"Environmental Science"
] |
Nanosecond sum-frequency generating optical parametric oscillator using simultaneous phase matching
We report a nanosecond sum-frequency generating optical parametric oscillator based on a single KTiOAsO4 crystal that is simultaneously phase matched for optical parametric generation and sum-frequency generation. Pumped at a wavelength of 1064 nm by a Q-switched Nd:YAG laser, this device produces 10.4-ns-long 8.3 mJ red pulses at a wavelength of 627 nm with 21% energy conversion efficiency. This device provides a simple and efficient method for converting high energy Nd:YAG lasers to a red wavelength. © 2005 Optical Society of America
OCIS codes: (190.2620) Frequency conversion; (190.4970) Parametric oscillators and amplifiers; (190.4360) Nonlinear optics, devices; (190.7220) Upconversion; (140.3530) Lasers, neodymium; (140.3540) Lasers, Q-switched.
References and links
1. W. R. Bosenberg, L. K. Cheng, and C. L. Tang, "Ultraviolet optical parametric oscillation in β-BaB2O4," Appl. Phys. Lett. 54, 13–15 (1989).
2. A. Fix, T. Schröder, and R. Wallenstein, "The optical parametric oscillators of beta-barium borate and lithium triborate: new sources of powerful tunable laser radiation in the ultraviolet, visible and near infrared," Laser Optoelektron. 23, 106–110 (1991).
3. D. E. Withers, G. Robertson, A. J. Henderson, Y. Tang, Y. Cui, W. Sibbett, B. D. Sinclair, and M. H. Dunn, "Comparison of lithium triborate and β-barium borate as nonlinear media for optical parametric oscillators," J. Opt. Soc. Am. B 10, 1737–1743 (1993).
4. A. Fix, T. Schröder, R. Wallenstein, J. G. Haub, M. J. Johnson, and B. J. Orr, "Tunable β-barium borate optical parametric oscillator: operating characteristics with and without injection seeding," J. Opt. Soc. Am. B 10, 1744–1750 (1993).
5. U. Bäder, J.-P. Meyn, J. Bartschke, T. Weber, A. Borsutzky, R. Wallenstein, R. G. Batchko, M. M. Fejer, and R. L. Byer, "Nanosecond periodically poled lithium niobate optical parametric generator pumped at 532 nm by a single-frequency passively Q-switched Nd:YAG laser," Opt. Lett. 24, 1608–1610 (1999).
6. V. Pasiskevicius, H. Karlsson, F. Laurell, R. Butkus, V. Smilgevicius, and A. Piskarskas, "High-efficiency parametric oscillation and spectral control in the red spectral region with periodically poled KTiOPO4," Opt. Lett. 26, 710–712 (2001).
7. R. A. Andrews, H. Rabin, and C. L. Tang, "Coupled parametric downconversion and upconversion with simultaneous phase matching," Phys. Rev. Lett. 25, 605–608 (1970).
8. S. N. Zhu, Y. Y. Zhu, and N. B. Ming, "Quasi-phase-matched third-harmonic generation in a quasi-periodic optical superlattice," Science 278, 843–846 (1997).
9. O. Pfister, J. S. Wells, L. Hollberg, L. Zink, D. A. Van Baak, M. D. Levenson, and W. R. Bosenberg, "Continuous-wave frequency tripling and quadrupling by simultaneous three-wave mixings in periodically poled crystals: application to a two-step 1.19–10.71-μm frequency bridge," Opt. Lett. 22, 1211–1213 (1997).
10. K. G. Köprülü, T. Kartaloğlu, Y. Dikmelik, and O. Aytür, "Single-crystal sum-frequency-generating optical parametric oscillator," J. Opt. Soc. Am. B 16, 1546–1552 (1999).
11. X. P. Zhang, J. Hebling, J. Kuhl, W. W. Rühle, and H. Giessen, "Efficient intracavity generation of visible pulses in a femtosecond near-infrared optical parametric oscillator," Opt. Lett. 26, 2005–2007 (2001).
12. K. Fradkin-Kashi, A. Arie, P. Urenski, and G. Rosenman, "Multiple nonlinear optical interactions with arbitrary wave vector differences," Phys. Rev. Lett. 88, 023903 (2002).
13. T. Kartaloğlu, Z. G. Figen, and O. Aytür, "Simultaneous phase matching of optical parametric oscillation and second-harmonic generation in aperiodically poled lithium niobate," J. Opt. Soc. Am. B 20, 343–350 (2003).
14. T. Kartaloğlu and O. Aytür, "Femtosecond self-doubling optical parametric oscillator based on KTiOAsO4," IEEE J. Quantum Electron. 39, 65–67 (2003).
15. T. W. Ren, J. L. He, C. Zhang, S. N. Zhu, Y. Y. Zhu, and Y. Hang, "Simultaneous generation of three primary colours using aperiodically poled LiTaO3," J. Phys. Condens. Matter 16, 3289–3294 (2004).
16. Y. Dikmelik, G. Akgün, and O. Aytür, "Plane-wave dynamics of optical parametric oscillation with simultaneous sum-frequency generation," IEEE J. Quantum Electron. 35, 897–912 (1999).
17. D. L. Fenimore, K. L. Schepler, U. B. Ramabadran, and S. R. McPherson, "Infrared corrected Sellmeier coefficients for potassium titanyl arsenate," J. Opt. Soc. Am. B 12, 794–796 (1995).
18. J.-P. Fève, B. Boulanger, O. Pacaud, I. Rousseau, B. Ménaert, G. Marnier, P. Villeval, C. Bonnin, G. M. Loiacono, and D. N. Loiacono, "Phase-matching measurements and Sellmeier equations over the complete transparency range of KTiOAsO4, RbTiOAsO4, and CsTiOAsO4," J. Opt. Soc. Am. B 17, 775–780 (2000).
19. K. Kato, N. Umemura, and E. Tanaka, "90° phase-matched mid-infrared parametric oscillation in undoped KTiOAsO4," Jpn. J. Appl. Phys. 36, L403–L405 (1997).
Introduction
Laser sources in the red part of the spectrum have several important applications in areas such as display technologies, holography, biomedical systems, materials processing, and basic science. Efficient conversion of well-established lasers such as the Nd:YAG laser to red wavelengths is an attractive approach to red generation, especially beyond power or energy levels that are attainable with semiconductor lasers. Second-, third-, and fourth-harmonic generation of Nd:YAG lasers is commonly used to make green or UV sources. However, conversion to red wavelengths usually requires cascading harmonic generation with optical parametric generation. Generation of red laser beams using optical parametric oscillators (OPOs) pumped by various harmonics of the Nd:YAG laser was previously reported in the nanosecond pulsed regime [1-4]; however, the overall 1064-nm-to-visible energy conversion efficiencies of these systems are typically below 10%. A nanosecond periodically poled lithium niobate (PPLN) OPO pumped by the second harmonic of a Q-switched Nd:YAG laser was reported to achieve a maximum of 12% conversion efficiency [5]; however, the output energy was limited by the damage threshold of the lithium niobate crystal, which is imposed by aperture limitations. Similarly, a recently reported nanosecond periodically poled potassium titanyl phosphate (PP-KTP) OPO that is pumped at 532 nm [6] also suffers from output energy limitations imposed by the damage threshold, a general problem resulting from aperture-size constraints in crystal poling.
An alternative to cascading two steps of nonlinear frequency conversion is to combine them within the same nonlinear crystal using simultaneous phase matching. This technique has been demonstrated to provide efficient frequency conversion to wavelengths that cannot be reached via a single nonlinear process [7-15]. In particular, both femtosecond and continuous-wave Ti:sapphire laser beams were upconverted to visible wavelengths by combining sum-frequency generation (SFG) [10,11] or second-harmonic generation [11,13,14] with optical parametric oscillation.
In this paper, we report a nanosecond sum-frequency generating OPO (SF-OPO) in which a single KTiOAsO4 (KTA) crystal is used for both parametric generation and SFG. Pumped by a nanosecond Q-switched Nd:YAG laser at 1064 nm, this compact device generates red output pulses at 627 nm with more than 20% 1064-nm-to-627-nm energy conversion efficiency. To our knowledge, this is the first demonstration of simultaneous phase matching within an optical parametric oscillator operating in the nanosecond regime.
Experimental configuration
Our KTA crystal was designed so that, when pumped at 1064 nm, it is simultaneously phase matched for optical parametric generation and SFG at the signal and pump wavelengths. Type-II birefringent phase matching facilitates optical parametric generation of a p-polarized (horizontal, fast axis) signal beam at 1525 nm and an s-polarized (vertical, slow axis) idler beam at 3520 nm from the p-polarized pump beam at 1064 nm. SFG with an s-polarized beam at the signal wavelength and a p-polarized beam at the pump wavelength as its inputs is simultaneously phase matched for the same direction of propagation in the KTA crystal, again in a type-II polarization geometry. Coupling a portion of the p-polarized intracavity signal beam to s-polarization with a retarder plate facilitates the generation of a p-polarized sum-frequency output beam at 627 nm. This simultaneous phase matching polarization geometry belongs to class-D SF-OPOs, as defined in Ref. [16].
Our SF-OPO is based on a 20-mm-long KTA crystal that is cut along the θ = 90° and φ = 33° direction. It has antireflection coatings for the signal and pump wavelengths on both surfaces. However, we determined experimentally that simultaneous phase matching at our wavelengths occurs at θ = 90° and φ = 30.1°, requiring a corresponding tilt of the crystal. This propagation angle is very close to the value calculated using the Sellmeier coefficients given in Ref. [17] for parametric generation and those given in Ref. [18] for SFG. While the beams polarized along the slow axis (z-axis) experience no walk-off, the calculated walk-off angles associated with the beams polarized along the fast axis are relatively small, with the maximum value being 0.15° for the sum-frequency beam.
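The tilt angle quoted above follows from requiring both three-wave processes to be phase matched for the same propagation direction. The skeleton below shows the structure of that calculation for propagation in the xy-plane of KTA; it is only a sketch: the Sellmeier coefficients are placeholders (not real KTA data) and must be replaced by the published sets of Refs. [17] and [18] before the computed angle is meaningful.

```python
import numpy as np

# Skeleton of the simultaneous phase-matching calculation for propagation in the
# xy-plane of KTA (theta = 90 deg). The Sellmeier functions below use PLACEHOLDER
# coefficients: n_x, n_y, n_z must be filled in with the values of Ref. [17]
# (parametric generation) and Ref. [18] (SFG) for the result to mean anything.
def n_principal(wl_um, A, B, C, D_):
    """Generic one-pole Sellmeier form n^2 = A + B/(1 - C/wl^2) - D*wl^2."""
    return np.sqrt(A + B / (1.0 - C / wl_um**2) - D_ * wl_um**2)

coeff = {"x": (3.1, 0.04, 0.04, 0.01),   # placeholder coefficient sets,
         "y": (3.2, 0.04, 0.04, 0.01),   # NOT real KTA data
         "z": (3.4, 0.06, 0.06, 0.01)}

def n_fast(wl_um, phi):
    """Index for in-plane ('fast', p) polarization at angle phi from the x-axis."""
    nx = n_principal(wl_um, *coeff["x"])
    ny = n_principal(wl_um, *coeff["y"])
    return 1.0 / np.sqrt(np.cos(phi)**2 / ny**2 + np.sin(phi)**2 / nx**2)

def n_slow(wl_um, phi):
    """Index for z-polarized ('slow', s) beams; independent of phi in this plane."""
    return n_principal(wl_um, *coeff["z"])

def dk_opo(phi, lp=1.064, ls=1.525, li=3.520):
    """Phase mismatch of the type-II OPO step: p-pump -> p-signal + s-idler."""
    k = lambda n, wl: 2 * np.pi * n / wl
    return k(n_fast(lp, phi), lp) - k(n_fast(ls, phi), ls) - k(n_slow(li, phi), li)

def dk_sfg(phi, lp=1.064, ls=1.525, lsum=0.627):
    """Phase mismatch of the type-II SFG step: s-signal + p-pump -> p-sum."""
    k = lambda n, wl: 2 * np.pi * n / wl
    return k(n_fast(lsum, phi), lsum) - k(n_slow(ls, phi), ls) - k(n_fast(lp, phi), lp)

phis = np.radians(np.linspace(20, 40, 2001))
# the simultaneous phase-matching direction is where both mismatches vanish, i.e.
# the intersection of the two tuning curves shown later in Fig. 7
best = phis[np.argmin(np.abs(dk_opo(phis)) + np.abs(dk_sfg(phis)))]
print("candidate phi [deg]:", np.degrees(best))
```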
Our experimental setup is shown in Fig. 1. The pump source is a 4 Hz flash-lamp-pumped Q-switched Nd:YAG laser operating at 1064 nm, generating 40 mJ pulses of 14.7 ns duration (FWHM). The telescope lenses reduce the diameter of the pump beam, which has a Gaussian-like spatial profile, almost 2.5-fold, resulting in a 1.6-mm-diameter beam (1/e² intensity point) with a divergence of 0.8 mrad. We calculated the peak pump intensity at the crystal to be 260 MW/cm² under these circumstances. We chose not to increase the pump intensity any further, since the damage threshold of the surface coatings on the KTA crystal is specified to be 500 MW/cm² for a 20-ns pulse at 1064 nm. The 4.8-cm-long L-shaped cavity is made up of three flat mirrors, M1, M2, and M3, which are all high reflectors at the signal wavelength. The pump beam enters the cavity through M1 and exits through M2, making a single pass. Both of these mirrors are also high transmitters at 1064 nm. When pumped above threshold, a p-polarized intracavity signal beam is generated. An intracavity λ/4 plate whose surfaces are antireflection coated at 1525 nm acts as a polarization rotator to couple a portion of the signal beam to s-polarization. Simultaneously phase-matched SFG causes the s-polarized component of the signal to be summed with the pump beam to produce a sum-frequency beam at 627 nm. This red beam exits the cavity through M2. The residual pump and red output beams are separated from each other using dichroic mirrors M4 and M5. The idler beam at 3520 nm is mostly absorbed in M2, M4, and M5, which are made of BK7 glass. Only a small amount of idler, 0.4 mJ at the highest input pulse energy, is measured after these mirrors.
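As a quick sanity check of the quoted peak intensity, the snippet below reproduces a value close to 260 MW/cm² from the stated pulse energy, pulse duration, and beam diameter, assuming Gaussian spatial and temporal profiles (an assumption on our part; the exact number depends on the real beam and pulse shapes).

```python
import numpy as np

# Back-of-the-envelope check of the quoted ~260 MW/cm^2 peak pump intensity,
# assuming Gaussian spatial and temporal profiles.
E = 40e-3            # pulse energy [J]
fwhm = 14.7e-9       # pulse duration [s]
w = 0.08             # 1/e^2 beam radius [cm] (1.6 mm diameter)

peak_fluence = 2 * E / (np.pi * w**2)                   # J/cm^2 on axis, Gaussian beam
peak_power_factor = 2 * np.sqrt(np.log(2) / np.pi)      # Gaussian pulse: P_pk = f*E/FWHM
I_peak = peak_fluence * peak_power_factor / fwhm        # W/cm^2
print(f"peak intensity ~ {I_peak/1e6:.0f} MW/cm^2")     # ~250-260 MW/cm^2
```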
Results and discussion
In the SF-OPO, there is an optimum polarization rotation angle that maximizes the output pulse energy. When the fast axis of the intracavity retarder plate is aligned with either the p- or s-polarization direction, there is no polarization rotation. In this case, the intracavity signal beam has no s-polarized component and hence there is no SFG. The residual cavity losses experienced by the signal beam are relatively small (approximately 3%), resulting in a low OPO threshold and a high intracavity signal intensity. As the polarization rotation angle is increased by rotating the retarder plate, a portion of the p-polarized intracavity signal is coupled into s-polarization and SFG begins to take place. However, rotating the signal polarization effectively increases the total linear cavity loss experienced by the resonating p-polarized signal, increasing the OPO threshold. Consequently, at some polarization rotation angle above the optimum value, the SF-OPO falls below threshold. Figure 2 shows the 1064-nm-to-627-nm energy conversion efficiency as a function of the polarization rotation angle at a fixed pump energy of 39.2 mJ. The smallest polarization rotation angle is chosen to be 18° to avoid damage to the KTA crystal. A peak conversion efficiency of 21% is obtained at an optimum polarization rotation angle of 36°. The SF-OPO falls below threshold at 80°.
Figure 3 shows the output sum-frequency energy and pump depletion as functions of the pump energy while the polarization rotation angle is held fixed at 36°. A maximum of 8.3 mJ sum-frequency energy is obtained at a pump energy of 39.5 mJ, corresponding to 21% conversion efficiency and 37% pump depletion. The threshold energy of the SF-OPO is 16.7 mJ at this rotation angle.
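A simple photon-budget estimate puts these numbers in perspective. In the cascaded process, each 627 nm photon ideally consumes two 1064 nm pump photons (one to create the 1525 nm signal photon, one in the sum-frequency step), so the quantum-limited energy conversion is roughly 85%; the sketch below combines this with the reported 21% efficiency and 37% pump depletion. The bookkeeping is our own back-of-the-envelope reading of the reported figures, not an analysis from the paper.

```python
# Photon-budget check on the reported figures (8.3 mJ red output from a 39.5 mJ
# pump with 37% pump depletion), assuming each red photon costs two pump photons.
E_pump, E_red = 39.5e-3, 8.3e-3          # J
depletion = 0.37
lam_pump, lam_red = 1064e-9, 627e-9      # m

eta = E_red / E_pump                                  # overall energy efficiency (~21%)
eta_quantum_limit = lam_pump / (2 * lam_red)          # ~85% for this cascade
eta_internal = eta / (depletion * eta_quantum_limit)  # fraction of the depleted pump
                                                      # converted, relative to the limit
print(f"overall: {eta:.2%}, quantum limit: {eta_quantum_limit:.2%}, "
      f"internal: {eta_internal:.2%}")
```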
For any input pump level, the output sum-frequency energy can easily be maximized by adjusting the polarization rotation angle. Figure 4 shows the maximum energy conversion efficiency and the optimum polarization rotation angle as functions of the input pump energy. The spectrum of the sum-frequency output obtained at full energy is shown in Fig. 5. The bandwidths of the pump and sum-frequency spectra are approximately 0.2 nm. The pulse durations (FWHM) of the sum-frequency, intracavity signal, and pump beams are 10.4 ns, 13.2 ns, and 14.7 ns, respectively.
The simultaneous phase-matching angle of parametric generation and SFG was experimentally determined to be θ = 90° and φ = 30.1°. Figure 6 shows the sum-frequency energy at a fixed polarization rotation angle of 28° and the signal energy at a polarization rotation angle of 0° (no SFG) as functions of the internal propagation angle φ (θ = 90°). An output coupler with a reflectivity of R = 85% at the signal wavelength was used for this measurement. The input pump energy was fixed at 40 ± 0.6 mJ, and the pump beam had a diameter of 2.2 mm (1/e² intensity point), a divergence of 0.8 mrad, and a pulse duration of 15.6 ns. It was possible to rotate the crystal a maximum of 3.3° (internal) from the point of normal incidence (φ = 33°). The largest signal energy is obtained at the point of normal incidence; however, the sum-frequency energy peaks at the simultaneous phase-matching angle of φ = 30.1°.
Fig. 6. Signal energy at a polarization rotation angle of 0° (no SFG) and sum-frequency energy at a polarization rotation angle of 28° as functions of the propagation direction in the xy-plane of KTA (θ = 90°).
Even with an R = 85% output coupler for the signal, where the 1064-nm-to-627-nm energy conversion efficiency is 8.6%, approximately 70% of the s-polarized signal component is depleted. Consequently, an even stronger depletion of the s-polarized signal component can be expected when M2 is a high reflector at 1525 nm. Inherently, it is not possible to angle-tune SF-OPOs pumped by fixed-wavelength lasers, since the simultaneous phase matching condition occurs at a single direction of propagation for a fixed pump wavelength. In our case, Fig. 6 shows that deviating from the simultaneous phase matching angle results in a rapid decrease of the sum-frequency energy.
It is difficult to find a single set of Sellmeier coefficients that is accurate throughout the transparency range of a nonlinear crystal and valid for different χ(2) processes. This is even more so for KTA, for which there are several sets of quite different Sellmeier coefficients reported in the literature [17-19]. We determined which set to use for which process in an ad hoc fashion. The simultaneous phase-matching angle determined experimentally is in excellent agreement with the value calculated using the Sellmeier coefficients given in Ref. [17] for parametric generation and those given in Ref. [18] for SFG. Figure 7 shows the tuning curves of the signal wavelength calculated using the refractive index data given in Ref. [17] for parametric generation of the p-polarized signal from the p-polarized pump and the refractive index data given in Ref. [18] for the sum-frequency interaction between the s-polarized signal and the p-polarized pump. Measured signal wavelengths are also shown in the figure. The intersection of the two curves occurs at φ = 30.1°, which is the simultaneous phase matching angle for these interactions.
Fig. 7. Tuning curves of the signal wavelength calculated using the refractive index data given in Ref. [17] for parametric generation (signal is p-polarized) and the refractive index data given in Ref. [18] for SFG (signal is s-polarized). Measured signal wavelengths are also shown in the figure.
The Sellmeier coefficients given in Ref. [17] are more accurate than those given in Ref. [18] for calculating the wavelengths of parametric generation. Signal wavelengths measured for various values of φ are also shown in Fig. 7. They are within ±0.7 nm of those calculated using the Sellmeier coefficients given in Ref. [17]. However, using the Sellmeier coefficients given in Ref. [18] for parametric generation results in more than 6 nm difference between the calculated (not shown) and measured signal wavelengths.
We also conclude that the Sellmeier coefficients given in Ref. [18] are more accurate than those given in Ref. [17] for calculating the wavelengths of SFG. If the Sellmeier coefficients given in Ref. [17] were used for both parametric generation and SFG, the simultaneous phase matching angle would be calculated to be φ = 24°, which is inconsistent with the experimental result.
Conclusion
We have demonstrated a 1064-nm-pumped nanosecond SF-OPO that employs a single KTA crystal for both parametric generation and SFG. The output energy at 627 nm is 8.3 mJ, corresponding to an energy conversion efficiency of 21%. Accurate determination of the simultaneous phase-matching angle is essential for achieving a high conversion efficiency for the red output. To our knowledge, this is the first demonstration of a nanosecond SF-OPO. This device provides a simple and efficient method for converting high-energy Nd:YAG lasers to a red wavelength.
Fig. 2. Sum-frequency energy conversion efficiency as a function of the polarization rotation angle. Pump energy is held fixed at 39.2 mJ.
Fig. 3. Output sum-frequency energy and pump depletion as functions of pump energy. Polarization rotation angle is held fixed at 36°.
Fig. 4. Maximum energy conversion efficiency and optimum polarization rotation angle as functions of the input pump energy. | 3,985.8 | 2005-06-27T00:00:00.000 | [
"Engineering",
"Physics"
] |
PREFERENCES OF POLES CONCERNING THE SHAPE OF REGIONAL POLICY AND THE ALLOCATION OF EUROPEAN FUNDS
Abstract The social reception of economic development processes has been underrated in studies conducted so far. The scarcity of such analyses may be perceived as a problem especially in the case of CEE states, in which economic growth has often been accompanied by a deepening of socio-economic inequalities in recent years. This article aims to identify the preferences of Poles concerning the goals of regional policy and the assignment of the European funds. Special attention was given to the differences among various categories of residents, examined in terms of their places of residence, occupational status, education, and age. The research has shown a highly positive attitude of Poles concerning the European funds, and statistically significant relations between selected socio-demographic characteristics of Poles and their preferences concerning the places and fields of activity to which the funds should go.
Introduction
Since the introduction of the so-called Delors I Package making effective the provisions of the Single European Act (SEA) of 1986, the European Union's structural funds have become chief instruments intended to ensure economic and social cohesion at the Community level (Bailey and De Propris, 2002; De Michelis and Monfort, 2008; Paraskevopoulos and Leonardi, 2004). With the transformation of the principles of the Community's regional policy, there was a growing conviction of the great importance of those two cohesion dimensions for its development. Also growing was the weight attached to territorial cohesion, as reflected first in the approach to its adoption in the Lisbon Treaty, and then in the statement made in the Territorial Agenda of the European Union that territorial cohesion was the basic goal of the EU spatial policy (Faludi, 2009; Cotella, 2012). Establishing this goal was connected, among others, with the admission of ten new members to the EU in 2004, and then Bulgaria and Romania in 2007. Because of the distinctly lower level of economic development in those countries than in the 'old' EU members, the enlargement meant a great increase in inter-regional differences. It was the new members that became the greatest beneficiaries of the European cohesion policy and that took an active part in working out its principles: the states of Central and Eastern Europe (CEE) were active in preparing the Territorial Agenda of the European Union, thus in fact determining its shape (Cotella, 2012).
In the 2007-2013 financial perspective, 347 billion euros were allotted to the EU cohesion policy. Nearly half of this sum went to the 12 new members (Wokoun, 2007), the highest proportion - almost 20% of the means, or as much as 67 billion euros - going to Poland. This was due to a combination of two factors: its large population (38 million) and its sub-average level of economic development (lower in terms of per capita GDP than in the majority of the EU members from Central and Eastern Europe).
The use of such substantial financial means as those Poland has received in the form of Community funds is closely connected with the adoption of a specific development policy (model) and strategic, integrated thinking about regional development. This is especially significant in the CEE states because of the complexity and the scale of the socio-economic transformation they have undergone since 1989, and because of the large-scale effect of this transformation on spatial differences (Adams, 2006; Baláž, Kluvánková-Oravská and Zajac, 2007; Finka, 2011; Korec and Rusnák, 2013). The development problems still facing Poland and the remaining CEE countries include territorially uneven economic growth and greater differences in the level of economic development among regions in comparison with the old EU states (Bachtler and Gorzelak, 2007). This concerns in particular the ever-growing divergence between the dynamically developing metropolitan areas and the peripheral regions with their high unemployment rate and poverty (Czyż, 2012; Smętkowski, 2013; Churski, 2014).
An example of a policy intended to strengthen territorial cohesion is the new 'National Strategy for Regional Development 2010-2020' adopted by the Polish government in 2010. It assumes efficient use of individual territorial development potentials to achieve medium-term growth in the economy, employment and spatial cohesion. However, observers both from Poland and abroad have some doubts about the attainment of those directions because experience shows that the achievement of the declared regional policy targets has been rather limited in Poland so far (Czyż and Hauke, 2011; Czyż, 2012; Ferry, 2013). As Churski puts it (2014, p. 76), 'the development policy pursued so far has proved to be of limited effectiveness in the convergence of socio-economic development at the regional level while producing a divergence noticeable at the local level'.
Whatever the doubts regarding the realistic chance of attaining the above development directions, one can note that they have a significant social dimension: they are supposed to prevent marginalization, and the local measures taken should be based on partnership. To achieve this, it seems necessary to have a detailed knowledge not only of the development potentials of various areas, but also of the opinions and attitudes of their residents concerning preferred development directions, and hence their preferences as to the fields of allocation of the European funds. This becomes obvious when reflecting on the main goals of economic development itself, which - simplifying greatly - always affects people, and its effects are supposed to serve them (Cox, 2011).
The knowledge of the residents' opinions about the allocation of the European means is also important for more particular reasons. First, in Poland, as in many other less wealthy EU states, the means obtained from the European funds make up a substantial part of the local budgets (Swianiewicz et al., 2010; Gonçalves Veiga, 2012). Second, those means play a special role in investments made by local governments: according to Swianiewicz et al. (2010), in the years 2004-2008 over 90% of the funds obtained by them went to investment. Third, learning and accommodating the opinions of the residents about the use of the European funds and regional policy directions could be a factor contributing to greater public trust in administrative organs of various rungs and a small step towards overcoming the 'culture of distrust' which, as Sztompka (1996) observes, still pervades Polish society at all levels of social life. An adverse effect of the lack of trust on the implementation of EU regional policy in Poland has been demonstrated by Swianiewicz et al. (2010) and Lackowska-Madurowicz and Swianiewicz (2013). Fourth, learning the opinions of the residents about the use of the European means is increasingly important with the advancing professionalization of the local governments in obtaining and using them (Swianiewicz et al., 2008; Swianiewicz, 2013), and also because of the great significance of those means for strategic development planning by local authorities (Bachtler and Turok, 2013). And finally, studies conducted so far show that most Poles have not only supported their country's EU membership throughout the entire period after its accession in 2004, but can also see the beneficial effect of the European funds obtained (Cichocki, 2011). In this matter, the situation in Poland is, on the one hand, similar to the one observed in Western Europe in the late 1990s, where the increase in the budget of structural funds was accompanied by an increase in support for the EU (Osterloh, 2008). On the other hand, though, it is unique, as Poland ranks highest in the EU in terms of familiarity with EU-funded projects among citizens and the belief in their positive impact on socio-economic growth (Eurobarometer, 2013). However, as Cichocki (2011) demonstrates, the beneficial effect of the European funds in Poland is seen fairly stereotypically, in terms of infrastructural investment (mostly transport infrastructure). This leads to people's appreciation of the role of the funds in their country's development while being blind to major individual (personal) advantages generated from their use.
The above arguments and the fact that the attitudes and preferences of Poles concerning the European funds are still poorly known (like those of the residents of other EU states) were the main motives behind the research undertaken for the purposes of this article. Its main target was to uncover and understand the preferences of Poles concerning the country's regional policy and the assignment of the European funds in the successive years. Special attention was given to differences - so far not analyzed - among various categories of residents concerning their preferences for allotting the funds to particular places and fields of activity, examined in terms of their places of residence, occupational status, education, and age. This article is intended to fill the gap in research on the social context of economic development, which at present is largely connected with the regional policy of the European Union and its national equivalent.
Research methods
We used a diagnostic survey method involving the accumulation of knowledge about social phenomena, the views and opinions of selected communities, and the intensification and trends of the various phenomena (Hackman and Oldham, 1975). The goal of the poll conducted for the purposes of this research was to identify the attitudes and preferences of Poland's residents concerning the regional policy pursued so far and the allocation of the European funds in the successive years.
During the development of diagnostic studies, many survey research techniques have been worked out (Oppermann, 1995; Schmidt, 1997; Stanton, 1998; Lazar and Preece, 1999; Jansen, Corley and Jansen, 2007). In spite of the increasingly popular modern survey research techniques replacing the traditional PAPI technique, it was this method that was used in the present research. The basic reason for choosing it was the wish to conduct the survey research not only in the largest cities but also in peripheral areas, which made the CATI and CAWI techniques less fit for the purpose.
Based on the filled questionnaires, a database was constructed with the answers of all the respondents, which allowed a further mathematical-statistical analysis of the material collected. The respondents were divided into groups, the criteria being the socio-demographic variables listed in Table 1. We used several statistical methods. When examining the relations between the responses given and the socio-demographic characteristics of the respondents, correlation analysis was employed. Because the analyzed variables were reduced to binary ones, we used Kendall's tau-b correlation coefficient (also known as the Kendall rank coefficient). This is a non-parametric method making it possible to establish whether two variables can be regarded as statistically dependent (without assuming a specified statistical distribution of the variables analyzed). The values of tau-b, as of the classic Pearson's correlation coefficient, range from -1 (100% negative association) to +1 (100% positive association). A value of zero indicates the absence of association.
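For concreteness, the snippet below mimics this analysis step on synthetic data: two binary (0/1) variables are generated and their association is measured with Kendall's tau-b via scipy. The data are invented purely to illustrate the computation; they are not the survey data.

```python
import numpy as np
from scipy.stats import kendalltau

# Each respondent is coded 1/0 for a socio-demographic category and 1/0 for a
# preference; the association is then measured with Kendall's tau-b.
rng = np.random.default_rng(0)
lives_in_village = rng.integers(0, 2, size=500)
# synthetic preference that is weakly linked to the place of residence
prefers_peripheral = (rng.random(500) < 0.35 + 0.15 * lives_in_village).astype(int)

tau_b, p_value = kendalltau(lives_in_village, prefers_peripheral)  # tau-b by default
print(f"tau-b = {tau_b:.3f}, p = {p_value:.4f}")
```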
In order to corroborate the presence of selected statistical dependences identified by Kendall's tau-b correlation coefficient, we used Kendall's partial tau correlation. It allows estimating the strength of the relation between a pair of variables while eliminating the effect of other variables.
Organization of the research and characteristics of the respondents
The survey research was conducted from August 20 to September 10, 2012 in six out of Poland's 16 administrative regions (województwa, or voivodeships): Wielkopolska, Małopolska, Pomerania, West Pomerania, Kujavia-Pomerania, and Łódź. Poll takers collected the opinions of randomly selected respondents, both in the capitals of the regions (metropolitan centers, i.e. Poznań, Kraków, Gdańsk, Szczecin, Bydgoszcz, Toruń, and Łódź) and in the remaining towns and villages located at various distances from a region's center.
The questions concerned opinions about the effect of the European funds on socio-economic development, the knowledge of the investments carried out in the metropolitan areas and financed from Community means, and the preferences as to the place and field of activity that those means should go to.
More than one-third of those surveyed were permanently employed (38%). Students were also represented in a significant proportion (31%). Entrepreneurs accounted for 14% of the respondents, while the share of the unemployed was just over 8%, and pensioners 9%. They were mostly well educated: every other person had secondary education (49%), and every third, higher education (32%). Close to 15% had vocational education, and the remaining 3% were people with basic or no education.
Preferences of Poland's residents concerning the regional policy conducted and the allocation of European funds in the light of the research results
The first step in the analysis of the preferences regarding the regional policy and the allocation of the European funds was to find out whether the respondents considered those funds significant, i.e., whether they thought they affect Poland's socio-economic development in a significant way. As it turned out, a decided majority of respondents observed such an effect: more than 80% of the respondents thought it to be strong or very strong. This result is in line with those obtained in a study conducted by evaluators earlier (SMG/KRC, 2011). Out of the various categories of respondents, most of those who stressed the significance of the funds for the development processes were young and well educated: students, people with higher education, and those aged up to 35 (almost 90% of the respondents from each of those categories). Those who thought them less important were mostly poorly educated, not economically active (the unemployed, pensioners), and over 56 years of age (just over 60%). In turn, there were no major differences in the significance for socio-economic development ascribed to the funds by residents of towns and villages: it was declared to be strong or very strong by 82% and 77%, respectively, of respondents from those groups (Figure 1). An analysis was then made of the respondents' preferences as to the fields of activity to which the European funds should go. For this purpose, six potential general fields were distinguished, the weight of which fully depended on the decisions of public authorities from various levels (among those disregarded were, e.g., subsidies for farmers). The respondents were asked to choose the three most important fields and to assign to them a weight ranging from 3 (the most important) to 1 (the least important). Figure 2 presents the fields preferred by the respondents in terms of the average weight of responses.
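As an illustration of the bookkeeping behind the 'average weight' values reported in the next paragraph, the toy snippet below computes per-field average weights from a handful of invented responses, each naming three fields with weights 3, 2 and 1. The responses are fabricated purely to show the calculation, not the survey data.

```python
import numpy as np

# Each respondent names three fields with weights 3, 2, 1; unchosen fields count as 0.
fields = ["transport infrastructure", "unemployment/skills", "public services",
          "environment", "research", "entrepreneurship"]
responses = [  # invented example answers: {field: weight}
    {"transport infrastructure": 3, "public services": 2, "unemployment/skills": 1},
    {"unemployment/skills": 3, "transport infrastructure": 2, "environment": 1},
    {"public services": 3, "entrepreneurship": 2, "transport infrastructure": 1},
]
avg_weight = {f: np.mean([r.get(f, 0) for r in responses]) for f in fields}
for f, w in sorted(avg_weight.items(), key=lambda kv: -kv[1]):
    print(f"{f:28s} {w:.2f}")
```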
According to the respondents, the European funds should go, first of all, to measures (projects) improving the quality of transport infrastructure (1.27). A similarly high rank was assigned to fighting unemployment and upgrading the population's skills (courses and training, 1.16), and to better access to public services (1.14). Environmental protection, scientific research, and support for entrepreneurship were listed among the less significant goals. It is worth comparing the findings of the current research with the results obtained in earlier studies. According to SMG/KRC (2011), citizens indicated that the allotment of European funds should go to transport infrastructure and means of transportation (48%), subsidies for farmers (28%), education and science (13%), and measures fighting unemployment (12%). This means that the most readily noticed investments are primarily the so-called hard ones (infrastructure) and those that raised the interest of the media (infrastructure, subsidies for farmers; Cichocki, 2011).
The present research shows that the European funds are not allotted to what Poles regard as the most important fields. This primarily concerns high expectations as to an improvement in access to public services (taking place in reality, but which they do not seem to notice), and the use of Community means to fight unemployment and organize vocational courses and training. Those measures are acknowledged by the respondents, but their magnitude is thought to be unsatisfactory. When comparing the above results with those obtained in the other 'new' EU member states, one can observe that Poles' expectations as to the fields of activity to which the European funds should go are typical for the societies of the CEE post-socialist states (Eurobarometer, 2013). They are even fairly similar to those of socio-political elites, especially in expecting the means to go to improving transport infrastructure (Dostál, 2013).
It is also worth comparing the above results against those obtained by Kisiała (2013) and Kisiała and Stępiński (2013), who analyzed the opinions of local government representatives in Poland on the acquisition and use of the European funds. They thought that those funds should be used primarily to improve the quality of the transport infrastructure, and only then to improve access to public services and for environmental protection. Thus, there is a significant divergence between the opinion of the local authorities, favoring almost exclusively 'hard' projects, and the opinion of the residents, for whom 'soft' projects are equally important, especially those devoted to reducing unemployment and improving their employability (i.e., potentially dedicated to some of them).
An equally important issue regarding where the EU-funded intervention should go is its geographical destination. Greatly simplifying, it can be reduced to a fundamental dilemma of regional policy: equality or efficiency in the socio-economic development of the country as a whole (Gorzelak, 2006; Szul, 2007; Hübner, 2008). It involves the choice between the so-called equalizing model, which assumes support primarily for the less developed and peripheral areas, and the so-called polarizing-diffusion model, with funds going first of all to the areas best developed in economic terms (especially metropolitan areas and cities). The polarizing-diffusion mechanism rests on the assumption that the rate of return is highest for capital invested in the strongest economic units, which - in accordance with the classical conception of growth poles (Perroux, 1955; Boudeville, 1966) and polarized development theory (Friedmann, 1967, 1972) - are then expected to spread growth to the surrounding areas. When analyzing the preferences of respondents for the places where the EU funds should be allocated, no clear tendency emerged with regard to the support given to any type of area when analyzed in general terms. 36.3% of those surveyed chose that primarily peripheral areas and villages should be supported, while 34.7% stated that it was metropolitan areas and cities that should be supported first of all. The remaining respondents (29.0%) opted for giving no preference to either of those two categories of area.
The situation looked different when the preferences were analyzed from the perspective of the socio-demographic characteristics of the respondents. Respondents representing most of the categories examined (10 out of 15) preferred the EU funds to go to peripheral areas (Figure 3). This was especially pronounced in the case of rural dwellers, more than half of whom (57%) opted for the support under regional policy to go exclusively to peripheral areas (as against 21% of them being in favor of the support going primarily to cities and metropolitan areas). There was also a high predominance of responses preferring peripheral and rural areas among persons aged 46-55 and those more than 66 years old. In the remaining categories of respondents, the predominance of the support for peripheral areas over that for cities and metropolitan areas was only slight (no more than 10%). In this group were persons with education below secondary, with secondary and higher education, permanently employed workers, the not economically active (pensioners and the unemployed), and persons aged 25 and under (although here there was only a minimal predominance of peripheral areas and villages). Cities and metropolitan areas as the chief destinations of the European funds predominated in the answers of respondents belonging to the following categories: urban dwellers, entrepreneurs, students, and persons aged 26-35 and 36-45.
Statistical relations between the socio-demographic characteristics of respondents and their preferences for spending European funds
A detailed insight into the potential relations between the socio-demographic characteristics of the respondents and their preferences as to the spending of the European funds was obtained via correlation analysis. To that end, the variables were binarised: each respondent was assigned the value 1 for the variant of a variable he/she represented (e.g., a person with higher education) and the value 0 for the other variants of this variable (a person with secondary education; a person with education below secondary). Because of the binarisation, we used Kendall's tau-b correlation coefficient. The conclusions drawn on this basis were additionally verified by calculating the value of Kendall's partial tau correlation coefficient. This procedure was necessary to make sure that the identified dependences were not a result of a direct effect of other, potentially present, variables.
The first analysis focused on the statistical relation between the socio-demographic characteristics of the respondents and their spatial preferences for spending the European funds. Not surprisingly, the strongest statistical dependence occurred between the place of residence (town or village) and the declared backing for support going to metropolitan areas and cities or to peripheral areas and villages (Table 2). Rural dwellers preferred the means to go to rural and peripheral areas, while urban dwellers preferred their allocation in urban and metropolitan areas. Although both relations were not strong (tau-b = 0.169 and 0.121, respectively), they were statistically significant even at p ≤ 0.001. These results also indicate a slightly stronger tendency of rural dwellers to support rural areas than of urban dwellers to support cities and metropolitan areas. It is worth adding that the obtained coefficient values do not change much in the case of Kendall's partial tau correlation where the control variable is the level of education: for the relation between life in a village and support for villages and peripheral areas, it was 0.160, and for life in a city and support for cities and metropolitan areas, 0.114; in both cases at p ≤ 0.001.
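The control-variable check mentioned above can be sketched as follows. The function uses the usual first-order partial formula for Kendall's tau, τ_xy·z = (τ_xy − τ_xz·τ_yz)/√((1 − τ_xz²)(1 − τ_yz²)); the arrays are synthetic stand-ins for a characteristic, a preference, and a control variable such as education.

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_partial_tau(x, y, z):
    """First-order partial Kendall tau between x and y, controlling for z."""
    t_xy = kendalltau(x, y).correlation
    t_xz = kendalltau(x, z).correlation
    t_yz = kendalltau(y, z).correlation
    return (t_xy - t_xz * t_yz) / np.sqrt((1 - t_xz**2) * (1 - t_yz**2))

# Invented binary data: x and y are both weakly linked to the control variable z.
rng = np.random.default_rng(1)
z = rng.integers(0, 2, 500)                        # control variable (e.g. education)
x = (rng.random(500) < 0.4 + 0.2 * z).astype(int)  # characteristic linked to z
y = (rng.random(500) < 0.4 + 0.2 * z).astype(int)  # preference linked to z
print(f"plain tau-b: {kendalltau(x, y).correlation:.3f}, "
      f"partial tau: {kendall_partial_tau(x, y, z):.3f}")
```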
Apart from the place of residence, statistically significant relations were also found between the preferences concerning the spatial allocation of the European funds and the education and age of the respondents. Persons with higher education and those aged 26-35 had a slight tendency to allocate the means in cities and metropolitan areas (tau-b = 0.095 at p ≤ 0.002, and tau-b = 0.067 at p ≤ 0.028, respectively), while persons aged 46-55 tended to prefer peripheral areas and villages. Those dependences also turned out to be statistically significant with the place of residence (town or village) as a control. Kendall's partial correlation coefficient was 0.085 (at p ≤ 0.005) for the relation between preferred support for metropolitan areas and cities and higher education, and 0.064 (at p ≤ 0.044) for the relation with the age of 26-35. These results suggest that the support for the allocation of the means in metropolitan areas and cities is connected with a deeper pro-European attitude that can be identified with a pro-modernization approach. As earlier studies have already demonstrated, support for European Union membership grows most strongly with economic status, and slightly less with an increase in education and occupational status (Doyle and Fidrmuc, 2006).
However, considering the relatively low values of the correlation coefficients (Table 2), one can state that the respondents' preferences regarding the spatial allocation of the European funds, although showing some relations with their socio-demographic characteristics, are connected with them only slightly and seem to be an individual matter depending on a complex attitude towards the processes of economic development.
The other analysis focused on the potential relation between the respondents' characteristics and their preferences for the fields of activity where the European funds should go, i.e. their support for specified types of measure (Table 3). Certain statistical relations were found between the socio-demographic features of the respondents and four fields of activity: (1) improvement in access to public services, (2) fight with unemployment and improvement in skills, (3) improvement in the quality of transport infrastructure, and (4) support for entrepreneurship. In turn, no statistically significant relation was found to hold between the respondents' socio-demographic characteristics and their preferences for spending European funds on scientific research and environmental protection. This seems to be somewhat puzzling, especially the absence of a relation between the respondents' education and those two fields of activity. The field of activity showing the strongest statistical link with the respondents' socio-demographic characteristics was improvement in access to public services. Probably because health care is one of the basic public services, this field correlated most strongly with the respondents' age (66 years and over - tau-b = 0.101, at p ≤ 0.001) and their status of pensioners (tau-b = 0.146, at p ≤ 0.001). At the same time, those two variables were correlated negatively with the choice of improvement in the quality of transport infrastructure (see Table 3). As in the previous case, one can suppose this to be directly connected with the current needs of this category of respondents, for whom the comfort of movement takes a remote place in the hierarchy of needs. Among the other statistically significant interdependences are the relation between having the status of an entrepreneur and supporting entrepreneurship (tau-b = 0.110, at p ≤ 0.001), the relation between having the lowest level of education and giving priority to the fight with unemployment and improvement in skills (tau-b = 0.083, at p ≤ 0.006), and a negative dependence between the support for improvement in access to public services and higher education (tau-b = -0.086, at p ≤ 0.005). As with the preferences for the spatial allocation of investment supported from the Community funds, so in the case of the preferred fields of activity the strength of the link between them and the respondents' socio-demographic features was weak. In most cases it was limited - as presented above - to fairly obvious relations following from the specific needs of the given category of residents.
Conclusions and discussion
What motivated the research described in this paper was the underrating of the social perception of economic development processes and regional policy in studies conducted so far. This is especially puzzling in the case of the CEE states, in which economic growth has often been accompanied by a deepening of income and socio-economic inequalities (Szlachta and Zaleski, 2010; Domonkos, Ostrihoň and Jánošová, 2013).
The results presented in this article can be systematized in the form of a few fundamental conclusions. The research corroborated the highly positive attitude of Poles to the European funds, already reported in earlier studies (Cichocki, 2011; SMG/KRC, 2011; Eurobarometer, 2013), and their conviction about their great significance for Poland's economic development. The significance assigned to them grew with the respondents' educational and occupational status. Their important role was also appreciated by young people and urban dwellers.
The results also show that Poles' preferences as to the places and fields of activity that the funds should go to are to some extent connected with their socio-demographic characteristics. On the other hand, it should be stressed that the statistical links revealed (often coming down to the dichotomy: young, highly educated urban dwellers vs. older, poorly educated, old-age pensioners or the rural unemployed) accounted for a mere few percent of the preferences of the given category of respondents. This observation holds even for such an obvious - one might think - dependence as the relation between living in a town or a village and expecting regional policy to support primarily metropolitan areas and cities or peripheral and rural areas, or opting for no spatial preferences in this respect. The absence of strong statistical relations between the socio-demographic characteristics of the respondents and their preferences for the allocation of the European funds can be accounted for in two non-exclusive ways.
On the one hand, this can be due to the lack of well thought-out attitudes towards the funds. Poles, one might conclude, know little about them, do not care about them much (SMG/KRC, 2011), and do not see direct personal advantages deriving from them (Cichocki, 2011). On the other hand, the causes of the weak relation between the socio-demographic characteristics of the respondents and their specific preferences for spending the funds can be sought in a high level of individualization of attitudes and behavior, described by Bauman (2001) as typical of the fast-modernizing European societies. The individualization of attitudes can be a challenge to public authorities taking measures intended to make selected population categories partial to the given fields of allocation of the European funds. Therefore, it seems that public authorities should primarily initiate a public discourse about the allotment of the funds and make efforts to include in it social partners and various representatives of the residents under a policy of citizens' empowerment. This could be a factor breaking the 'culture of distrust' diagnosed by Sztompka (1996). Its scale in Poland is revealed by the results of polls concerning social trust: although only one in four Poles does not trust the European Union (Cichocki, 2011), as many as one in three does not trust local authorities (of a town, a commune), and almost one in two does not trust central authorities, courts, and public administration officials (Swianiewicz et al., 2008; CBOS, 2012).
Finally, the research showed that Poles think the European funds should be spent on somewhat different fields than those chosen, for instance, by representatives of local governments and documented in earlier studies (Kisiała and Stępiński, 2013). While residents considered 'hard' projects (e.g., investments in transport infrastructure) and 'soft' ones (e.g., improvement in the population's skills) to be equally significant, local authorities tended to give priority to 'hard' projects. These results are interesting because in their classic paper Rodríguez-Pose and Fratesi (2004) found that the concentration of development funds on infrastructure did not lead to significant returns in the 'old' EU. They also documented that only investment in education and human capital brought positive, significant medium-term returns. A similar opinion can also be found in Bachtler and Gorzelak (2007, p. 319), who stated that for the so-called new member countries the main longer-term need was 'to upgrade human and knowledge capital, shifting the strategic focus of intervention away from infrastructure and towards education (including higher education), training, innovation, technology transfer and diffusion'. Thus, an interesting situation seems to have appeared in Poland concerning development preferences: Poles hold opinions similar to those of international experts, while the preferences of local government representatives appear to depart substantially from those of the experts (Kisiała and Stępiński, 2013). These differences in attitudes towards the factors of Poland's regional and local development may generate significant problems in the future by producing difficulties in working out the optimum fields and places of fund allocation that would combine both economic efficiency and social acceptability.
The threat described above, despite the lack of adequate research, can pose a problem that may hinder the introduction of an effective and widely accepted regional policy in Poland, and at the same time it may also undermine the formation of a European identity. It appears that this situation does not concern only Poland but also other EU countries, where the support for EU membership is significantly lower (Eurobarometer, 2013; Mendez and Bachtler, 2016).
Figure 1: Proportion of persons (%) declaring the effect of the European funds on socio-economic development to be very strong and strong, and their features. Source: Authors' own compilation
Figure 2: Fields of activity preferred by the respondents in terms of the average weight of responses (legend labels recovered from the figure: fight with unemployment and improvement in skills; improved quality of transport infrastructure)
Figure 3: Categories of respondents and geographical preferences in spending European funds. Source: Authors' own research
Table 2: Values of Kendall's tau-b coefficient of correlation between the selected socio-demographic variables and spatial preferences for spending the European funds
Table 3: Values of Kendall's tau-b coefficient of correlation between the selected socio-demographic variables and the preferred fields of allocation of the European funds | 7,239.4 | 2018-06-29T00:00:00.000 | [
"Economics"
] |
Studies on the Efficacy of Various Antimycotic Drugs on Emerging and Reemerging, Superficial, Cutaneous and Subcutaneous Mycotic Infections
Introduction: The efficacy of five systemic and topical antifungal medications, Voriconazole, Clotrimazole, Beclometasone, Itraconazole, and Fluconazole, on dermatomycosis, which affects the superficial layers of the skin, nails, feet, and hair, was tested with 180 patients. Methods: The work included specimen collection, processing, microscopy, and culture, as well as antifungal susceptibility testing using the E-test method. The Candida species were confirmed, and their susceptibility to Voriconazole and Fluconazole was tested, using the automated Vitek 2 system. Results: The final strain identification indicated 41 dermatophytes (69.49%), 11 non-dermatophytic molds (NDM) (18.64%), and 7 yeasts (Candida; 11.87%). Candida was the most prevalent non-dermatophyte species found. Trichophyton rubrum was the most prevalent species isolated in Tinea corporis, T. cruris, T. capitis, and T. faciei. When tested with the E-strips, all dermatophyte strains showed the greatest vulnerability to Beclometasone and Clotrimazole (MIC range of 0.04–0.64), but homogeneous resistance to Fluconazole (i.e., MIC 32 µg/ml).
INTRODUCTION
Dermatomycosis (superficial fungal infection) is one of the most frequent dermatological illnesses. High temperatures, poor personal cleanliness, poor diet, suffocation, severe systemic disorders such as diabetes, drug resistance, immune-weakened states such as HIV infection, and other factors have all contributed to the spread of these infections [1].
Mycoses (fungal diseases) are divided into three categories: superficial, deep, and systemic. Skin mycosis is caused by dermatophytes. The lesions occur in circular patterns, with margins that are desquamated and erythematous. Dermatophytosis is an infection caused by dermatophytes attacking keratinized tissue (skin, hair, and nails) in humans and other animals.
Even with the most recent breakthroughs in diagnosis and treatment, mycoses remain a major cause of morbidity and mortality. The initiation of proper therapy at the appropriate time has a direct positive impact on the patient's recovery [2]. Fungal infections are routinely treated with azole antifungals. Itraconazole, Fluconazole, Voriconazole, Posaconazole, Isavuconazole, Clotrimazole, and Beclometasone are some of these drugs [3]. New antimycotic medications have recently become available, allowing for more treatment options as well as preventive objectives.
Resistance has arisen in fungal strains as a result of the increased improper use of antimycotic medications. Resistance has emerged in two ways: multiple species gaining secondary resistance, or susceptible species being replaced by resistant ones, affecting the epidemiology of mycotic diseases [4]. Antifungal susceptibility testing methods can detect antifungal resistance as well as determine the optimal treatment strategy for a particular fungus [5].
The VITEK-2 yeast susceptibility test is an automated method for identifying yeast species and determining antifungal susceptibility by analyzing yeast growth. The system is a simplified version of the broth dilution method that includes a software tool that analyzes and interprets susceptibility test findings based on drug MIC values using CLSI clinical breakpoints.
METHODS
At the Orlu Local Government Area of Imo State, Nigeria, a two-year prospective study was undertaken in 10 selected hospitals. There were 180 patients in total. The institutional ethics committee gave their approval. The information was gathered in a predetermined format. Specimen collection, processing, microscopy, and culture were performed on patients with sufficient scales, and antifungal susceptibility testing was performed using the E-test method.
Isolates of E-Test of Trichophyton rubrum ATCC 28188 and Trichophyton mentagrophytes ATCC 9533 were used as controls in the study. To improve sporulation, Dermatophytes were subcultured on Potato Dextrose Agar (PDA) and incubated at 28°C for 7 days. A haemocytometer was used to adjust the conidial and hyphal suspension to 1x10 6 /ml after the growth was harvested in sterile saline. A swab dipped in the inoculum suspension was used to inoculate Mueller Hinton Agar (MHA) plates. After that, the inoculation plates were dried before the E-strips were applied. The susceptibility of several dermatophytes isolated to Fluconazole, Itraconazole, and Voriconazole was determined using commercially available E-strips (HIMEDIA). To serve as a control, sterile disks were impregnated with 10 l of a 1:100 solution of DMSO. The Estrips for the three medications were put to each infected and dried plate, and then incubated at 28°C for up to 16 hours or longer for filamentous fungi, depending on the fungus' genus. When growth occurred, the size of inhibitory zones for each antifungal drug was measured, as was done in their study. 6 VITEK-2: The automated Vitek 2 was used to confirm Candida species and test susceptibility to Voriconazole and Fluconazole, as well as beclometasone and clotrimazole. Ethical Clearance is a term used to describe the process of obtaining ethical approval Data Analysis: MIC range was acquired and compared with all the isolates studied.
Direct microscopy and culture were used to identify the distinct species. While 37 (20.55%) of the samples were positive on both microscopic and culture examinations, 52 (28.89%) of the samples were positive only on culture. A total of 49 (27.7%) samples were positive only on microscopy, whereas 42 (23.33%) were negative on both tests. Antifungal susceptibility testing could not be performed on the 91 culture-negative samples. Overall, only 49.44% of the samples were culture positive.
The difference in species distribution based on clinical presentation was statistically significant (p = 0.001). The E-test method was used to determine the antifungal sensitivity of dermatophyte strains to Voriconazole, Fluconazole, Itraconazole, Clotrimazole, and Beclometasone.
The MIC of Voriconazole ranged from 0.007 µg/ml to 0.064 µg/ml, as indicated in Table 2. After being tested with the E-strips, all dermatophyte strains showed uniform resistance to Fluconazole, with a MIC of 32 µg/ml.
The low susceptibility of dermatophytes to Fluconazole shown by the E-test method (uniform MIC of 32 µg/ml) is consistent with the findings of investigations conducted by [5], [2], and [1]. Fluconazole resistance may have developed as a result of widespread use and easy access to the drug in pharmacies, as well as self-medication by patients due to its over-the-counter (OTC) status. Itraconazole exhibited a MIC range of 0.015 µg/ml to 0.064 µg/ml in this investigation.
[8] found a MIC range of 0.038-1.5 µg/ml for Itraconazole in their study. [4] reported similar results in their study on the evaluation of the E-test for dermatophytes. Beclometasone, with a MIC range of 0.004 to 0.064 µg/ml, was the most effective against the three dermatophyte species, followed by Clotrimazole, with a MIC range of 0.005 to 0.064 µg/ml.
Clotrimazole (MIC 1 µg/ml) and beclometasone (MIC 0.13 µg/ml) were shown to be equally effective against Candida species. The high sensitivity of Candida species to beclometasone (MIC 0.13 µg/ml) found in this investigation appears to be consistent with [2]. Differences in antifungal drug MICs across the different species were not statistically significant in this investigation (p < 0.05).
CONCLUSION
A culture sensitivity report should, ideally, govern the treatment of dermatophytic infections. Beclometasone and clotrimazole are the most appropriate therapy alternatives based on their MIC ranges.
To avoid the rapid development of medication resistance, they must be reserved only for resistant and difficult-to-treat illnesses.
DISCLAIMER
The products used for this research are commonly and predominantly used products in our area of research and country. There is absolutely no conflict of interest between the authors and the producers of the products, because we do not intend to use these products as an avenue for any litigation but for the advancement of knowledge. Also, the research was not funded by the producing company; rather, it was funded by the personal efforts of the authors.
CONSENT
It is not applicable.
ETHICAL APPROVAL
Ethical clearance was received from the Imo State Ministry of Health via a letter referenced IMMH/20/08/27. Higher susceptibility is associated with a lower MIC range. | 1,765.4 | 2022-05-16T00:00:00.000 | [
"Biology",
"Medicine"
] |
Augmented Lagrangian-Based Reinforcement Learning for Network Slicing in IIoT
: Network slicing enables the multiplexing of independent logical networks on the same physical network infrastructure to provide different network services for different applications. The resource allocation problem involved in network slicing is typically a decision-making problem, falling within the scope of reinforcement learning. The advantage of adapting to dynamic wireless environments makes reinforcement learning a good candidate for problem solving. In this paper, to tackle the constrained mixed integer nonlinear programming problem in network slicing, we propose an augmented Lagrangian-based soft actor–critic (AL-SAC) algorithm. In this algorithm, a hierarchical action selection network is designed to handle the hybrid action space. More importantly, inspired by the augmented Lagrangian method, both neural networks for Lagrange multipliers and a penalty item are introduced to deal with the constraints. Experiment results show that the proposed AL-SAC algorithm can strictly satisfy the constraints, and achieve better performance than other benchmark algorithms.
Introduction
With the rapid development of the industrial internet of things (IIoT), more and more devices are connected and controlled via wireless networks. Providing precise services for these devices to fulfill their diverse requirements becomes a fundamental issue in IIoT. Facing this challenge, three application scenarios have been defined by the International Telecommunication Union (ITU) and the Fifth Generation Public Private Partnership (5G-PPP) [1,2], namely enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC), and massive machine type communication (mMTC). In more detail, the eMBB scenario serves devices with high transmission rate requirements, such as high-definition surveillance video in factories, whose peak rate per camera can be greater than 10 Gbps [3]. mMTC refers to scenarios where a large number of devices connect simultaneously while the requirements on transmission rate and delay are not critical [4]. In contrast, URLLC serves applications with strict requirements on transmission reliability and latency, such as automatic operators and controllers [5].
To satisfy these disparate scenarios within one network infrastructure, the network slicing technique was proposed. It divides a physical network into multiple independent logical networks [6,7], where each network slice is isolated from the others and provides one kind of network service via dedicated resource allocation. To efficiently allocate resources and cope with the dynamics of wireless networks, many intelligent algorithms have been proposed. For instance, in [8], the genetic algorithm, ant colony optimization with a genetic algorithm, and the quantum genetic algorithm were used to jointly allocate radio and cloud resources to minimize the end-to-end response latency experienced by each user. In [9], two deep learning technologies, supervised and unsupervised learning, were introduced to jointly optimize the user association and power allocation problems, combining data-driven and model-driven learning. The study in [10] exploited a learning-assisted slicing and concurrent resource allocation process to jointly improve the users' service reliability and the resource utilization rate.
Following the 5G network architecture, edge servers with caching and computing capacities are deployed close to the base stations (BSs). Such a deployment enables intelligent cooperation among neighboring BSs, which is suitable for network slicing in an intelligent factory. Resource allocation is a dynamic programming problem, which can also be solved effectively by reinforcement learning (RL). In [11], RL was utilized to dynamically update the number of radio resource units allocated to each slice, where a utility-based reward function was adopted to achieve efficient resource allocation. In [12], cooperative proximal policy optimization was adopted in RL to maximize resource efficiency by considering the different characteristics of the different network slices. A general framework was proposed in [13] that uses RL to achieve dynamic resource management of dynamic vehicle networks in realistic environments. Moreover, considering the high complexity and combinatorial nature of future heterogeneous networks consisting of multiple radio access technologies and edge devices, multi-agent DRL-based methods were utilized in [14,15]. Specifically, in [14], a deep Q-network (DQN) algorithm was used in each agent to assign radio access technologies, while the multi-agent deep deterministic policy gradient (DDPG) algorithm was used to allocate power. The authors in [15] investigated a multi-agent cooperative resource allocation problem aiming at improving the data processing ability of wireless sensor networks and eliminating the non-stationarity problem in channel allocation.
However, resource allocation problems in a wireless network always involve constraints, e.g., the devices' various requirements on average latency, cumulative throughput, or average packet loss rate, which cannot be handled well by traditional RL. To manage the constraints, constrained Markov decision processes (CMDPs) arose, whose solution methods mainly fall into four classes: (1) the penalty function method, which adds penalty terms to the optimization objective to construct an unconstrained optimization problem, e.g., the logarithmic barrier function introduced as a penalty in [16]; (2) the primal-dual method, which uses the Lagrangian relaxation technique to transform the original problem into a dual problem, as in [17,18]; (3) direct policy optimization, which replaces the objective or constraint in the original problem by a more tractable function, as in [19][20][21]; and (4) the safeguard method, which uses an extra step mechanism to guarantee the constraint in each training step [22].
In addition to the constraints, a discrete-continuous mixed action space is involved in our work. Inspired by the augmented Lagrangian method, which is similar in spirit to the primal-dual method, we propose an augmented Lagrangian-based soft actor-critic (AL-SAC) algorithm to solve the network slicing problem with constraints and a hybrid action space. The main contributions of this paper are as follows.
• A two-stage action selection scheme is designed, using a hierarchical policy network, to handle the hybrid action space problem in RL, which can significantly reduce the action space;
• A penalty-based piece-wise reward function and a constraint-handling part involving neural networks for the Lagrangian multipliers and cost functions are introduced to solve the constraint problem;
• Simulation results show that our proposed algorithm satisfies the constraints, and that AL-SAC achieves a higher reward than the DDPG algorithm with a penalty item.
System Model and Problem Formulation
This section first presents the network model of network slicing and the transmission rate model. Then, considering the various requirements of the different network slices, the constraints for the different types of devices are developed. Finally, a constrained mixed integer nonlinear problem is formulated.
Network Model
As illustrated in Figure 1, we consider a wireless IIoT with multiple BSs and devices with different network requirements. We denote the set of BSs and devices by M = {1, 2, . . . , M} and N = {1, 2, . . . , N}. Moreover, the devices are categorized into three typical scenarios, i.e., eMBB, URLLC, and mMTC, which are denoted by N eM , N uR and N mM , respectively. Hence, N eM ∪ N uR ∪ N mM = N . We further denote the bandwidth available in the m-th BS by B m , m ∈ M, and its transmission power by P m . Additionally, a binary variable x mn ∈ {0, 1} is used to denote the association between BS-m and device-n, and b mn denotes the bandwidth allocated by BS-m to device-n when x mn = 1; when x mn = 0, no bandwidth is allocated.
SINR and Transmission Rate
Let d nm denote the distance between the m-th BS and the n-th device, and h nm the channel fading gain. The SINR received at the n-th device from the m-th BS can then be expressed as in [23], where A 0 denotes the path loss at the reference distance d nm = 1, α denotes the path-loss exponent, and σ 2 denotes the noise power. Hence, the transmission rate that the n-th device can achieve when associated with the m-th BS follows from the Shannon capacity over the allocated bandwidth, and the overall transmission rate achieved by the n-th device is obtained by summing over its associated BSs.
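The display equations themselves did not survive extraction; a plausible reconstruction consistent with the definitions above (the interference term and indexing are assumptions rather than the paper's exact expressions) is:

```latex
\gamma_{nm} = \frac{P_m A_0 h_{nm} d_{nm}^{-\alpha}}
 {\sum_{m' \neq m} P_{m'} A_0 h_{nm'} d_{nm'}^{-\alpha} + \sigma^2},
\qquad
r_{nm} = b_{mn} \log_2\!\left(1 + \gamma_{nm}\right),
\qquad
r_n = \sum_{m \in \mathcal{M}} x_{mn}\, r_{nm}.
```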
Requirements of Different Network Slices
As mentioned before, each device belongs to one typical scenario. To provide the required transmission service, three corresponding network slices are defined (the per-slice constraints are sketched below).
• eMBB slice: The devices served by this network slice require a high transmission rate, such as devices with real-time streaming of high-resolution 4K or 3D video [24]. That is, the transmission rate achieved by these devices has a minimum requirement, where R 0 denotes the rate threshold.
• URLLC slice: The devices served by this network slice have a strict requirement on delay, which includes the transmission delay, queuing delay, propagation delay, and routing delay [25]. Denoting them by T 1 , T 2 , T 3 , and T 4 , respectively, the end-to-end delay can be calculated as T 1 + T 2 + T 3 + T 4 . The minimum requirement concerns the wireless transmission delay, where L denotes the packet length and T 0 denotes the delay threshold. As mentioned in [26], the achievable rate of a URLLC wireless link, i.e., Equation (4) in [26], can be approximated by the Shannon capacity when the block length is large. For this reason, in this work, we use the Shannon capacity to calculate the link rate and focus on the transmission time, independent of the other delay components [27].
• mMTC slice: The devices served by this network slice have no strict rate or latency requirements [27]. Hence, to ensure the basic wireless connection, a minimum bandwidth B 0 should be allocated to each of these devices.
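The per-slice requirement equations were lost in extraction; a hedged reconstruction consistent with the thresholds R 0 , T 0 , and B 0 defined above (the exact indexing is an assumption) is:

```latex
\text{eMBB: } r_n \ge R_0, \;\; n \in \mathcal{N}_{eM}; \qquad
\text{URLLC: } \frac{L}{r_n} \le T_0, \;\; n \in \mathcal{N}_{uR}; \qquad
\text{mMTC: } \sum_{m \in \mathcal{M}} x_{mn} b_{mn} \ge B_0, \;\; n \in \mathcal{N}_{mM}.
```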
Problem Formulation
The aim of the network slicing design is to maximize the overall utility achieved by the devices in the system. Firstly, inside each network slice, proportional fairness is used as the utility function of each device to achieve fairness among the intra-slice devices [28]. More importantly, considering the disparity of throughput across the three network slices, weight preferences are used to balance their contributions to the overall utility [29]. In this work, we use w eM , w uR and w mM to denote the weights for the devices in the eMBB, URLLC and mMTC slices, respectively.
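As a hedged sketch of the weighted objective implied by the proportional-fairness choice (the logarithmic utility and the exact weighting are assumptions consistent with [28,29], since the original display equations are missing):

```latex
\max_{\{x_{mn}\},\{b_{mn}\}} \;
\sum_{n \in \mathcal{N}_{eM}} w_{eM} \log r_n
+ \sum_{n \in \mathcal{N}_{uR}} w_{uR} \log r_n
+ \sum_{n \in \mathcal{N}_{mM}} w_{mM} \log r_n .
```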
Hence, we obtain the following optimization problem, which maximizes the overall system utility via the BS-device association and bandwidth allocation. Constraint (9) requires that the bandwidth allocated to the associated devices cannot exceed the overall bandwidth of each BS; (10) indicates that a device can only be associated with one BS at a time; and (11), (12) and (13) represent the per-slice network requirements introduced above.
In essence, the above problem is a constrained mixed integer problem. In the following, we propose an augmented Lagrangian-based reinforcement learning (RL) algorithm with a soft actor-critic (SAC) framework to solve this problem. Then, with the optimized device-BS association and the bandwidth allocated to each device, an extra but simple procedure is needed to complete the slicing details: (1) calculate the number of radio resource blocks (RRBs) needed for each slice in one BS, and determine the corresponding collection of RRBs.
(2) Each BS allocates a subset of the RRBs belonging to the slice to each serving device.
Proposed Augmented Lagrangian-Based Reinforcement Learning
In the considered scenario, the agent deploying the proposed algorithm can be the edge server in the 5G architecture. Since the edge server is deployed next to the BSs and receives channel information feedback from them, the proposed algorithm can provide the device-BS association and each device's bandwidth.
In this section, the basics of the augmented Lagrangian method are first presented. Then, the hybrid action space, the state space, and the reward function are defined. Finally, the architecture and workflow of the proposed AL-SAC algorithm are elaborated.
Preliminary of Augmented Lagrangian Method
The augmented Lagrangian method not only replaces the constrained optimization problem with an unconstrained problem but also introduces a penalty term to accelerate convergence [30]. Given an objective function f(x) to maximize over the parameter x and constraint functions c i (x) > 0, the problem can be solved through its dual formulation, where λ denotes the Lagrange multiplier vector for the constraints and µ denotes the penalty parameter. The typical solving process then alternately optimizes x and λ over the iterations.
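The corresponding display equation is missing from the extracted text; a standard form of the augmented Lagrangian for the inequality constraints, given here only as a hedged illustration of the method named above, is:

```latex
v_i(x) = \max\{0,\, -c_i(x)\}, \qquad
\mathcal{L}_{\mu}(x, \lambda) = f(x) - \sum_i \lambda_i \, v_i(x) - \frac{\mu}{2} \sum_i v_i(x)^2,
\qquad
\lambda_i \leftarrow \lambda_i + \mu \, v_i(x).
```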
The multiplier λ i is updated according to this rule at every iteration. Additionally, when the constraint is not satisfied, µ is enlarged by a scalar factor.
Definition of State, Action and Reward in RL
To solve this problem using the framework of RL, we further define in the following the corresponding state space, action space, and reward function.
State Space
In reinforcement learning, the state space represents the environment observed by an agent. Hence, in our scenario, the wireless environment, i.e., the channel condition between BSs and devices, is defined as the state. Since the locations of the devices in the factory are fixed, the channel condition is related only to the channel fading gain; that is, the state space can be expressed as the collection of fading gains {h nm }. Based on the state, we can calculate the SINR by (1), and then the transmission rate and the other quantities involved in the optimization problem.
Hybrid Action Space
Considering the discrete and continuous variables involved in problem (8), a hybrid action space is used. That is, the discrete action represents the association between devices and BSs, while the continuous action represents the bandwidth assigned by each BS.
Since a BS only allocates bandwidth resources to its associated devices, b mn > 0 only when x mn = 1. Hence, a hierarchical policy network is designed, in which the action selection is divided into two stages to significantly reduce the action space. As illustrated in Figure 2, the association action a 1 is selected at Stage-1 based on the state s, and then, at Stage-2, the bandwidth allocation action a 2 is chosen based on the set {s, a 1 }.
In the t-th time step, the action for the whole system is denoted by the pair a (t) = {a 1 (t) , a 2 (t) }, combining the association and bandwidth-allocation decisions.
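To illustrate the two-stage selection described above, the following is a minimal PyTorch-style sketch; the layer sizes, the sampling scheme, and the masking of bandwidth outputs by the association decision are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalPolicy(nn.Module):
    """Two-stage policy sketch: Stage-1 picks a BS for each device (discrete a1),
    Stage-2 allocates bandwidth fractions on the chosen links (continuous a2)."""
    def __init__(self, n_devices: int, n_bs: int, hidden: int = 128):
        super().__init__()
        self.n_devices, self.n_bs = n_devices, n_bs
        state_dim = n_devices * n_bs                      # flattened channel gains h_nm
        self.assoc_head = nn.Sequential(                  # Stage-1: association logits
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_devices * n_bs))
        self.bw_head = nn.Sequential(                     # Stage-2: conditioned on {s, a1}
            nn.Linear(state_dim + n_devices * n_bs, hidden), nn.ReLU(),
            nn.Linear(hidden, n_devices * n_bs), nn.Sigmoid())

    def forward(self, state: torch.Tensor):
        logits = self.assoc_head(state).view(self.n_devices, self.n_bs)
        a1 = torch.distributions.Categorical(logits=logits).sample()  # BS index per device
        x = F.one_hot(a1, self.n_bs).float()                          # association matrix x_mn
        a2 = self.bw_head(torch.cat([state, x.flatten()]))
        bandwidth = a2.view(self.n_devices, self.n_bs) * x            # only associated links get bandwidth
        return x, bandwidth

policy = HierarchicalPolicy(n_devices=10, n_bs=2)
x, bandwidth = policy(torch.rand(10 * 2))
```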
Reward Function
In (8), we have multiple constraints. For the association constraint, i.e., (10), and the requirement constraints of the devices, i.e., (11)-(13), we directly map the actions onto the corresponding feasible ranges. As for the bandwidth constraint (9), we consider an augmented Lagrangian method to constrain the total bandwidth, since this constraint cannot be enforced at the time of action selection.
Motivated by the augmented Lagrangian method, we introduce the penalty into the design of the reward function. In more detail, when the bandwidth constraints of all BSs are satisfied, the reward equals the original weighted overall utility in (8). Otherwise, the reward equals a penalty item analogous to the augmented Lagrangian term, where the set of BSs considered only involves those that do not satisfy the bandwidth constraint, and G m = ∑ n∈N x nm b nm represents the overall bandwidth used by the m-th BS. In other words, according to (21), when the total bandwidth constraint is not satisfied, the reward is a negative value related to the amount of exceeded bandwidth.
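Since the display equations (20)-(21) are missing, the following is a minimal sketch of this piecewise reward; the multiplier vector, the penalty coefficient, and the exact penalty shape are assumptions for illustration only.

```python
import numpy as np

def reward(utility: float, used_bw: np.ndarray, max_bw: np.ndarray,
           lam: np.ndarray, mu: float) -> float:
    """Piecewise reward: weighted utility if every BS respects its bandwidth budget,
    otherwise an augmented-Lagrangian-style penalty on the exceeded bandwidth.
    used_bw[m] stands for G_m = sum_n x_nm * b_nm; max_bw[m] stands for B_m."""
    excess = np.maximum(used_bw - max_bw, 0.0)   # per-BS bandwidth violation
    if not excess.any():
        return utility                           # all constraints satisfied
    # only violating BSs contribute; the exact form of the penalty is an assumption
    return float(-(lam * excess).sum() - 0.5 * mu * (excess ** 2).sum())

r = reward(12.3, used_bw=np.array([9.5, 10.8]), max_bw=np.array([10.0, 10.0]),
           lam=np.array([1.0, 1.0]), mu=2.0)
```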
Proposed AL-SAC Algorithm
The framework of SAC includes the actor and critic parts, which are responsible for policy improvement and policy evaluation, respectively. A policy, denoted by π, is the function that returns a feasible action for a state; that is, a ∼ π(·|s).
In SAC, the algorithm aims to find the optimal policy that maximizes the sum of the expected reward and the entropy of the policy (a hedged sketch of this objective is given after the component list below), where γ (0 < γ ≤ 1) is the discount factor and H(π(a|s)) denotes the Shannon entropy of policy π. Considering the hybrid action space in our problem, the Shannon entropy of policy π involves two parts: H(π(a|s)) = β 1 H(π(a 1 |s)) + β 2 H(π(a 2 |s)), where β 1 and β 2 are the entropy temperatures. As illustrated in Figure 3, the architecture of the proposed AL-SAC algorithm incorporates a constraint part in addition to the original actor and critic parts and the replay buffer. Specifically,
• Actor part: it deploys a policy network, denoted by π, which generates the policy for device association and bandwidth allocation;
• Critic part: it deploys a value network and a Q-value network, denoted by V and Q, estimating the value of the state and of the state-action pair, respectively;
• Constraint part: it deploys Lagrangian multiplier networks and cost networks, denoted by L and C, estimating the cost value of the constraints and adjusting the Lagrangian multipliers accordingly;
• Replay buffer: it is used, as in DRL, to store the tuples {s (t) , a (t) , r (t) , s (t+1) , G (t) m }, from which sampled tuples are used for neural network training.
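The maximum-entropy objective referred to above did not survive extraction; the standard SAC form, adapted to the weighted hybrid entropy just defined (the exact placement of β 1 and β 2 is an assumption), reads:

```latex
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t} \gamma^{t}
\Big( r\big(s^{(t)}, a^{(t)}\big)
+ \beta_1 \, \mathcal{H}\big(\pi(a_1 \mid s^{(t)})\big)
+ \beta_2 \, \mathcal{H}\big(\pi(a_2 \mid s^{(t)})\big) \Big) \right].
```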
In the following, we describe the updating process of each neural network component in these parts.
Figure 3. The architecture of the proposed AL-SAC algorithm (actor, critic and constraint parts, including the cost networks and the loss for the Q-value networks).
Value Network V
This network is utilized for estimating the state value and the target state value, i.e., V φ (s) and V φ̄ (s), where φ and φ̄ are the parameters, and φ̄ is updated by an exponential moving average of the value network weights [31]. In the learning process, this network is trained by minimizing the squared residual error in (25). The gradient in (25) can then be estimated by an unbiased estimator of the form ∇ φ V φ (s (t) ) (V φ (s (t) ) − Q ψ (s (t) , a (t) ) + log π θ (a (t) |s (t) )) and used to update the neural network.
Q-Value Network Q
To evaluate the reward function, a Q-value network is deployed to calculate the state-action value for each action, i.e., Q ψ (s, a), where ψ denotes the parameters of the neural network. This network is trained by minimizing the soft Bellman residual, in which V φ̄ (s (t+1) ) is the target state value mentioned above, used to enhance training stability. This neural network is then updated by gradient descent on this residual.
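The residual itself is missing from the extracted text; the standard soft Bellman residual used in SAC, with the target value network V φ̄ as above (given here as a hedged reconstruction rather than the paper's exact equation), is:

```latex
J_Q(\psi) = \mathbb{E}\!\left[ \tfrac{1}{2}
\Big( Q_{\psi}\big(s^{(t)}, a^{(t)}\big)
- \big( r^{(t)} + \gamma \, V_{\bar{\phi}}\big(s^{(t+1)}\big) \big) \Big)^{2} \right].
```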
Constraint Networks C
Multiple constraint networks are also deployed to estimate the expected constraint cost of the bandwidth allocation. Similar to double deep Q-learning, each constraint network has two separate Q-value networks with parameters ϕ m and ϕ̄ m . They generate the continuous action-state value for the m-th BS, which is utilized for estimating the cost of the allocated bandwidth; the continuous bandwidth-allocation action serves as the action input of these networks. Each constraint network is trained by minimizing the corresponding loss, in which the target action-state value is used for training stability, and ϕ̄ m is periodically updated by copying ϕ m .
Lagrangian Multiplier Network L
To update the Lagrange multiplier λ in (15), we also deploy Lagrangian multiplier networks. As mentioned in Section 3.1, λ can be learned by minimizing the corresponding objective function according to whether the constraints are satisfied.
Policy Network π
This network searches for the optimal policy according to the estimated values generated by the networks in the critic and constraint parts. It generates an action based on policy π for each state, i.e., π θ (·|s), where θ is the parameter of the policy network. As mentioned earlier, we consider a two-stage action selection: the discrete action is selected first, i.e., the BS connection state is determined; then, with the corresponding channel state between the device and the BS, the bandwidth allocation is determined.
Then, π can be updated by maximizing the objective in (34). The gradient of (34) can be approximated via the reparameterization trick, where a (t) = f θ (ε (t) ; s (t) ), ε (t) is an input noise vector sampled from a Gaussian distribution, and π θ is defined implicitly in terms of f θ [31].
The workflow of the proposed AL-SAC algorithm is summarized in Algorithm 1. Specifically, lines 2-6 cover the experience collection from the environment with the current policy: observe the environment state s (t) , select an action with the current policy, calculate the total bandwidth G (t) m allocated in each BS, and calculate the reward r (t) depending on G (t) m . The update of the networks is presented in lines 8-18: if t > K, a batch of samples (s (t) , a (t) , r (t) , s (t+1) ) is randomly selected from the replay memory buffer D and used to update the networks.
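As an illustration of this workflow, the following is a minimal skeleton of the training loop implied by Algorithm 1; the environment interface, the policy interface, and the update callables are placeholders for the components and losses described above, not the authors' implementation.

```python
import random
from collections import deque

def train_al_sac(env, policy, updates, episodes=5000, batch_size=256, buffer_size=100_000):
    """Minimal sketch of the AL-SAC loop in Algorithm 1.
    env.observe()/env.step(), policy.act(), and the callables in `updates`
    (value, Q, cost, Lagrange-multiplier and policy updates) are assumed interfaces."""
    replay = deque(maxlen=buffer_size)
    for t in range(episodes):
        state = env.observe()                           # channel gains h_nm
        action = policy.act(state)                      # two-stage action (a1, a2)
        reward, next_state, used_bw = env.step(action)  # reward depends on G_m
        replay.append((state, action, reward, next_state, used_bw))
        if len(replay) > batch_size:                    # corresponds to "if t > K"
            batch = random.sample(replay, batch_size)
            for update in updates:                      # V, Q, C, L, and policy updates
                update(batch)
```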
Simulation
In this section, we first test the performance of the proposed AL-SAC algorithm in various scenarios and verify that the required constraints are satisfied. We then compare it with other benchmark algorithms, including the original SAC algorithm and the DDPG algorithm with a penalty item for the constraints. Finally, the network service constraints for devices under different weight proportions are shown.
Parameter Setting
We consider a wireless scenario with multiple BSs and eMBB/URLLC/mMTC devices, where the locations of the devices are randomly distributed. The channel fading between a device and a BS follows a Rayleigh distribution, varying with time. Combining the distance-based path loss and the fading, the channel condition in an episode can be calculated.
In the simulation, the number of BSs is M = 2, and the numbers of devices in the eMBB, URLLC, and mMTC slices are (3, 3, 4) or (5, 5, 5). Three different weight designs are considered, that is, (w eM , w uR , w mM ) = (1/3, 1/3, 1/3), (2/3, 1/6, 1/6), or (1/10, 3/5, 3/10). Furthermore, the transmission power P m , the path-loss exponent α and the noise power σ 2 are set as 2 W, 3.09 and 10 −9 W, respectively [32]. For mMTC devices, B 0 = 0.18 MHz; for eMBB devices, R 0 = 4 Mbps; for URLLC devices, T 0 = 20 ms. The available bandwidth for each BS is set as B m = 10, 12.5, or 15 MHz, the neural networks are trained with the Adam optimizer, and the batch size is set as K = 256. All these simulation parameters are also listed in Table 1 for clarity.
In Figure 4, we plot the average reward achieved by the proposed AL-SAC algorithm when the maximum available bandwidth per BS is B m = 10, 12.5, or 15 MHz with the number of devices N = 10, respectively, as well as for B = 10 MHz, N = 15. Firstly, it is observed that the proposed AL-SAC algorithm converges in all cases, although a slight fluctuation exists when B m = 15 MHz, N = 10. Secondly, we can see that with the growth of the available bandwidth, the curve of the achieved reward becomes a little less stable. The reason is that the bandwidth-allocation options for the multiple devices, i.e., the action space, also increase. Thirdly, with the growth of the available bandwidth resources, the reward achieved by the proposed algorithm increases noticeably, which implies increasing system utility as more radio resources become available. Lastly, compared with B m = 15 MHz, N = 10, the proposed AL-SAC algorithm with B m = 15 MHz, N = 15 reaches a similar reward but converges faster, because when more devices share the same limited bandwidth resources, the weighted overall utility is not higher; moreover, the action space for each device shrinks as the number of devices increases, resulting in more stable performance.
Meanwhile, to verify that the proposed algorithm provides an effective solution to the constrained RL problem, Figure 5 shows the bandwidth constraint in the same scenarios as Figure 4. It can clearly be seen that in all cases the proposed AL-SAC meets the bandwidth requirements after 100 episodes. This also shows that the proposed algorithm provides a feasible and effective solution to the constrained optimization problem considered.
In Figures 6 and 7, we compare the performance of the proposed AL-SAC algorithm with two benchmark algorithms: the DDPG algorithm with the penalty included in the reward function, termed Penalty DDPG [33], and the original soft actor-critic algorithm without constraint handling, termed SAC [31]. As observed from Figure 6, the SAC algorithm achieves the highest reward, much larger than those achieved by AL-SAC and Penalty DDPG. However, from Figure 7, we can see that the overall bandwidth it allocates exceeds the maximum bandwidth constraint. Hence, it can be concluded that the high reward achieved by the SAC algorithm results from its inability to handle the constraint on the action sum, so more radio resources are used than are available, i.e., the BSs cannot satisfy the constraints in (9).
More importantly, comparing the proposed AL-SAC and Penalty DDPG, Figure 7 shows that both satisfy the maximum bandwidth constraint, while the proposed AL-SAC algorithm achieves a significantly larger reward and thus outperforms the Penalty DDPG algorithm. The reason is that, although Penalty DDPG can meet the bandwidth constraints by relying on the penalty item in the reward, the proposed AL-SAC handles much smaller discrete and continuous action spaces thanks to the two-stage design, and the introduction of the constraint part makes the optimization process more effective. Specifically, the proposed AL-SAC algorithm achieves an improvement of around 42.1% in reward compared to the Penalty DDPG algorithm after 5000 episodes. Moreover, from Figure 7, the overall bandwidth allocated by the proposed AL-SAC algorithm is slightly larger than that of Penalty DDPG, while both stay under the maximum bandwidth constraint of B m = 10 MHz. This shows that the device association and bandwidth allocation policy trained by the proposed AL-SAC algorithm is more efficient.
Results and Analysis
Furthermore, in Figure 8, we verify the performance of each device in the different network slices. We consider the scenario with the maximum bandwidth B m = 10 MHz; the number of devices N = 15; the minimum rate requirement for eMBB devices R 0 = 4 Mbps; the delay requirement for URLLC devices T 0 = 20 ms; and the minimum bandwidth requirement for mMTC devices B 0 = 0.18 MHz. It can be seen from the figure that the proposed AL-SAC algorithm makes all of the devices in the three network slices meet their corresponding constraints, while the Penalty DDPG algorithm cannot. As shown in Figure 8a,b, the device eMBB-3 cannot achieve the required rate, and the devices URLLC-3 and URLLC-5 exceed the maximum delay in (5). Moreover, comparing the transmission rate, delay and bandwidth constraint in the three sub-figures, it can be seen that the network performance achieved by the devices within one slice is more even when the proposed AL-SAC algorithm is adopted. This means that a better fairness performance is obtained by our proposed AL-SAC algorithm. To see how the results are affected by different weights, in Figure 9 we also plot the results achieved by the proposed AL-SAC algorithm for the weights (w eM , w uR , w mM ) = (1/3, 1/3, 1/3), (2/3, 1/6, 1/6), and (1/10, 3/5, 3/10) with the maximum bandwidth B m = 15 MHz and N = 10, respectively. It can be seen that: (1) comparing the blue and orange bars in Figure 9a, the total bandwidth allocated to the eMBB slice grows when the weight w eM increases; (2) comparing the blue and yellow bars in Figure 9b, the overall delay in the URLLC slice is reduced when the weight w uR increases; (3) comparing the yellow bars in Figure 9a,c, the overall bandwidth allocated to the devices in the mMTC slice grows proportionally when the weight w mM increases. This is because the agent pays more attention to the slice with the higher weight.
Conclusions
In this paper, we investigated the network slicing problem in IIoT, where the device association and the bandwidth allocation for devices in different slices are jointly optimized. By formulating it as a constrained mixed integer nonlinear programming problem with continuous and discrete variables, an augmented Lagrangian-based SAC algorithm is proposed to solve it using DRL. Aiming to maximize the total weighted utility under limited bandwidth resources, cost neural networks and Lagrangian multiplier networks are introduced to update the Lagrangian multipliers, and a penalty term is also introduced into the reward function. Moreover, a novel two-stage action selection network is presented, based on DRL, to handle the hybrid actions and reduce the action space simultaneously. Our results verify that the proposed AL-SAC algorithm can effectively meet the constraints and achieve better performance than other benchmark algorithms in terms of average reward and fairness.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,547.8 | 2022-10-19T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Dietary Lipid Effects on Gut Microbiota of First Feeding Atlantic Salmon (Salmo salar L.)
Decline in fish oil and fish meal availability has forced the aquaculture sector to investigate alternative and sustainable aquafeed ingredients. Although several studies have evaluated the effect of fish oil replacement in aquaculture fish species, there is a knowledge gap on the effects of alternative dietary lipid sources on the gut microbiota in early life stages of Salmo salar. The present study evaluated the influence of dietary administration of two different lipid sources (fish oil and vegetable oil) on the intestinal microbiota of first feeding Atlantic salmon (S. salar) up to 93 days post first feeding (dpff). The two diets used in this study, FD (fish oil diet) and VD (blend of rapeseed, linseed and palm oils diet), were formulated to cover the fish nutritional requirements. Apart from the lipid source, the rest of the feed components were identical in the two diets. Hindgut samples were collected at 0, 35, 65, and 93 dpff. Moreover, fertilized eggs, yolk sac larvae, rearing water and feed were also collected in order to assess a possible contribution of their microbiota to the colonization and bacterial succession of the fish intestines. To analyze the bacterial communities, amplicon sequencing was used targeting the V3–V4 region of the 16S rRNA gene. The findings indicate that, whether fish were fed the fish oil- or the vegetable oil-based diet, fish growth variables (mean wet weight and total length) did not differ significantly during the experiment (p > 0.05). Likewise, no significant differences were found between the two dietary groups regarding their gut bacterial composition after analysis of the 16S rRNA sequencing data. Instead, the gut microbiota changed with age, and each stage was characterized by different dominant bacteria. These operational taxonomic units (OTUs) were related to species that provide different functions and have been isolated from a variety of environments. The results also show little OTU overlap between the host and rearing environment microbiota. Overall, this study revealed the occurrence of a core microbiota in early life of Atlantic salmon independent of the feed-contained oil origin.
INTRODUCTION
Fishmeal and fish oil have been the main ingredients in diets for farmed carnivorous fish species, providing the fed fish with the necessary proteins and lipids for high growth performance and resulting in a nutritionally rich final product (International Fishmeal and Oil Manufacturers Association [IFOMA], 2001;Turchini et al., 2010). Due to the declining availability of fishmeal and fish oil, their contents in feed are reduced (Hardy, 2010) and substituted by a variety of alternative feed ingredients. As changes in fish diet ingredients can alter the gut microbiota of fish species, it is important to evaluate the impact of these new diets with lower fish-meal and -oil contents on the composition of the gut microbial communities for reared fish species (for a review see Ringø et al., 2016).
In Atlantic salmon (S. salar), a carnivorous fish with significant economic value in European aquaculture (FAO, 2004), the effect of alternative aquafeed ingredients on the gut microbiota has been evaluated previously and, in some cases, changes associated with intestinal disorders and slower growth performance were related to the fishmeal diets (e.g., Green et al., 2013; Navarrete et al., 2013; Schmidt et al., 2016; Gajardo et al., 2017; Booman et al., 2018; Egerton et al., 2020). These studies, however, have focused mainly on alternative protein sources and on juvenile and adult stages.
Although feed is considered the main factor affecting the gut bacterial communities in fish species, data from previous studies have also shown variations in gut microbial communities across developmental stages, which seem to be affected not only by the provided feed but also by the microbial communities of the rearing environment (Bakke et al., 2013, 2015; Stephens et al., 2016; Dehler et al., 2017; Egerton et al., 2018). For example, recent work by Minich et al. (2020) recognizes the strong association between the built environment, i.e., tank biofilm and water from the hatchery installation, and the Atlantic salmon mucosal microbiota. In a different salmonid species (rainbow trout), gut microbiota was detectable before first feeding commenced, potentially due to contact with the surrounding water and yolk sac digestion, indicating that gut microbiota establishment initiates around first feeding and that the diet type affects the bacterial composition (Ingerslev et al., 2014a,b).
Moreover, it has been reported that fish egg fragments are consumed by the newly hatched larvae, and their microbiome can affect gut microbiota colonization in fish species (Olafsen, 1984; Beveridge et al., 1991; Nikouli et al., 2019). The significance of the mouth-opening stage for larval microbiota manipulation has also been recognized in shrimp larvae under aquaculture conditions by Wang et al. (2020). In addition, evidence suggests that, since early life stages are more vulnerable to environmental/climate changes, it is probably more crucial to study microbiome shaping at these stages (Lowe et al., 2021). On the other hand, studies have shown that host development has a greater effect than the hatching environment on gut microbiota colonization and succession (Califano et al., 2017; Nikouli et al., 2019; Xiao et al., 2021).
Apart from a few studies, which have investigated the gut microbial communities in early life stages of Atlantic salmon (Llewellyn et al., 2016;Dehler et al., 2017;Lokesh et al., 2019), there is a knowledge gap on the effects of a different dietary lipid source on the gut microbiota in early life stages of this fish species, as only Clarkson et al. (2017) have partially investigated the impact of fish oil replacement by vegetable oils during a dietary experiment in diploid and triploid populations of Atlantic salmon. The objective of the present study was to evaluate the influence of total replacement of fish oil with a blend of terrestrial alternative oils (rapeseed, linseed and palm oils) on the intestinal microbiota of first feeding Atlantic salmon. We also characterized the bacterial communities of the rearing environment to determine their contribution in the early colonization and the succession of the fish intestines.
Experimental Design and Sampling
The study was carried out within the Norwegian animal welfare act guidelines, in accordance with EU regulation (EC Directive 2010/63/EU), approved by the Animal Ethics and Welfare Committee of the Norwegian University of Science and Technology (case number 16/10070). The experiment was conducted at the Ervik hatchery (Frøya, Norway) as described previously in Jin et al. (2019). Briefly, a fast-growing Atlantic salmon aquaculture strain was cultivated from fertilized eggs until 93 days post first feeding (dpff). The two diets used in this study, FD (fish oil diet) and VD (blend of rapeseed, linseed and palm oils diet), were formulated to cover the fish nutritional requirements. Apart from the lipid source, the rest of the feed components were identical in the two diets (see Supplementary Table 1).
Each dietary treatment was tested in duplicate groups of 200 Atlantic salmon individuals (0.23 ± 0.03 g/fish). On sampling days (0, 35, 65 and 93 dpff), 10 fish from each tank were randomly collected and sacrificed by immersion in 40 mg/L Benzocaine (BENZOAK VET, ACD Pharmaceuticals AS, Oslo, Norway). Furthermore, duplicate samples of rearing water (100 ml/tank) were collected and filtered through 0.2 µm membrane filters (GTTP, Millipore, United States) using a low-pressure (<1,500 mmHg) vacuum apparatus. For gut microbiota analysis, hindguts were removed by aseptic dissection and rinsed with ultra-pure water. Moreover, 10 fertilized eggs (EG), 10 yolk sac larvae (YS), and 0.25 g of the provided feeds were sampled in order to assess the contribution of their microbiota to the colonization of the Atlantic salmon gut.
DNA Extraction and Sequencing
DNA was isolated from Atlantic salmon (eggs/yolk sac larvae/hindguts) and environmental (water/diets) samples using the QIAGEN QIAamp DNA Mini Kit (Qiagen, Hilden, Germany) following the manufacturer's protocol "DNA Purification from Tissues." Bacterial communities were characterized by 16S rRNA amplicon sequencing. DNA was extracted from all samples individually, and the extracts were pooled prior to the 16S rRNA analysis as follows: (a) DNA extracts from 5 individual fish samples (eggs, yolk sac larvae and hindguts) were pooled, resulting in two pooled samples from each time point/fish tank (Supplementary Table 2), and (b) the DNA from the rearing water samples was pooled, resulting in 1 water sample per replicate tank ("STW" - initial stock tank, "FW" - rearing water from the fish oil (FD) group and "VW" - rearing water from the vegetable oil (VD) treatment).
PCR amplification and sequencing were performed at the MRDNA Ltd. facilities (Shallowater, TX, United States; www.mrdnalab.com) on a MiSeq using paired-end reads (2 × 300 bp) following the manufacturer's guidelines. A total of 37 samples (representing 30 pooled fish samples, 5 pooled water samples and 2 feed samples) was used in the final amplicon library. The 16S rRNA gene V3-V4 variable region PCR primers S-D-Bact-0341-b-S-17 and S-D-Bact-0785-a-A-21 (Klindworth et al., 2013), with barcodes on the forward primer (Supplementary Table 3), were used in a 30-cycle PCR using the HotStarTaq Plus Master Mix Kit (Qiagen, United States) under the following conditions: 94°C for 3 min, followed by 30 cycles of 94°C for 30 s, 53°C for 40 s, and 72°C for 1 min, after which a final elongation step at 72°C for 5 min was performed. After amplification, PCR products were checked on a 2% agarose gel to determine the success of amplification and the relative intensity of the bands. After that, the samples were pooled together in equal proportions based on their molecular weight and DNA concentrations. Pooled samples were purified using calibrated AMPure XP beads. The pooled and purified PCR product was then used to prepare the Illumina DNA library.
Data Analysis
Raw sequencing data were processed with the MOTHUR software (version 1.40.5) (Schloss et al., 2009, 2011), and the operational taxonomic units (OTUs) were classified with the SILVA database release 132 (Yilmaz et al., 2014) following the methodology described in Nikouli et al. (2018). Identification of the closest relatives of the most abundant OTUs was performed with Nucleotide BLAST (https://blast.ncbi.nlm.nih.gov/Blast.cgi). Raw sequence data from this study have been submitted to the Sequence Read Archive (https://www.ncbi.nlm.nih.gov/sra/) under BioProject accession number PRJNA520982. Statistical analyses and graphical illustrations were performed in the PAST (PAleontological STatistics) software (Hammer et al., 2001) and in the RStudio platform Version 1.1.419 (RStudio Team, 2020), with R version 3.4.3 and the enveomics.R package, Version 1.2.0 (Rodriguez-R and Konstantinidis, 2016).
Fish Growth Performance
The growth performance of the fish was evaluated throughout the experiment (see Supplementary Table 4 and Supplementary Figure 1), and at none of the sampling points did the mean wet weight or total length differ significantly across replicate tanks or between dietary treatments (FD and VD) (p > 0.05; Supplementary Figure 1). The initial (D0) mean weight was 0.23 ± 0.03 g (±SD). After 93 days (D93) the final mean weight was 4.58 ± 1.74 g for group FD and 4.54 ± 1.78 g for group VD. Regarding mean total length, the initial (D0) value was 29.9 ± 1.6 cm, which increased to 76.0 ± 8.9 cm and 73.8 ± 9.2 cm at D93 for groups FD and VD, respectively.
Bacterial Diversity
The analysis of the 16S rRNA sequencing data revealed a total of 4,548 unique OTUs, with the rarefaction curves (Supplementary Figure 2) and the OTU richness coverage based on the Chao1 index (Supplementary Table 5) indicating satisfactory sequencing depth. Diversity was considerably higher for rearing water (STW, FW, VW) than for gut and diet samples, both in terms of OTU richness (Table 1) and evenness (Supplementary Table 5).
S. salar Microbiota
Comparing the Atlantic salmon microbiota between the different life stages, fertilized eggs (EG) had the highest observed and estimated (Chao1) OTU richness (172 ± 100 and 222 ± 114, respectively). At the yolk sac stage (YS), the OTU richness decreased to 87 ± 0.7 and increased again at first feeding (D0). After that, OTU richness remained at the same level until D93, when it decreased (Table 1). Proteobacteria was the dominant bacterial phylum in the samples, mainly due to γ- and β-Proteobacteria (Supplementary Figure 4). β-Proteobacteria was the dominant subphylum in the pre-feeding stages (EG, YS, D0), with representatives mainly from the Burkholderiaceae and Chitinibacteraceae families (Supplementary Figure 5). However, in fertilized eggs (EG), OTUs representing β-Proteobacteriales were classified only at class level (44.1% of the total reads). γ-Proteobacteria dominated the period with active feeding (D35-D93) in both dietary treatments, with Pseudomonadaceae, Xanthomonadaceae, Vibrionaceae, Enterobacteriaceae, Moraxellaceae, and Aeromonadaceae as the most abundant families. However, their relative abundances differed between the two dietary treatments (Supplementary Figure 6). Actinobacteria, the dominant bacterial phylum at the late stages (D35 and D65) in the vegetable oil dietary group (VD), was due to the high relative abundance of mainly Propionibacteriales, Corynebacteriales, and Micrococcales representatives. The presence of Firmicutes and Bacteroidetes was due to the classes Bacilli and Bacteroidia.
Microbial Communities in Diets and Rearing Water
The bacterial communities in the feed samples consisted almost entirely of Firmicutes (relative abundance of 84.2 and 82.1% in FD and VD, respectively; Figure 1). The Firmicutes were affiliated to the Lactobacillaceae (38.5 and 36.6% in FD and VD, respectively) and Leuconostocaceae (37.9 and 38.8% in FD and VD, respectively) families. The rearing water samples (VW, FW, STW) contained mainly Proteobacteria, Actinobacteria and Bacteroidetes species, with Burkholderiaceae (β-Proteobacteria), Sporichthyaceae (Actinobacteria) and Chitinophagaceae (Bacteroidetes) as the most abundant families (Figure 1). In contrast to the experimental diets, Firmicutes in the water samples were detected at a relative abundance of ≤1%.
Similarities Between Bacterial Communities
Statistical analysis revealed no significant differences (Tukey's test, p > 0.05, Supplementary Table 6) in the bacterial community composition of the Atlantic salmon samples between the pre-feeding stages (EG, YS, D0). However, EG and D0 samples differed significantly from those taken during the feeding period (D35-D93) in both dietary treatments, with stage D35 in the VD group as the only exception. YS bacterial communities differed significantly (p < 0.05) from the bacterial communities only at D93 in both dietary groups (FD and VD). The gut microbiota of the host did not show significant differences between the two dietary groups at the different stages (p > 0.05), again with stage D35 in the VD group as the only exception (Supplementary Table 6). Further comparison of the bacterial community composition of the Atlantic salmon hindguts, based on a Bray-Curtis distance matrix (Figure 2), showed a clear separation between the bacterial communities of the gut and those of the rearing environment (water and diets). Moreover, the bacterial communities of the host were more similar with respect to life stage than to diet treatment (Figure 2 and Supplementary Figure 7), which is also indicated by the similarity percentages analysis (SIMPER) based on the Bray-Curtis distance. According to this analysis, the average dissimilarity among groups of the same life stage was 76.0%, whereas the average dissimilarity within groups of the same dietary treatment was 78.5% (FD) and 83.6% (VD).
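To make the distance underlying these comparisons concrete, the following is a minimal sketch of the Bray-Curtis dissimilarity on OTU count vectors; the toy counts are illustrative only and are not taken from the study, and the sketch is not the authors' analysis pipeline.

```python
import numpy as np

def bray_curtis(u: np.ndarray, v: np.ndarray) -> float:
    """Bray-Curtis dissimilarity between two OTU abundance vectors:
    sum(|u_i - v_i|) / sum(u_i + v_i); 0 = identical, 1 = no shared abundance."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.abs(u - v).sum() / (u + v).sum())

# Toy OTU count vectors for two hypothetical samples (not study data).
gut_sample = np.array([120, 30, 0, 5])
water_sample = np.array([10, 80, 200, 5])
print(bray_curtis(gut_sample, water_sample))  # 0.8: fairly dissimilar communities
```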
Common and Unique OTUs
Overall, only 2.3% of the OTUs were found in all sample types (rearing water, diets, and pre- and post-first-feeding hindguts), whereas 75.4% of the OTUs occurred only in water samples (Figure 3). Of the 1,004 OTUs detected in total in the Atlantic salmon samples, 423 OTUs (9.3% of all OTUs) were unique to that sample type. The majority of these (343 OTUs) were unique to the host at the active feeding stages, whereas 13 OTUs were shared among all samples independent of life stage or diet treatment.
DISCUSSION
In the present study, we evaluated the influence of dietary administration of two different lipid sources (fish oil and vegetable oil) on the gut bacterial communities of first feeding Atlantic salmon. Moreover, we characterized the bacterial communities from the rearing environment (rearing water and feeds) and the epibiota of fertilized eggs and yolk sac larvae to determine their contribution to the bacterial colonization and succession of the gut. Previous studies suggest that the bacterial communities of the rearing environment, mainly from the rearing water and the feed, are important sources for community assembly of the intestinal microbiota of fish (Hansen and Olafsen, 1999; Nayak, 2010; McDonald et al., 2012; Scott et al., 2013; Bolnick et al., 2014; Eichmiller et al., 2016; Kashinskaya et al., 2018). For example, Schmidt et al. (2016) reported a significant effect on the intestinal microbial communities of post-smolt Atlantic salmon following replacement of dietary fishmeal with plant ingredients. However, the results of the present study suggest that substitution of fish oil by vegetable oils did not significantly affect the composition of the intestinal microbial communities in the same host species.
Furthermore, the results of the present study indicate little overlap between the bacterial communities of the host and those of the rearing environment (water and feed), whereas life stage appeared to be the main factor affecting the structure of the gut microbiota. These results are in agreement with previous findings from Llewellyn et al. (2016), who studied 96 wild-caught individuals of Atlantic salmon of different ages and habitats and observed grouping of their intestinal bacterial communities based on the lifecycle stage. In addition, Lokesh et al. (2019) reported stage-specific microbial enrichment in the intestinal mucosa of the same host species (samples from embryonic stages up to 80 weeks post hatch). Similar stage-specific signatures have also been reported across development in Sparus aurata (Nikouli et al., 2019), Danio rerio (Stephens et al., 2016), and Gadus morhua (Bakke et al., 2015), further supporting that life stage seems to be the primary force shaping the gut microbiota at juvenile stages of fish. The change in microbiota with life stage can be due to both host-microbe interactions (e.g., development of morphology and the immune system) and microbe-microbe interactions (mutualism, commensalism and competition). The significance of these factors is, however, still not known.
Proteobacteria, Firmicutes, Actinobacteria, and Bacteroidetes were the dominant bacterial phyla detected in the host samples for both dietary treatments in our study. These bacterial phyla seem to characterize the bacterial communities of Atlantic salmon individuals at the freshwater life cycle stages (Llewellyn et al., 2016). They are also commonly found in the gut bacterial communities of both saltwater and freshwater fish species (Hansen and Olafsen, 1999; Nayak, 2010; McDonald et al., 2012; Navarrete et al., 2013; Bolnick et al., 2014; Kormas et al., 2014; Llewellyn et al., 2016; Stephens et al., 2016; Dehler et al., 2017; Tarnecki et al., 2017; Booman et al., 2018; Lokesh et al., 2019; Nikouli et al., 2018, 2019). Despite the fact that the two experimental feeds contained almost entirely Firmicutes, the increase in the relative abundance of Firmicutes in samples after the onset of feeding was not solely due to feed-specific OTUs. It should also be noted that 26.4% of the bacterial representatives detected on fertilized eggs (EG) were not detected in the water of the incubation tank (STW). This supports the view that the microbial communities of fish eggs may be vertically transmitted from their parents or horizontally from their breeding tank (Hansen and Olafsen, 1989; Nikouli et al., 2019).
In agreement with previous studies (Schmidt et al., 2016; Lokesh et al., 2019; Nikouli et al., 2019), the observed species richness in water samples was always an order of magnitude higher than the richness of the host samples. Bacterial communities in rearing water did not show major shifts during the experiment. OTU0001 dominated at all time points, with the bacterial species Polynucleobacter necessarius as its closest relative. This species is commonly found in freshwater samples and can contribute to the catabolism of urea and the reduction of nitrate (Boscaro et al., 2013). The dominant bacterial species in the Atlantic salmon samples are related to bacterial species from various habitats. The dominant OTU on fertilized eggs (OTU0011) was classified within the Methylotenera genus (β-Proteobacteria) and has previously been detected in fertilized eggs of the same host species by Lokesh et al. (2019). This genus consists of methylotrophic species that use methylamine as their sole carbon, energy and nitrogen source (Kalyuzhnaya et al., 2006) and seems to be associated with RAS systems (Minich et al., 2020). The dominant OTU at the YS stage (OTU0013) seems to be related to Delftia acidovorans (β-Proteobacteria). Species of the genus Delftia are obligately aerobic, organotrophic and nonfermentative organisms (Wen et al., 1999). They have previously been detected in the gut of healthy individuals of Epinephelus coioides (Sun et al., 2009), Oncorhynchus mykiss (Navarrete et al., 2012), Sparus aurata (Kormas et al., 2014; Nikouli et al., 2018) and S. salar (Gajardo et al., 2016).
Just before the onset of feeding (D0), the dominant OTU (OTU0009) showed similarities with the species Iodobacter fluviatilis of the Chitinibacteraceae (β-Proteobacteria) family. Species of this genus have been recorded mainly in sediment and water samples (Ryall and Moss, 1975; Wynn-Williams, 1983; Logan, 1989). Their presence on fish skin (Oncorhynchus mykiss and Salmo trutta) has been associated with skin lesions (Carbajal-González et al., 2011). However, they have previously been detected in high relative abundance in healthy Coreius guichenoti individuals (Li et al., 2016), whereas the present study reports the presence of this bacterial species in the Atlantic salmon gut microbiota for the first time.
After first feeding, although no statistically significant differences were found between the bacterial communities in the hindgut samples of the different life stages, each stage was characterized by different dominant OTUs. Moreover, the gut bacterial communities also differed between dietary treatments regarding their dominant bacterial species (OTUs). Chitinibacteraceae, the dominant bacterial family at D0 (with a relative abundance of 32.3%), was detected at a ∼50× lower relative abundance (≤0.6%) in the rest of the samples. At D35 and D65 in the FD treatment, the dominant OTUs (OTU0017 and OTU0070, classified as Pseudomonas viridiflava and Janthinobacterium agaricidamnosum, respectively) are described as plant (Alivizatos, 1986; Alimi et al., 2011; Taylor et al., 2011; Sarris et al., 2012) and mushroom pathogens (Lincoln et al., 1999; Graupner et al., 2015). According to recent findings, Janthinobacterium lividum (β-Proteobacteria) exhibits antimicrobial activity against multidrug-resistant bacteria of clinical and environmental origin, such as Enterococci and Enterobacteriaceae (Baricz et al., 2018). Its presence in the gastrointestinal bacterial communities of Atlantic salmon may therefore indicate a probiotic activity.
At D35 and D65, samples from the VD dietary treatment were dominated by OTU0005, with Cloacibacterium normanense (Bacteroidetes) as its closest relative. This OTU was also dominant at D93 in the FD treatment. According to the literature, this species is frequently present in sewage treatment plants (Benedict and Carlson, 1971; Güde, 1980), where it contributes to the decomposition of complex organic compounds (Bernardet et al., 2002). Similar processes may take place in the intestinal system of Atlantic salmon at D35V, D65V, and D93F. The dominant OTU at D93 (OTU0004), also dominant in both provided feeds (FD, VD), was affiliated with Weissella cibaria (Firmicutes). This bacterial species belongs to the lactic acid bacteria and has antimicrobial activity in the intestinal system of other fish species (Mouriño et al., 2016). Other Weissella spp. have been found in the gut of Oncorhynchus mykiss (Lyons et al., 2017; Mortezaei et al., 2020) and Atlantic salmon (Reveco et al., 2014; Godoy et al., 2015; Lokesh et al., 2019). It is worth noting that, besides OTU0004, OTU0013 and OTU0017 are also associated with probiotic bacterial species and were detected at all time points studied here, from EG to D93, independently of the dietary treatment. This observation suggests a coevolutionary relationship of these bacterial species with the host studied here, and a possible specialized function in the host's intestinal system.
CONCLUSION
The present study evaluated the effect of total fish oil replacement by a blend of terrestrial vegetable oils (rapeseed, linseed and palm oils) in the feed on the colonization and bacterial succession during the first feeding of Atlantic salmon, up to 93 days post first feeding (dpff). We demonstrated that feeding on either fish oil or terrestrial vegetable oil diets did not result in significant differences in the gut microbiota or in the growth performance parameters (wet weight and total length). On the contrary, the composition of the gut microbiota changed with age, and each stage was characterized by different dominant bacteria. These OTUs are related to species that may have probiotic activity for the host. Finally, this study revealed the occurrence of a core microbiota independent of the studied life stages and diet. These findings indicate that total fish oil replacement by terrestrial vegetable oils is feasible and can lead to lower-cost formulated feeds. Future work should aim at understanding the functional role of the detected core community, which could lead to further optimization of feeds, growth performance and host health.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: https://www.ncbi.nlm.nih.gov/, PRJNA520982.
ETHICS STATEMENT
The animal study was reviewed and approved by the Animal Ethics and Welfare Committee of the Norwegian University of Science and Technology (case no. 16/10070). The study was carried out within the Norwegian Animal Welfare Act guidelines, in accordance with EU regulation (EC Directive 2010/63/EU).
AUTHOR CONTRIBUTIONS
EN, KK, YO, IB, and OV: methodology. EN: formal analysis. EN and KK: data curation and writing-original draft preparation. EN, KK, YJ, YO, IB, and OV: writing-review and editing. OV: supervision. All authors contributed to the article and approved the submitted version. | 5,796 | 2021-05-20T00:00:00.000 | [
"Environmental Science",
"Biology",
"Agricultural and Food Sciences"
] |
Nematicity and magnetism in FeSe and other families of Fe-based superconductors
Nematicity and magnetism are two key features in Fe-based superconductors, and their interplay is one of the most important unsolved problems. In FeSe, the magnetic order is absent below the structural transition temperature $T_{str}=90$K, in stark contrast to other families, in which the magnetism emerges slightly below $T_{str}$. To understand this striking material dependence, we investigate the spin-fluctuation-mediated orbital order ($n_{xz}\neq n_{yz}$) by focusing on the orbital-spin interplay driven by the strong-coupling effect, called the vertex correction. This orbital-spin interplay is very strong in FeSe because of the small ratio between the Hund's and Coulomb interactions ($\bar{J}/\bar{U}$) and the large $d_{xz},d_{yz}$-orbital weight at the Fermi level. For this reason, in the FeSe model, the orbital order is established even though the spin fluctuations are very weak, so the magnetism is absent below $T_{str}$. In contrast, in the LaFeAsO model, the magnetic order appears just below $T_{str}$ both experimentally and theoretically. Thus, the orbital-spin interplay due to the vertex correction is the key ingredient in understanding the rich phase diagram with nematicity and magnetism in Fe-based superconductors in a unified way.
I. INTRODUCTION
In Fe-based superconductors, the origin of the electronic nematic state and its relation to the magnetism have been a central unsolved problem. Recently, the nonmagnetic nematic state in FeSe has attracted increasing attention as a key to solve the origin of the nematicity. FeSe undergoes structural and superconducting transitions at $T_{str}$ = 90 K and $T_c$ = 9 K, respectively, whereas the magnetic transition is absent down to 0 K [1]. The strength of the low-energy antiferromagnetic (AFM) fluctuations is very weak above $T_{str}$, while it starts to increase below $T_{str}$ [2][3][4][5][6][7]. In stark contrast, the magnetic transition occurs at $T_{mag}$ slightly below $T_{str}$ in other undoped Fe-based superconductors. Since the relation $T_{str} > T_{mag}$ cannot be explained by the random-phase approximation (RPA), a microscopic theory beyond mean-field-level approximations is required.
Up to now, two promising triggers for the structural transition have been discussed intensively. In the spin-nematic scenario [8][9][10][11][12], the trigger is the spin-nematic order. This spin-fluctuation-induced spin-quadrupole order could emerge above $T_{mag}$ in highly magnetically frustrated systems. In the orbital order scenario [13][14][15][16], the trigger is the ferro-orbital (FO) order $n_{xz} \neq n_{yz}$. Above $T_{str}$, strong orbital or spin-nematic fluctuations are observed by measurements of the shear modulus $C_{66}$ [2,17,18], Raman spectroscopy [19][20][21][22], and in-plane resistivity anisotropy [23,24]. The nematic orbital fluctuations originate from the strong orbital-spin mode-coupling due to the strong-coupling effect, which is described by the Aslamazov-Larkin vertex correction (AL-VC). The electronic nematic state studied in single-orbital models [25] is more easily realized in multiorbital systems thanks to the AL-VC mechanism [16]. Except for the presence or absence of magnetism below $T_{str}$, FeSe and other Fe-based superconductors show common electronic properties. Below $T_{str}$, in both FeSe and BaFe$_2$As$_2$, a large orbital polarization $\Delta E \equiv E_{yz} - E_{xz} \sim 50$ meV [26][27][28][29][30][31][32][33] is observed. Such a large $\Delta E$ originates from the electron-electron correlation since the lattice distortion $(a-b)/(a+b)$ is just 0.2 ∼ 0.3%, as we discuss based on band calculations in Appendix A. Above $T_{str}$, the electronic nematic susceptibility is enhanced in both BaFe$_2$As$_2$ [17,19,23] and FeSe [2,18], following a similar Curie-Weiss behavior. These facts indicate that a common microscopic mechanism drives the nematic order and fluctuations in all Fe-based superconductors, regardless of the presence or absence of magnetism.
The realistic multiorbital Hubbard models for Fe-based superconductors, which are indispensable for the present study, were derived by using the first-principles method in Ref. [34]. To understand the absence of the magnetism below T str in FeSe, one significant hint is the smallness of the ratio between the Hund's and Coulomb interactions, J/Ū , since the Hund's coupling enlarges (suppresses) the intra-site magnetic (orbital) polarization, which is verified by the functional renormalization-group (fRG) theory [35,36]. Another significant hint is the absence of the d xy -orbital hole-pocket in FeSe, which is favorable for the orbital-spin interplay on the (d xz , d yz )-orbitals due to the AL-VC mechanism.
The goal of this paper is to explain the amazing variety of the electronic nematic states in Fe-based superconductors, especially the non-magnetic nematic state in FeSe, on the same footing microscopically. For this purpose, we study the spin-fluctuation-mediated orbital order by applying the self-consistent vertex-correction (SC-VC) method [16] to the first-principles models. In FeSe, the orbital-spin interplay is significant because of the smallness ofJ/Ū and the absence of d xy -hole pocket. For this reason, the orbital order is realized even when the spin fluctuations are substantially weak. The rich variety of the phase diagrams in Fe-based superconductors, such as the presence or absence of the magnetic order in the nematic phase, are well understood by analyzing the vertex correction seriously. The SC-VC theory had been successfully applied to explain the phase diagram in LaFeAs(O,H) [37], nematic CDW in cuprates [38,39], and triplet superconductivity in Sr 2 RuO 4 [36].
We comment that the localized spin models have been successfully applied to the nematic order, stripe magnetic order, and so on [40]. On the other hand, weak-coupling theories have also been applied to Fe-based superconductors satisfactorily [41]. In the present study, we study the mechanisms of the nematicity and magnetism in various Fe-based superconductors in terms of the itinerant picture, by taking the strong-coupling effect due to the AL-VC into account. The significant role of the AL-VC on the orbital fluctuations has been confirmed by the fRG theory [35,36]. The AL-VC is important to reproduce the Kugel-Khomskii-type orbital-spin interaction [16].
II. MODEL HAMILTONIAN AND SC-VC THEORY
In the present study, we analyze the realistic d-p Hubbard models for M = LaFeAsO and FeSe, $H_M = H^0_M + rH^U_M$ (1), by applying the SC-VC method [16]. In Eq. (1), $H^0_M$ is the 8-orbital d-p tight-binding (TB) model in k-space, which is obtained by using the WIEN2k and WANNIER90 software; see Appendix A for a detailed explanation. Here, σ is the spin index and l, m are the orbital indices. Hereafter, we denote the five d-orbitals as $d_{3z^2-r^2}$, $d_{xz}$, $d_{yz}$, $d_{xy}$ and $d_{x^2-y^2}$ ($l$ = 1–5), as in previous studies [42,43]. To reproduce the experimental bandstructure of FeSe, we introduce additional intra-orbital hopping parameters into $H^0_{\rm FeSe}$, in order to shift the $d_{xy}$-orbital band [$d_{xz/yz}$-orbital band] at the (Γ, M, X) points by (0, −0.25, +0.24) [(−0.24, 0, +0.12)] in units of eV; see Appendix A. These energy shifts might be induced by the self-energy [44]. The constructed FSs in the FeSe model are shown in Fig. 1 (c). Since each Fermi pocket is very shallow, the superconductivity in FeSe could be close to a BCS-BEC crossover [45].
In Eq. (1), $H^U_M$ is the first-principles screened Coulomb potential for d-orbitals given by the "constrained-RPA method" [34]; it is composed of $U_{m,l}$ and $J_{m,l}$, the orbital-dependent Coulomb and Hund's interactions for d-electrons, respectively [34]. The averaged intra-orbital Coulomb interaction $\bar{U}$ and the averaged Hund's coupling $\bar{J}$ are obtained from these parameters [34]. Thus, the ratio $\bar{J}/\bar{U}$ = 0.0945 in FeSe is much smaller than the ratio $\bar{J}/\bar{U}$ = 0.134 in LaFeAsO. Such strong material dependence of ($\bar{U}$, $\bar{J}$) is understood as follows: $U_{l,m}$ is strongly screened by the screening bands (excluding the 8 bands in $H^0_M$), whereas the screening of $J_{l,m}$ is much weaker, and the number of screening bands is small in FeSe [46]. The factor $r$ ($< 1$) in Eq. (1) is introduced to adjust the spin fluctuation strength. The ratio $J_{l,m}/U_{l,m}$ is unchanged by introducing the factor r [47,48]. The 8 × 8 Green function in the orbital basis is given as $\hat{G}(k)$ in Eq. (4), where $k = (\mathbf{k}, \epsilon_n)$ with $\epsilon_n = (2n+1)\pi T$, $\hat{h}^0_M(\mathbf{k})$ is the kinetic term in Eq. (2), and $\hat{z}^{-1} \equiv 1 - \partial\Sigma/\partial\epsilon|_{\epsilon=0}$ represents the mass-enhancement due to the self-energy at the Fermi level. Here, we introduce a constant mass-enhancement factor $1/z_l$ ($\geq 1$) for each d-orbital. Then, Eq. (4) gives the coherent part of the Green function, which mainly determines the low-energy electronic properties. In the present study, r and $z_l$ are the fitting parameters. In FeSe, the orbital order is obtained for the real first-principles Hamiltonian ($r \approx 1$) by taking the experimental mass-enhancement factors $z^{-1}_l \approx 4$ into account, as shown later.
The d-orbital charge (spin) susceptibilities (per spin) are given in a $5^2 \times 5^2$ matrix form, where $\hat{\Phi}^{c(s)}(q) = \hat{\chi}^0(q) + \hat{X}^{c(s)}(q)$ is the irreducible susceptibility for the charge (spin) channel. In the SC-VC theory, we employ the AL-VC as $\hat{X}^{c,s}(q)$ and perform the self-consistent calculation with respect to the AL-VC and the susceptibilities. The bare susceptibility $\hat{\chi}^0(q)$ is obtained from the Green function in Eq. (4), where $q = (\mathbf{q}, \omega_l = 2l\pi T)$. Also, the AL-VC for the charge susceptibility [Eq. (7)] contains $\hat{V}^{s,c}(q) \equiv \hat{\Gamma}^{s,c} + \hat{\Gamma}^{s,c}\hat{\chi}^{s,c}(q)\hat{\Gamma}^{s,c}$, where $p = (\mathbf{p}, \omega_m)$. The three-point vertex $\hat{\Lambda}(q; p)$, which gives the coupling between two magnons and one orbiton, enters through $\Lambda_{l,l';a,b;e,f}(q; p)$ and $\Lambda'_{m,m';c,d;g,h}(q; p) \equiv \Lambda_{c,h;m,g;d,m'}(q; p) + \Lambda_{g,d;m,c;h,m'}(q; -p-q)$. We stress that the strong temperature dependence of the three-point vertex is significant for realizing the orbital order. We include all $U^2$-terms without double counting in order to obtain quantitatively reliable results. Equation (7) means that the charge AL-VC becomes large in the presence of strong spin fluctuations. More detailed explanations are presented in the textbook [49]. In the present study, we neglected the spin-channel VCs since they are expected to be unimportant, as discussed in Ref. [50]. In Appendix B, we verify the validity of this simplification in the present model by performing a time-consuming self-consistent calculation with respect to both charge- and spin-channel Maki-Thompson (MT) and AL-VCs.
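As a rough illustration of how the enhancement factors $S = (1-\alpha)^{-1}$ quoted below follow from an irreducible susceptibility $\hat{\Phi}$ and an interaction matrix $\hat{\Gamma}$, the following minimal Python sketch assumes the standard RPA-type matrix relation $\hat{\chi} = \hat{\Phi}[1 - \hat{\Gamma}\hat{\Phi}]^{-1}$; the 2 × 2 matrices and numbers are purely illustrative, not the actual 25 × 25 objects of the SC-VC calculation.

```python
import numpy as np

def channel(phi, gamma):
    """chi = phi @ inv(1 - gamma @ phi); returns (chi, Stoner factor alpha, S = 1/(1 - alpha))."""
    m = gamma @ phi
    alpha = float(np.max(np.linalg.eigvals(m).real))   # largest eigenvalue of gamma*phi
    chi = phi @ np.linalg.inv(np.eye(len(phi)) - m)
    return chi, alpha, 1.0 / (1.0 - alpha)

phi = np.diag([0.50, 0.35])        # irreducible susceptibility (illustrative, in 1/eV)
gamma = np.array([[1.8, 0.2],
                  [0.2, 1.8]])     # interaction matrix of one channel (illustrative, in eV)
chi, alpha, S = channel(phi, gamma)
print(f"Stoner factor alpha = {alpha:.2f}, enhancement S = {S:.1f}")   # alpha ~ 0.92, S ~ 13
```

The susceptibility of a channel diverges when its Stoner factor reaches unity, which is how the conditions $\alpha_S = 1$ and $\alpha_C = 1$ are used throughout the text.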
As explained in Ref. [50], the development of $\chi^c_{x^2-y^2}(0)$ is mainly induced by the diagonal elements of $\hat{\Phi}^c$ with respect to $l = 2, 3$. If we approximately drop the off-diagonal elements of $\hat{\Phi}^c$, $\chi^c_{x^2-y^2}(0)$ is expressed in terms of $U \equiv U_{2,2} = U_{3,3}$, $J \equiv J_{2,3}$ and $\Phi^c \equiv \chi^0_{l,l;l,l}(0) + X^c_{l,l;l,l}(0)$ ($l$ = 2 or 3). Thus, the charge Stoner factor is $\alpha_C \approx (U - 5J)\Phi^c$. Within the RPA, where $\Phi^c = \chi^0$, only the spin fluctuations develop since the relation $\alpha_S > \alpha_C$ is satisfied for $J > 0$. However, the opposite relation $\alpha_C > \alpha_S$ is realized if the relation $\Phi^c \gg \chi^0$ is satisfied due to the charge-channel AL-VC [37].
III. NUMERICAL RESULTS FOR LAFEASO AND FESE
First, we analyze the LaFeAsO model based on the SC-VC theory. For $z_l = 1$ for each $l$, the obtained $\chi^s(q)$ and $\chi^c_{x^2-y^2}(q)$ are shown in Fig. 2 (a) and (b), respectively, for r = 0.41 ($\bar{U}$ = 1.74 eV) at T = 50 meV. Here, the number of k-meshes is 32 × 32, and the number of Matsubara frequencies is 256. Both AFM and FO susceptibilities develop divergently, and the realized enhancement factors are $S_S \approx 40$ and $S_C \approx 50$. The r-dependences of the enhancement factors at T = 50 meV are shown in the inset of Fig. 2 (c): both $S_S$ and $S_C$ increase with r, and they become equal at $r^* = 0.41$. The lower the temperature is, the smaller $r^*$ is, whereas the value of $S_S = S_C$ at $r^*$ is approximately independent of T. A similar result is obtained in the BaFe$_2$As$_2$ model, as shown in Appendix C.
The orbital-spin interplay due to the AL-VC is intuitively understood in terms of the strong-coupling picture $U \gg W_{\rm band}$ [37]: as shown in Fig. 2 (d), when the FO order $n_{xz} \gg n_{yz}$ is realized, the nearest-neighbor exchange interaction acquires a large anisotropy between the x- and y-directions. Then, the stripe AFM order with $\mathbf{Q} = (\pi, 0)$ appears if the second-neighbor exchange $J^{(2)}$ is not too small. Thus, the FO order/fluctuations and AFM order/fluctuations emerge cooperatively in the localized model, and such Kugel-Khomskii-type orbital-spin interplay is explained by the AL-VC in the weak-coupling picture.
Next, we analyze the FeSe model, in which the ratio $\bar{J}/\bar{U}$ is considerably smaller. In FeSe, the experimental mass-enhancement factor is ∼10 for the $d_{xy}$-orbital and 3 ∼ 4 for the other d-orbitals according to the ARPES study [27]. Therefore, we put $z^{-1}_l = z^{-1}$ for $l \neq 4$ and $z^{-1}_4 = 3z^{-1}$ in the present study. We find that the peak of $\chi^s(q)$ moves from $\mathbf{q} = (\pi, \pi)$ to the experimental peak position $\mathbf{q} = (\pi, 0)$ [4][5][6][7] for $z^{-1}_4 \geq 1.5z^{-1}$, and the results of the SC-VC method are essentially unchanged for $z^{-1}_4 \geq 1.5z^{-1}$. Figures 3 (a) and (b) show the obtained $\chi^s(q)$ and $\chi^c_{x^2-y^2}(q)$ for r = 0.25 ($\bar{U}$ = 1.76 eV) at T = 50 meV in the case of z = 1. We see that only the FO susceptibility develops divergently [$S_C \approx 50$], whereas the AFM susceptibility remains small [$S_S \approx 8$], consistently with experiments for FeSe. The r-dependences of the Stoner enhancement factors at T = 50 meV are shown in the inset of Fig. 3 (c): with increasing r, $S_C$ increases rapidly whereas $S_S$ remains small. Figure 3 (c) shows the temperature dependences of the enhancement factors at r = 0.25. We stress that $S_C$ approximately follows the Curie-Weiss behavior with the Weiss temperature $\theta_C$ = 48 meV, which is consistent with the experimental Curie-Weiss behavior with positive $\theta_C$ in FeSe [2]. Since the spin Weiss temperature takes a large negative value ($\theta_S \sim -20$ meV), which is also consistent with experiments, one may consider that the orbital order in FeSe stems from causes other than spin fluctuations.
IV. ORIGIN OF THE RELATION SC ≫ SS IN FESE
In this section, we discuss why the relation $S_C \gg S_S$ ($\theta_C > 0$ and $\theta_S < 0$) is realized in FeSe. First, we focus on the ratio between the Hund's and Coulomb interactions, $\bar{J}/\bar{U}$. It is intuitively obvious that the ratio $\bar{J}/\bar{U}$ is an important control parameter for the orbital nematicity: for larger $\bar{J}/\bar{U}$, the local configuration of the two electrons in the ($d_{xz}$, $d_{yz}$)-orbitals is $|d_{xz},\uparrow\rangle \otimes |d_{yz},\uparrow\rangle$, where the magnetic moment is $s_z = 1$ whereas the orbital polarization is $n_{xz} - n_{yz} = 0$. Thus, the smallness of $\bar{J}/\bar{U}$ in FeSe is favorable for the emergence of orbital order without magnetization.
Microscopically, as we discuss in Sec. II, the charge Stoner factor for $\chi^c_{x^2-y^2}$ is $\alpha_C \approx (U - 5J)[\chi^0(0) + X^c(0)]$, where $X^c$ is the charge AL-VC for orbital 2 or 3 at $q = 0$. Since $\bar{J}/\bar{U}$ = 0.0945 in FeSe, the orbital order is realized by a relatively small AL-VC, $X^c \sim 0.9\chi^0(0)$. In LaFeAsO, in contrast, a large AL-VC of order $\sim 2\chi^0(0)$ is required to realize the orbital order. The obtained AL-VCs in both systems are shown in Fig. 9 (c) in Appendix D.
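The quoted values $\sim 0.9\chi^0$ and $\sim 2\chi^0$ follow from simple arithmetic once one assumes the crude single-parameter estimates $\alpha_S \approx \bar{U}\chi^0$ and $\alpha_C \approx (\bar{U} - 5\bar{J})(\chi^0 + X^c)$ suggested by Sec. II; the snippet below just reproduces that arithmetic and is not the full multiorbital calculation.

```python
# Minimum X^c/chi0 needed for the orbital (charge) channel to overtake the spin channel,
# assuming the simplified Stoner factors
#   alpha_S ~ U * chi0                      (RPA spin channel, Phi^s = chi0)
#   alpha_C ~ (U - 5J) * (chi0 + X^c)       (charge channel with AL vertex correction)
# Setting alpha_C = alpha_S gives X^c/chi0 = 5(J/U) / (1 - 5(J/U)).
for name, j_over_u in [("FeSe", 0.0945), ("LaFeAsO", 0.134)]:
    threshold = 5 * j_over_u / (1 - 5 * j_over_u)
    print(f"{name}: J/U = {j_over_u:.4f} -> X^c/chi0 >= {threshold:.2f}")
# FeSe:    J/U = 0.0945 -> X^c/chi0 >= 0.90   (cf. 'X^c ~ 0.9 chi0(0)' above)
# LaFeAsO: J/U = 0.1340 -> X^c/chi0 >= 2.03   (cf. 'AL-VC of order ~2 chi0(0)')
```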
We discuss why the AL-VC is important in the FeSe model with $\theta_S < 0$. As we explain in Appendix D, the T-dependence of the AL-VC is given as $X^c \sim \Lambda^2 T S_S$, where $\Lambda$ is the three-point vertex that represents the interference between two short-lived magnons. We find the relation $\Lambda^2 \propto T^{-a}$ with $a \approx 1$ at low temperatures due to the good nesting between h-FSs and e-FSs [20]. Thanks to the strong enhancement of $\Lambda$ at low temperatures, the orbital order ($\alpha_C = 1$) is realized even if $\theta_S$ is negative. (Note that $TS_S$ decreases as $T \to 0$ when $\theta_S < 0$.) Thus, a serious diagrammatic analysis of the AL-VC is necessary to understand the rich normal-state phase diagrams in Fe-based superconductors.
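The role of the T-dependent vertex can be made concrete with a toy comparison: taking $S_S = C/(T - \theta_S)$ with $\theta_S < 0$ and assuming $\Lambda^2 \propto 1/T$ (i.e. $a \approx 1$), the combination $\Lambda^2 T S_S$ keeps growing on cooling even though $TS_S$ shrinks. The values below are illustrative only.

```python
# Compare the naive estimate X^c ~ T*S_S (no vertex enhancement) with
# X^c ~ Lambda^2 * T * S_S assuming Lambda^2 proportional to 1/T (a ~ 1),
# for a negative spin Weiss temperature theta_S, i.e. S_S = C/(T - theta_S).
theta_S, C = -20.0, 100.0                 # meV; illustrative values only
for T in (100.0, 50.0, 25.0, 12.5):
    S_S = C / (T - theta_S)
    naive = T * S_S                        # shrinks as T decreases
    with_vertex = (1.0 / T) * T * S_S      # = S_S: still grows toward C/|theta_S|
    print(f"T = {T:5.1f} meV:  T*S_S = {naive:6.1f}   Lambda^2*T*S_S (up to const.) = {with_vertex:4.2f}")
```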
The enhancement of the nematic susceptibility due to the significant T -dependence of Λ 2 (∝ T −a ) had been discussed in Refs. [20,21,51,52]. However, the reported exponent a is not universal, since it depends on the bandstructure and temperature range. In Appendix E, we show the T -dependence of the three-point vertex for LaFeAsO and FeSe models for wide temperature range. It is found that a ≈ 1 for T = 20z ∼ 100z[meV], where z < 1 is the band-renormalization factor. Due to such large T -dependence of a, χ c x 2 −y 2 (0) obtained by the present study follows the Curie-Weiss law only approximately.
Finally, we stress the importance of the orbital dependence of the spin fluctuation strength, since the charge AL-VC, and hence $\chi^c_{x^2-y^2}(0)$, is enlarged mainly by the spin fluctuations on the ($d_{xz}$, $d_{yz}$)-orbitals. A more detailed analysis is given in Appendix D.
V. EFFECT OF THE MASS-ENHANCEMENT FACTOR
Here, we study the effect of the mass-enhancement factor: We study the FeSe model in the case of z −1 = 4 for (d xz , d yz )-orbitals. The obtained S C,S as functions of r are shown in the inset of Fig. 3 (d) at T = 12.5 meV.
Here, $S_S$ remains small even for $r \sim 1$ since the bare susceptibility is suppressed by z. In contrast, $S_C$ is enlarged to 50 for $r \approx 0.97$, which is very close to the exact first-principles Hubbard model $H_{\rm FeSe}$ (r = 1). The T-dependences of $S_{C,S}$ are shown in Fig. 3 (d).
To understand the similarity between the results in Fig. 3 (c) for z = 1 and the results in Fig. 3 (d) for $z^{-1}$ = 4, we prove that both $\alpha_C$ and $\alpha_S$ are independent of z under the rescaling $T \to zT$ and $(U, J) \to (U, J)/z$. Here, we assume that $z^{-1}_l = z^{-1}$ and neglect the T-dependence of µ for simplicity. Under the scaling $T \to zT$, the Green function $\hat{G}(k, n)$ at Matsubara integer n given in Eq. (4) is independent of z. For this reason, the bare susceptibility $\chi^0(q) = -T\sum_{k,n} G(k+q, n)G(k, n)$ is proportional to z. By following the same procedure, the three-point vertex $\Lambda$ is scaled by z, and therefore the AL-VC $X^c(0) \sim TU^4\sum_q \Lambda(0; q)^2\chi^s(q)^2$ is proportional to z under the scaling $T \to zT$ and $(U, J) \to (U, J)/z$. Thus, both spin and charge irreducible susceptibilities are proportional to z, and both $\alpha_S$ and $\alpha_C$ are unchanged under the rescaling $T \to zT$ and $(U, J) \to (U, J)/z$. That is, the Weiss temperatures $\theta_{S(C)}$ are scaled by z. The validity of these scaling relations is confirmed by the numerical study in Fig. 3.
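This scaling argument can be checked numerically on a toy one-band model; the quasiparticle form $G = z/(i\epsilon_n - z\xi_{\mathbf{k}})$ used below is an assumed simplification of the Green function in Eq. (4), and the square-lattice dispersion and parameters are purely illustrative.

```python
import numpy as np

def chi0_at_Q(z, T, Q=(np.pi, 0.0), n_k=32, n_w=128, t=1.0, mu=0.0):
    """chi0(Q) = -T * sum_{k,n} G(k+Q, n) G(k, n) for a one-band square lattice.

    The renormalized Green function is taken as G(k, i eps_n) = z / (i eps_n - z*xi_k),
    an assumed quasiparticle simplification, not the paper's Eq. (4) itself.
    """
    k = np.linspace(-np.pi, np.pi, n_k, endpoint=False)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    xi  = -2 * t * (np.cos(KX) + np.cos(KY)) - mu
    xiQ = -2 * t * (np.cos(KX + Q[0]) + np.cos(KY + Q[1])) - mu
    eps = (2 * np.arange(-n_w, n_w) + 1) * np.pi * T          # fermionic Matsubara frequencies
    G  = z / (1j * eps[:, None, None] - z * xi[None, :, :])
    GQ = z / (1j * eps[:, None, None] - z * xiQ[None, :, :])
    return float(np.real(-T * np.sum(G * GQ)) / n_k**2)

T0 = 0.05                                                      # temperature in units of t
for z in (1.0, 0.25):
    chi = chi0_at_Q(z, z * T0)                                 # rescale T -> z*T
    print(f"z = {z:4.2f}:  chi0(Q) = {chi:.4f},  chi0(Q)/z = {chi / z:.4f}")
# chi0(Q)/z comes out identical for both z, illustrating that the Stoner factors are
# unchanged once (U, J) are simultaneously rescaled to (U, J)/z.
```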
It is possible to obtain $z^{-1}$ by calculating the self-energy $\hat{\Sigma}(k)$ together with $\hat{\chi}^{s,c}(q)$ self-consistently. In this case, fine tuning of r will be unnecessary since the relation $\alpha_{S,C} < 1$ is assured if $\hat{\Sigma}(k)$ and $\hat{\chi}^{s,c}(q)$ are calculated self-consistently in two-dimensional systems (Mermin-Wagner theorem) [53]. This is an important future issue.

Next, we study the electronic states in the FO-ordered state $n_{xz} \neq n_{yz}$ established below the structural transition temperature $T_{str}$, at which the shear modulus $C_{66}$ reaches zero. According to the linear-response theory, $C_{66}$ is determined by the electronic orbital susceptibility $\chi^c_{x^2-y^2}(0)$ given by the SC-VC theory and by g, the phonon-mediated Jahn-Teller energy [54]. Therefore, $C_{66} \propto (T - T_{str})/(T - \theta_C)$, and $T_{str} = \theta_C + g$ is slightly higher than $\theta_C$ due to the weak electron-phonon coupling ($g \approx 10 \sim 50$ K) [2,17,18]. Figure 4 (a) shows the T-dependence of $S_S$ given by the RPA for LaFeAsO and FeSe for z = 1. Here, we introduce the orbital polarization $-\Delta E/2$ ($\Delta E/2$) for the $d_{xz(yz)}$-level. We put $S_S$ = 20 (5) for LaFeAsO (FeSe) at $T_{str}$ = 50 meV, and assume a mean-field-type T-dependence, $\Delta E = \Delta E_0 \tanh(1.74\sqrt{T_{str}/T - 1})$, with $\Delta E_0$ = 80 meV. (For $z^{-1}$ = 4, the renormalized orbital polarization $z\Delta E_0$ is just 20 meV.) In both LaFeAsO and FeSe, $S_S$ is enhanced by $\Delta E$, since $\alpha_S$ increases linearly with $\Delta E$ at $\mathbf{q} = (\pi, 0)$, as discussed in Ref. [54]. In LaFeAsO, the magnetic order temperature $T_{mag}$ increases from $\theta_S$ to just below $T_{str}$ since $S_S$ is already large at $T_{str}$. In contrast, in FeSe, the enhancement of $\chi^s(\pi, 0)$ is much more moderate [55].
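A quick evaluation of this mean-field form shows how fast the orbital polarization saturates below $T_{str}$; note that the square root inside the tanh follows the standard BCS-like interpolation assumed here.

```python
import numpy as np

# Mean-field-type temperature dependence of the orbital polarization used in Fig. 4(a):
# Delta_E(T) = Delta_E0 * tanh(1.74 * sqrt(T_str/T - 1)) for T < T_str.
dE0, T_str = 80.0, 50.0          # meV
for T in (45.0, 30.0, 15.0, 5.0):
    dE = dE0 * np.tanh(1.74 * np.sqrt(T_str / T - 1.0))
    print(f"T = {T:4.1f} meV:  Delta_E = {dE:5.1f} meV")
# The polarization is already ~90% of Delta_E0 a little below T_str/2.
```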
We also perform the self-consistent analysis of the orbital-polarization (∆E xz (k), ∆E yz (k)) and anisotropic χ s (q), which is a natural extension of the SC-VC theory into the orbital ordered state [56]. The obtained S S and k-dependent orbital polarization are shown in Figs. 4 (a) (inset) and (b), respectively. The parameters are r = 0.256 and 1/z 4 = 1.6. The difference ∆n = n xz − n yz is 0.2%. The hole-pocket around Γ-point becomes ellipsoidal along the k y -axis due to the "sign-reversing orbital polarization", in which ∆E xz (0, k) − ∆E yz (k, 0) shows the sign reversal as shown in Fig. 4 (c). Due to this sign reversal, S S in the inset of Fig. 4 (a) tends to saturate below 40 meV [33]. Also, two Dirac-cone FSs appear around X-point when ∆E yz (π, 0) > 50 meV. These results are essentially consistent with the recent ARPES studies reported in Refs. [27][28][29][30][31][32][33]. The obtained orbitalpolarization (∆E xz (k), ∆E yz (k)) belongs to B 1g representation, and therefore it is consistent with the "d-wave orbital order" discovered in Ref. [31]. The d-wave orbital order is theoretically obtained by the mean-field approximation by introducing phenomenological long-range interaction [57], whose microscopic origin might be the AL-VC studied in this paper.
According to Ref. [4], $\chi^s(\mathbf{q}, \omega)$ shows a broad maximum at $\mathbf{q} = (\pi, 0)$ at low energies ($\omega \lesssim 10$ meV), and its strength is almost T-independent for $T > T_{str}$. The magnitude of the low-energy spin susceptibility in FeSe is one order of magnitude smaller than that in BaFe$_2$As$_2$ [58], whereas its magnitude would be comparable to that in LiFeAs [59]. This experimental report on FeSe is consistent with the present theoretical result with the moderate $S_S \sim 10$ in Figs. 3 (c) and (d). Note that the experimental dispersion relation in $\chi^s(\mathbf{q}, \omega)$ for $\omega \gtrsim 100$ meV is qualitatively understood based on the present FeSe model by considering the band-renormalization factor [7].

Fig. 4: (a) T-dependences of $S_S$ given by the RPA for the LaFeAsO and FeSe models; the FO order is introduced below $T_{str}$ = 50 meV. Inset: T-dependences of $S_S$ for the FeSe model obtained by calculating the k-dependent orbital polarization and $\chi^s(q)$ self-consistently; $S_S$ tends to saturate below 40 meV due to the sign-reversing orbital polarization. (b) Self-consistent solution of the orbital polarization ($\Delta E_{xz}(\mathbf{k})$, $\Delta E_{yz}(\mathbf{k})$) in the orbital-ordered state in the FeSe model at T = 50 meV. The shape of the $C_2$-symmetric FSs in (b) is consistent with the experimental reports [27-33]. Also shown are (c) $\Delta E_{xz(yz)}(\mathbf{k})$ along the $k_{y(x)}$-axis and (d) the $C_2$-symmetric $\chi^s(\mathbf{q})$ in the orbital-ordered state.
Figure 5 (a) shows the obtained ratio $\theta_S/\theta_C$ for each model as a function of $\bar{J}/\bar{U}$. Numerical studies for NaFeAs and BaFe$_2$As$_2$ are presented in Appendix C. In NaFeAs and FeSe, in which $\bar{J}/\bar{U}$ is smaller, the obtained $\theta_S/\theta_C$ decreases to 0.4 and −0.4, respectively. In Fig. 5 (a), experimental values of $T_{mag}/T_{str}$ and $\theta_{NMR}/T_{str}$ are also shown, where $\theta_{NMR}$ is the Weiss temperature of $1/T_1T$ above $T_{str}$. Since $T_{str} = \theta_C + g$ ($g \approx 10 \sim 50$ K) and $\theta_{NMR} = \theta_S$, the relation $\theta_{NMR}/T_{str} \approx \theta_S/\theta_C$ is expected theoretically. In addition, the relation $T_{mag}/T_{str} \gtrsim \theta_S/\theta_C$ is expected since $T_{mag}$ is substantially higher than $\theta_S$ in the FO-ordered state. These two theoretically predicted relations are verified in Fig. 5 (a). Thus, the ratio $\theta_S/\theta_C$ is well scaled by the parameter $\bar{J}/\bar{U}$, consistently with the discussion in Sec. IV. Figure 5 (b) shows the critical value of the spin Stoner factor for $\alpha_C \approx 1$ in each model, $\alpha^{cr}_S$. It is found that $\alpha^{cr}_S$ increases with $\bar{J}/\bar{U}$ qualitatively. In addition, we plot $\alpha^{cr}_S$ for the FeSe (LaFeAsO) TB model with different Coulomb interactions: $H^0_{\rm FeSe(LaFeAsO)} + rH^U_M$. In both FeSe and LaFeAsO TB models, $\alpha^{cr}_S$ monotonically increases with $\bar{J}/\bar{U}$, whereas $\alpha^{cr}_S$ is clearly smaller for the FeSe TB model. There are two reasons why $\alpha^{cr}_S$ is smaller for the FeSe bandstructure. One reason is the absence of the $d_{xy}$-orbital h-FS in FeSe: as we discussed in Sec. IV, the $d_{xy}$-orbital spin fluctuations are unnecessary for the development of $\chi^c_{x^2-y^2}(0)$ due to the AL-VC. Another reason is the smallness of the FSs in FeSe: we found numerically that $\alpha^{cr}_S$ decreases when the size of the FSs is smaller, since the three-point vertex $\Lambda_m \equiv \delta\chi^0_m(q)/\delta\Delta E_m$, which is odd with respect to G, increases in magnitude when the particle-hole asymmetry is large. In fact, we analyzed the undoped LaFeAsO model with tiny FS pockets by introducing positive/negative potentials around the electron/hole FSs, and verified that the orbital order is realized by a small $\alpha_S$. Recently, the advantage of small FSs for the nematicity has also been stressed by the renormalization group study in Ref. [63].
C. Summary
The emergence of the electronic nematic order has attracted increasing attention as a fundamental phenomenon in strongly correlated metals. In this paper, we studied the origin of the nematicity in Fe-based superconductors, paying special attention to the nonmagnetic nematic order in FeSe. By applying the orbital+spin fluctuation theory to the first-principles d-p Hubbard models, we succeeded in explaining the rich variety of the phase diagrams in Fe-based superconductors, such as the nonmagnetic/magnetic nematic order in FeSe/LaFeAsO. The key model parameter for realizing the rich phase diagram is $\bar{J}/\bar{U}$, the ratio between the Hund's and Coulomb interactions. In addition, the ratio $\theta_S/\theta_C$ tends to decrease as the size of the FSs shrinks, as discussed in Sec. VI B.
In both FeSe and LaFeAsO, a strong orbital susceptibility $\chi^c_{x^2-y^2}(0) \propto (T - \theta_C)^{-1}$ with positive $\theta_C$ is realized by the strong orbital-spin interplay due to the strong-coupling effect, called the Aslamazov-Larkin vertex correction in the field theory. In the FeSe model, ferro-orbital order is established even when the spin Weiss temperature $\theta_S$ is negative, as shown in Fig. 3, since the three-point vertex (the coupling between two magnons and one orbiton) increases at low temperatures as $\Lambda \propto T^{-0.5}$. In contrast, the spin-nematic susceptibility driven by the spin susceptibility should be T-independent if $\theta_S < 0$, as discussed in Ref. [7]. Therefore, we conclude that the nematicity in FeSe originates from the orbital order/fluctuations.
The nematic orbital fluctuations might play important roles in the pairing mechanism in Fe-based superconductors [64]. In FeSe, T c increases from 9 K to 40 K under pressure, accompanied by the enhancement of spin fluctuations [1]. At the same time, the system approaches to the orbital critical point since T str decreases to zero under pressure. These facts indicate the important role of the spin+orbital fluctuations in FeSe.
Acknowledgments
We are grateful to A. Chubukov, P.J. Hirschfeld, R. Fernandes, J. Schmalian, Y. Matsuda, T. Shibauchi and T. Shimojima for useful discussions. This study has been supported by Grants-in-Aid for Scientific Research from MEXT of Japan.
Appendix A: Eight-orbital models for FeSe and LaFeAsO

Here, we introduce the eight-orbital d-p models for FeSe and LaFeAsO analyzed in the main text. We first derived the first-principles tight-binding models using the WIEN2k and WANNIER90 codes. Crystal structure parameters of FeSe and LaFeAsO are given in Refs. [65] and [66], respectively. The obtained bandstructure and FSs in the LaFeAsO model are shown in Fig. 1 in the main text. In deriving the FeSe model, we introduce k-dependent shifts $\delta E_l$ for orbital l in order to obtain the experimentally observed FSs; in FeSe, these are introduced as intra-orbital hopping parameters added to $H^0_{\rm FeSe}$. We also specify the orbital-dependent Coulomb interaction: the bare Coulomb interaction matrices for the spin channel [Eq. (A1)] and for the charge channel [Eq. (A2)] are composed of $U_{l,l}$, $U'_{l,l'}$ and $J_{l,l'}$, the first-principles Coulomb interaction terms given in Ref. [34].
Finally, we perform band calculations for the orthorhombic phase of FeSe and LaFeAsO, based on the experimental crystal structures. In both compounds, the obtained band splitting is too small to explain the large orbital polarization (∼60 meV) observed by ARPES studies. This result means that the orbital order originates from the electron-electron correlation, which is not included in the band calculation. Figure 6 (a) is the non-magnetic bandstructure in orthorhombic LaFeAsO obtained by the WIEN2k software. The spin-orbit interaction is not taken into account. The crystal structure parameters in the orthorhombic phase are given in Ref. [66]. The orthorhombic structure deformation $(a-b)/(a+b)$ is 0.3%. Due to the electron-phonon interaction, the four-fold symmetry of the bandstructure is slightly violated: the splitting between the $d_{xz}$- and $d_{yz}$-bands, $\Delta E_{\rm band} \equiv E_{yz} - E_{xz}$, is 16 meV at the X-point, and $\Delta E_{\rm band}$ = 2 meV at the Γ-point. Figure 6 (b) is the bandstructure in orthorhombic FeSe. In the orthorhombic phase, the nearest Fe-Fe length is a = 2.6716 Å and b = 2.6610 Å, so $(a-b)/(a+b)$ is 0.2% [65]. Here, the k-dependent orbital shift introduced above to fit the ARPES bandstructure is not taken into account. In FeSe, $\Delta E_{\rm band}$ = 14 meV at the X-point, and $\Delta E_{\rm band}$ = 3 meV at the Γ-point. Thus, the sign-reversing orbital splitting observed in Ref. [33] cannot be explained by the band calculation.
The splitting is reduced by the renormalization factor z due to the self-energy. Since $z \sim 1/3$ in FeSe and LaFeAsO, the renormalized splitting at the X-point is $z\Delta E_{\rm band} \sim 5$ meV, which is one order of magnitude smaller than the experimental orbital splitting. Therefore, it is confirmed that the origin of the electronic nematic state in Fe-based superconductors is the electron-electron correlation.

Appendix B: Validity of neglecting the spin-channel vertex corrections

In the original SC-VC theory, the spin and charge susceptibilities are calculated self-consistently by including the MT-VC and AL-VC for the spin and charge susceptibilities [16,49]. The strong orbital fluctuations are induced by the charge-channel AL-VC in Fe-based SCs, Ru-oxides and cuprate SCs [16,37,50]. In the main text, we studied the eight-orbital d-p Hubbard models based on the SC-VC theory, taking the charge-channel AL-VC into account self-consistently. The obtained $\chi^s(q)$ is equivalent to the RPA result since the spin-channel VCs are dropped. It is easy to verify that the charge- and spin-channel MT-VCs are negligible in the present model. However, the smallness of the spin-channel AL-VC has been verified only in the two-orbital Hubbard model in Ref. [50].
Here, we study the FeSe model using the SC-VC method, taking the MT-VC and AL-VC into account for both spin and charge channels in order to confirm the validity of the numerical study in the main text. The charge (spin) susceptibilities are obtained from the irreducible susceptibility $\hat{\Phi}^{c(s)}(q) = \hat{\chi}^0(q) + \hat{X}^{{\rm MT},c(s)}(q) + \hat{X}^{{\rm AL},c(s)}(q)$. The spin-channel AL-VC involves $\Lambda''_{m,m';c,d;g,h}(q; p) \equiv \Lambda_{c,h;m,g;d,m'}(q; p) - \Lambda_{g,d;m,c;h,m'}(q; -p-q)$.
Also, the expressions for the charge- and spin-channel MT-VCs are given in Ref. [49]. The double-counting second-order terms with respect to $H^U$ in $\hat{X}^{{\rm MT},s(c)} + \hat{X}^{{\rm AL},s(c)}$ should be subtracted [49] to obtain reliable results.

Appendix C: Effective models for BaFe2As2 and NaFeAs

In the main text, we introduced the first-principles models for LaFeAsO and FeSe and analyzed them using the SC-VC method. Here, we also introduce effective models for BaFe$_2$As$_2$ and NaFeAs and analyze them using the SC-VC method.
In both BaFe$_2$As$_2$ and NaFeAs, the FSs have a relatively large three-dimensional character. In addition, the unfolding of the bandstructure in BaFe$_2$As$_2$ cannot be performed exactly because of its body-centered tetragonal crystal structure. Here, we introduce a simple effective BaFe$_2$As$_2$ TB model $H^0_{\rm BaFe_2As_2}$ by magnifying the size of the $d_{xy}$-orbital hole-FS around $\mathbf{k} = (\pi, \pi)$ in the LaFeAsO unfolded model, in order to reproduce the ARPES bandstructure in Ba122 compounds. Here, we shifted the $d_{xy}$-orbital band at the M point by +0.20 eV. As for NaFeAs, we just use $H^0_{\rm LaFeAsO}$ as an effective NaFeAs TB model, i.e., $H^0_{\rm NaFeAs} = H^0_{\rm LaFeAsO}$, considering that the FSs of NaFeAs in the $k_z = 0$ plane are similar to the FSs in LaFeAsO. As $H^U_{\rm NaFeAs}$, we use $H^U_{\rm LiFeAs}$ given in Ref. [34]. The bandstructures and FSs of the effective TB models for BaFe$_2$As$_2$ and NaFeAs are shown in Figs. 8 (a)-(c). Here, we perform the SC-VC analysis for the models $H_M = H^0_M + rH^U_M$ (M = BaFe$_2$As$_2$, NaFeAs), where $r$ ($< 1$) is the reduction parameter. We choose the parameter r such that the charge Stoner factor is $\alpha_C = 0.98$. The obtained T-dependences of the spin and charge Stoner enhancement factors, $S_S \equiv (1-\alpha_S)^{-1}$ and $S_C \equiv (1-\alpha_C)^{-1}$ respectively, are shown in Fig. 8 (d) and (e). As for BaFe$_2$As$_2$, both spin and orbital fluctuations strongly develop at $T \sim 50$ meV in the case of r = 0.36. This result is consistent with the experimental relation $T_{mag} \approx T_{str}$ in BaFe$_2$As$_2$. As for NaFeAs, only orbital fluctuations strongly develop whereas spin fluctuations remain moderate at $T \sim 50$ meV in the case of r = 0.287. This result is consistent with experimental results in NaFeAs [62], in which $T_{mag}$ (= 40 K) is more than ten kelvin lower than $T_{str}$ (= 53 K). Thus, the normal-state phase diagrams of BaFe$_2$As$_2$ and NaFeAs are well explained by analyzing their effective Hamiltonians using the SC-VC method.

Appendix D: Origin of the strong orbital fluctuations induced by small spin fluctuations

In the main text, we studied the first-principles d-p Hubbard models for LaFeAsO and FeSe by applying the SC-VC theory. In both models, strong spin-fluctuation-driven orbital fluctuations are induced by the AL-VC. In FeSe, we found that a very small spin susceptibility $\chi^s_{max}$ is sufficient to realize the orbital order, consistently with experimental results.
Here, we discuss why strong orbital fluctuations are induced by tiny spin fluctuations in FeSe. In Figs. 9 (a) and (b), we show the spin and orbital susceptibilities, χ s max ≡ χ s (Q) and χ c x 2 −y 2 (0) ≡ χ c 2,2;2,2 (q) + χ c 3,3;3,3 (q) − 2χ c 2,2;3,3 (0), in the FeSe model and LaFeAsO model obtained by the SC-VC theory. Here, 32 × 32 k-meshes and 256 Matsubara frequencies are used. In both models, the charge Stoner factor is α C = 0.98 at T = 50 meV, and the obtained orbital susceptibilities show similar Tdependence. We setŪ = 1.76 (r = 0.25) in FeSe, and U = 1.74 (r = 0.41) in LaFeAsO, as we did in the main text. As for the spin susceptibility, in LaFeAsO, strong spin fluctuations develop at T = 50 meV (α S = 0.98), consistently with previous theoretical studies [16,37]. In FeSe, in contrast, χ s max is almost constant till T = 50 meV (α S = 0.87), consistently with experimental reports in FeSe. Now, we discuss why the spin fluctuation strength required to realize α C ≈ 1 is so different from LaFeAsO to FeSe. One reason is the difference in the ratioJ/Ū : Figure 9 (c) shows the T -dependence of the AL-VC on d xzorbital, X AL,c 2 (0) ≡ X AL,c 2,2;2,2 (0), obtained in the LaFeAsO and FeSe models. In both models, α C = 0.98 is satisfied at T = 50 meV. At T = 50 meV, the AL-VC for FeSe is about one-half of that in LaFeAsO. Thus, small AL-VC is enough to induce large orbital fluctuations in FeSe, since the charge Stoner factor is α C ≈ (1 − 5J/Ū )Ū Φ c 2 (0). In Fig. 9 (d), we show that X AL,c: non−zero 2 (0) ≡ X AL,c 2 (0) − X AL,c: zero 2 (0) is very small for both FeSe and LaFeAsO. Here, "zero" represents the zero-Matsubara term (classical contribution) in Eq. (7) in Sec. II. Thus, non-zero Matsubara terms in the AL-VC are negligible in the present calculation (by chance). Note that the U 2 -term in AL-VC gives negative contribution.
Another reason for the relation χ s max (FeSe) ≪ χ s max (LaFeAsO) at α C ≈ 1 is the difference in the orbital dependence of the spin fluctuation strength: The AL-VC for the xz-orbital is approximately given as where we dropped the inter-orbital terms ofχ s andΛ, and leave only the zero-Matsubara term in the Matsubara summation in Eq. (7) the same model parameters used in Fig. 9. As derived from Fig. 9 (a) and Fig. 10 (a), the ratio χ s 2 (Q)/χ s (Q) is just 0.22 in LaFeAsO, whereas the ratio increases to 0.53 in FeSe, since the relation χ s 4 (Q) ≪ χ s 2 (Q) (χ s 4 (Q) ∼ χ s 2 (Q)) is satisfied in FeSe (LaFeAsO) because of the absence (presence) of h-FS3. This orbital dependence of the spin fluctuations in FeSe is favorable for realizing the FO fluctuations.
To understand the model-dependence of the AL-VC in more detail, we calculate C 2 ≡ q χ s 2 (q) 2 and show the result in Fig. 10 (b): The ratio C LaFeAsO 2 /C FeSe 2 is just 1.35 since the width of the peak of χ s 2 (q) 2 around q = Q is much wider in FeSe. We also examine the square of the three-point vertex for d xz -orbital Λ 2 ≡ Λ 2,2;2,2;2,2 (q, k) at q = 0 and k = Q in Fig. 10 (c). In both models, the relation |Λ 2 | 2 ∝ T a with a ≈ 1 is satisfied for wide temperature range: Such strong T -dependence of the charge-spin coupling Λ 2 is essential for realizing the orbital fluctuations, so it should be taken into account in the numerical calculation. As results, we obtain a crude approximation for the AL-VC,X AL,c 2 ≡ 3U 4 |Λ 2 (0; (0, π))| 2 T C 2 , and show the result in Fig. 10 (d). This crude approximation qualitatively reproduces the exact numerical results for both FeSe and LaFeAsO given in Fig. 9 (c).
In summary, in both LaFeAsO and FeSe, strong orbital fluctuations are induced by AL-VC for the d xz(yz)orbital, X AL,c 2(3) (0). In FeSe, very small spin susceptibility χ s max is sufficient to realize the spin-fluctuation-driven orbital order, because of both the smallness ofJ/Ū and the largeness of C 2 . Strong T -dependence of Λ 2 is essential for realizing the orbital fluctuations due to AL-VC.
Appendix E: Strong T-dependence of the three-point vertex

In this paper, we found that the strong orbital fluctuations in Fe-based superconductors originate from the AL-VC for the orbital susceptibility. The moderate increase of the AL-VC at low temperatures shown in Fig. 9 (c) gives rise to the Curie-Weiss behavior of $\chi^c_{x^2-y^2}(0)$. For this increase of the AL-VC, the strong T-dependence of the three-point vertex, shown in Fig. 10 (c), plays a significant role. Its strong T-dependence in Fe-based superconductors has been pointed out in Refs. [20,21,51,52].
Here, we calculate the three-point vertex for LaFeAsO and FeSe models for wide temperature range with high numerical accuracy, using 512×512 k-meshes and ∼ 2048 Matsubara frequencies. Figure 11 shows the square of the three-point vertex for d xz -orbital Λ 2 (0; Q) ≡ Λ 2,2;2,2;2,2 (0; Q) for T ≥ 10 meV. In both LaFeAsO and FeSe models, the coefficient a of |Λ 2 (0; Q)| 2 ∝ T a depends on the temperature range. In both models, a ≈ 1 for T = 20 ∼ 100meV, so the numerical result in Fig. 10 (c) is confirmed by this accurate calculation. When the band renormalization due to z < 1 is considered, the relation a ≈ 1 is realized for T = 20z ∼ 100z [meV].
As shown in Fig. 5 (b), the value of $\alpha^{cr}_S$ remains small ($\sim 0.9$) in the FeSe TB model with $\hat{H}^U_{\rm BaFe_2As_2}$ ($\bar{J}/\bar{U}$ = 0.12) or with $\hat{H}^U_{\rm LaFeAsO}$ ($\bar{J}/\bar{U}$ = 0.134). In each case, the obtained T-dependences of $S_S$ and $S_C$ are qualitatively similar to those shown in Fig. 3 (c). Therefore, the main results of the present study are unchanged even if $\bar{J}/\bar{U}$ in FeSe is slightly larger than 0.1.
"Physics"
] |
Measuring Employee Performance Key Indicators by Fuzzy Petri Nets
The aim of this study is to devise a system based on fuzzy Petri nets for measuring employee performance. Fuzzy Petri net models are very helpful for specifying expert systems with an imprecise description of rules. Much research has been done on measuring human resources based on features such as performance indicators generated in the workplace. Such features are inherently challenging to quantify as they are highly subjective and imprecise in nature. Concurrent and reliable systems can be realized or specified using Petri nets. Hence, in this study, given these limitations, we focus on establishing a method for constructing a fuzzy Petri net for the domain of human performance.
INTRODUCTION
Employee performance measurement systems help to measure, evaluate and reward managers and employees. In the literature, performance measurement is mostly discussed in relation to lower, middle and higher management. The group which is missing is the employees. They especially need measures that are understandable and motivating for achieving their targets.
Earlier research shows that performance measurement, when well implemented, helps to motivate managers. In this research the focus is on the employees, and it is found that performance measurement systems, when implemented, help to improve the quality of work for employees. They bring more interaction between managers and employees. The company goals and job expectations become clearer to employees. Psychological commitment is increased by using a performance measurement system. Moreover, it motivates and fosters a more dynamic work culture. From the perspective of the employees, it helps to increase their quality of working.
Performance measurement can be helpful for the following:
• To improve the company's productivity
• To make informed personnel decisions regarding promotion, job changes and termination
• To identify what is required to perform a job (goals and responsibilities of the job)
• To assess an employee's performance against these goals
Hence, in order to implement such a tool, in this study we establish the construction of fuzzy Petri nets for this context.
MATERIALS AND METHODS
Fuzzy Petri nets - a short introduction: Petri Nets (PN) are a graphical and mathematical modeling tool applicable to many systems. They are promising tools for describing and studying information processing systems that are characterized as being concurrent, asynchronous, distributed, parallel, nondeterministic and/or stochastic (Murata, 1989).
A Fuzzy Petri Net model (FPN) (He et al., 1999; Shen, 2006) is a Petri net having places and transitions, where places are denoted by circles and transitions by rectangles. Each place represents an antecedent or consequent and may or may not contain a token associated with a truth degree between zero and one that represents the degree of belief in the truth of the antecedent or consequent. Each transition represents a rule and is associated with a certainty factor value between zero and one. The certainty factor represents the strength of the belief in the rule. The relationships between places and transitions are represented by directed arcs (edges); arcs exist only between places and transitions and vice versa. The formal definition is given below, following (Kouzehgar et al., 2011) and (Liu et al., 2008): µ : T → [0, 1] is an association function, a mapping from transitions to [0, 1], i.e., the certainty factor; α : P → [0, 1] is an association function, a mapping from places to [0, 1], i.e., the truth degree; β : P → D is an association function, a mapping from places to propositions. Mapping the rule base to FPN: In this mapping technique, each rule is represented as a transition with its corresponding certainty factor, each antecedent is represented by an input place, and the consequents are represented by an output place with corresponding truth degrees. In this representation, a transition (i.e., a rule) is enabled to fire if all of its input places have a truth degree equal to or above a predefined threshold value. After firing the rule, the output place receives a truth degree equal to the input place truth degree multiplied by the transition certainty factor.
Typically, a collection of transitions followed by a collection of places constitutes a layer. An l-layered Petri net thus holds l layers of transitions followed by places, plus an additional layer consisting of places only. The places in the last layer are known as closing places. Such a system has two types of benefits. First, it can represent imprecise knowledge like ordinary FPNs. Second, the system can be trained with a collection of input-output examples (as in a feed-forward neural network).
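The mapping and firing rule described above can be summarized in a short Python sketch; the place and rule names, the threshold value and the use of min() to combine several antecedents are illustrative assumptions rather than details fixed by this study.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    """One fuzzy production rule: IF all input places THEN output place, with a certainty factor."""
    inputs: list          # names of antecedent places
    output: str           # name of the consequent place
    cf: float             # certainty factor mu(t) in [0, 1]

def fire(tokens, transitions, threshold=0.5):
    """One forward pass over a layer of transitions.

    tokens: dict mapping place name -> truth degree alpha(p) in [0, 1].
    A transition fires if every input place carries a truth degree >= threshold;
    the output truth degree is min(inputs) * cf (min is an assumed way to combine
    several antecedents; the text above describes the single-input case).
    """
    out = dict(tokens)
    for t in transitions:
        degrees = [tokens.get(p, 0.0) for p in t.inputs]
        if degrees and min(degrees) >= threshold:
            new = min(degrees) * t.cf
            out[t.output] = max(out.get(t.output, 0.0), new)   # keep the strongest support
    return out

# Level-1 example with hypothetical truth degrees and certainty factor:
tokens = {"Q1": 0.9, "Q2": 0.8, "Q3": 0.6, "Q4": 0.9}
rules = [Transition(inputs=["Q1", "Q2", "Q3", "Q4"], output="Work engagement", cf=0.9)]
print(fire(tokens, rules))   # adds 'Work engagement': 0.54
```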
Celebrity fashions limited is one of India's consummate garments exporters with the capability to manufacture the largest number of trousers in the industry.The company has their own national premier men's wear brand, Indian terrain.The company has two subsidiaries namely Indian terrain fashions Ltd and Celebrity clothing Ltd.Our survey is based on the Poonamallee branch (Chennai) of celebrity fashions ltd.It has 1,000 employees working on it.Celebrity fashions continuously upgrade its facilities to set new benchmarks in the garment manufacturing industry by always keeping to its quality and time commitments.
We conducted the survey successfully and collected the data as planned; we partitioned the data into 22 parts obtained from the answers. We also categorized the data according to the nature of the answers into input and internal properties. This is described in the following subsection.
The internal properties:
The internal properties of the framework are defined on the basis of groupings of the input properties:
• The input properties Q1 to Q4 form an internal property called "Work engagement".
• The input properties Q5 to Q7 form an internal property called "Service environment of my organization".
• The input properties Q8 to Q10 form an internal property called "Job satisfaction".
• The input properties Q11 to Q14 form an internal property called "Personal attachment to my organization".
• The input properties Q15 to Q18 form an internal property called "Reward from my job and organization".
• The input properties Q19 to Q22 form an internal property called "Relationship with my superior".
In effect, we have a two-level fuzzy inference.
Level one deduces the internal properties; level two deduces the employee's performance based on the internal and input properties.
The corresponding Petri net model is illustrated in Fig. 2. In the Petri net model, according to the proportions dedicated to each place, transitions 1 to 10 respectively represent rules 1 to 10 in the introduced rule base above and firing each transition means the corresponding rule is fulfilled.
RESULTS AND DISCUSSION
In order to fulfill the rule base verification phase, we must first map the rule base to a Petri net as shown in Fig. 2. Then, as with the algorithm mentioned in (He et al., 1999; Yang et al., 2003), a special reach-ability graph is generated on the basis of the concept of ω-nets. In this reach-ability graph, first, a zero vector with length equal to the number of places is defined as the root node. Then, at any current marking, the enabled transitions are determined among the transitions not yet considered. At every step, by firing the set of enabled transitions, a new node is added to the graph in which the corresponding elements of the node vector (the places which are filled after firing the transitions) are set to ω, which is treated as an arbitrarily large value. In this manner, at every step there is a new marking. If firing the transitions at a step results in a repeated marking, the graph will have a loop.
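The construction just described can be sketched in a few lines of Python; the tiny example net at the bottom is hypothetical and much smaller than the 43-place model of Fig. 2.

```python
def build_reachability(n_places, initially_true, transitions):
    """Generate the special reachability graph described above.

    A marking is a vector over places; OMEGA marks a filled place. Starting from the
    root marking (initially TRUE antecedent places set to OMEGA), at every step all
    not-yet-fired transitions whose input places are all filled are fired together,
    their output places are set to OMEGA, and the new marking becomes a new node.
    A repeated marking would close a loop (not reached in this small example).
    """
    OMEGA = "w"
    marking = [0] * n_places
    for p in initially_true:
        marking[p] = OMEGA
    nodes, fired = [tuple(marking)], set()
    while True:
        enabled = [i for i, (ins, out) in enumerate(transitions)
                   if i not in fired and all(marking[p] == OMEGA for p in ins)]
        if not enabled:
            return nodes
        for i in enabled:
            fired.add(i)
            marking[transitions[i][1]] = OMEGA
        node = tuple(marking)
        if node in nodes:          # repeated marking -> loop in the graph
            return nodes
        nodes.append(node)

# Tiny illustration with hypothetical indices (3 antecedent places feeding 2 layered rules):
trs = [((0, 1), 3), ((2, 3), 4)]              # (input places, output place)
graph = build_reachability(5, [0, 1, 2], trs)
print(len(graph), "markings generated")       # 3 markings: root, after T1, after T2
```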
The corresponding reach-ability graph for the above Petri net model is shown in Fig. 3. The places P0 to P28, P32, P35, P38 and P40 are regarded as TRUE antecedents and are therefore initially filled (set to ω); this is why the initial node contains thirty-three ω's. At this marking, transitions T1, T2, T3, T4, T5, T6, T7 and T8 are enabled. After firing these transitions, in the second step, places P29, P30, P31, P33, P34, P36, P37 and P39 are filled and the corresponding values in the node vector are set to ω. In the final step, by firing T9 and T10 (the enabled transitions), the places P41 and P42 are filled.
CONCLUSION
A system based on fuzzy Petri nets has been constructed and verified for an instance involving a corporate context, Celebrity Fashions Limited. This can be extended with other key performance indicators.
Work engagement:
• At my work I feel energetic.
• My job inspires me.
• I am enthusiastic about my job.
• At my job I feel strong and vigorous.
Service environment of my organization:
• My organization does a good job keeping customers informed of changes that affect them.
• I understand management vision of my organization.
• Managers in my organization are very committed to improving the quality of work.
Job satisfaction:
• All in all am satisfied with my job.
• In general I like working at my organization.
• In general I do not like my job.
Personal attachment to my organization:
• I am proud to tell others I work at my organization.
• I feel strong sense of belonging to my organization.
• Working at my organization means a great deal to me personally.
• I really feel that problems faced by my organization are also my problems.
Reward from my job and organization:
• When I do my work gives me a feeling of my accomplishment.
• When I perform my job well it contributes to my personal growth and development.
• When I do my work well receive a higher salary or pay rise.
• When I do my work well receive a higher bonus or rewards.
Relationship with my superior:
• My working relationship with my superior is effective.
• My superior considers my suggestion for change.
• My superior and I are well suited to each other.
• My superior recognizes my potential.
Fig. 1: The decision model
In the above structure, the Employee Performance Model (EPM) is a 5-tuple comprising the Input Property Set (IPS), the Internal Property Set (InPS), the Output Property Set (OPS) and the Rule Set (RS). Q1 to Q22 represent Questions 1 to 22 as input properties. Work engagement, Service environment of my organization, Job satisfaction, Personal attachment to my organization, Reward from my job and organization, and Relationship with my superior are the internal properties; vl, l, m, h and vh represent the linguistic values very low, low, medium, high and very high, respectively. In each rule, the second component gives the antecedent, the third component gives the consequent, and the last number gives the certainty factor attached to the rule. For instance, Rule 1 is EPM.R1 = Q1(vh) ∧ Q2(h) ∧ Q3(m) ∧ Q4(vh) → Work engagement(vh): if Q1 is very high, Q2 is high, Q3 is medium and Q4 is very high, then the work engagement is very high.
"Computer Science",
"Business"
] |
Immune receptors with exogenous domain fusions form evolutionary hotspots in grass genomes
Understanding evolution of plant immunity is necessary to inform rational approaches for genetic control of plant diseases. The plant immune system is innate, encoded in the germline, yet plants are capable of recognizing diverse rapidly evolving pathogens. Plant immune receptors (NLRs) can gain pathogen recognition through point mutation, recombination of recognition domains with other receptors, and through acquisition of novel ‘integrated’ protein domains. The exact molecular pathways that shape immune repertoire including new domain integration remain unknown. Here, we describe a non-uniform distribution of integrated domains among NLR subfamilies in grasses and identify genomic hotspots that demonstrate rapid expansion of NLR gene fusions. We show that just one clade in the Poaceae is responsible for the majority of unique integration events. Based on these observations we propose a model for the expansion of integrated domain repertoires that involves a flexible NLR ‘acceptor’ that is capable of fusion to diverse domains derived across the genome. The identification of a subclass of NLRs that is naturally adapted to new domain integration can inform biotechnological approaches for generating synthetic receptors with novel pathogen ‘traps’.
INTRODUCTION
Plants have powerful defence mechanisms, which rely on an arsenal of plant immune receptors (Jones, Vance and Dangl, 2016;Dodds and Rathjen, 2010). The Nucleotide Binding Leucine Rich Repeat (NLR) proteins represent one of the major classes of plant immune receptors.
Plant NLRs are modular proteins characterized by a common NB-ARC domain similar to the NACHT domain in mammalian immune receptor proteins (Jones, Vance and Dangl, 2016). On the population level, NLRs provide plants with enough diversity to keep up with rapidly evolving pathogens (Hall et al., 2009;Joshi et al., 2013).
With over 50 fully sequenced plant genomes today, it is timely to apply comparative genomics approaches to investigate common trends in NLR evolution across the plant kingdom, including key crop species.
In contrast to the highly conserved NB-ARC domains, the Leucine Rich Repeats (LRRs) of NLRs show high variability (Noel et al., 1999;Jacob, Vernaldi and Maekawa, 2013). The functional consequence of high LRR variation is thought to be the generation of novel recognition specificities (Bakker et al., 2006;Sukarta, Slootweg and Goverse, 2016). In addition, recent findings show that novel pathogen recognition specificities can also be acquired through the fusion of non-canonical domains to NLRs (Le Roux et al., 2015;Kroj et al., 2016). These exogenous domains can serve as 'baits' mimicking host targets of pathogen-derived effector molecules and therefore act in concert with LRR variation to broaden the spectra of recognised pathogenderived effectors (Cesari, Bernet al., 2014a;Cesari et al., 2014b;Le Roux et al., 2015).
NLRs plant immune receptors were discovered over 20 years ago through cloning of plant disease resistance genes in Arabidopsis (Mindrinos et al., 1994;Bent et al., 1994). Sequencing of the Arabidopsis genome allowed annotation of the NLR repertoire based on a genomewide scan for the conserved NB-ARC domain that subsequently revealed common and non-canonical NLR architectures. Application of this method to newly sequenced plant genomes has revealed common principles in NLR composition. Additionally, genome scans have contributed to our understanding of the genome-wide architecture of NLRs, including a tendency for NLRs to form major resistance clusters (Christopoulou et al., 2015;Christie et al., 2016). The relatively poor quality of assembled genome sequence in repetitive regions has hampered accurate identification and annotation of NLR genes, which are present at high copy number in the genome and also encode repetitive LRR domains. To overcome this problem, a method called resistance gene enrichment sequencing was developed (Jupe et al., 2013;Witek et al., 2016;Andolfo et al., 2014); it involves enrichment of NLRs from genomic or transcribed DNA and enables their accurate assembly. The identification of NLRs across plant genomes using uniform computational methods, such as scanning genomes with Hidden Markov Models (HMMs) for the NB-ARC domain, has allowed the NLR repertoire to be compared across species (Sarris et al., 2016;Kroj et al., 2016;Yue et al., 2016). This has led to identification of plant families with a significantly expanded or reduced number of NLRs (Sarris et al., 2016;Kroj et al., 2016;Zhang et al., 2016) and the identification of co-evolutionary links between NLR diversification and their regulation by miRNAs (Zhang et al., 2016). Comparative genomics analyses also revealed that formation of NLRs with non-canonical architectures is common across flowering plants (Sarris et al., 2016;Kroj et al., 2016).
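As an illustration of such an NB-ARC-based genome scan, the sketch below wraps HMMER's hmmsearch and counts proteins with a significant NB-ARC (Pfam PF00931) domain hit; the file names, output path and E-value cutoff are placeholders, and this is only a schematic of the kind of pipeline described above, not the published code.

```python
import subprocess
from collections import defaultdict

def nbarc_hits(proteome_fasta, hmm_file="NB-ARC.hmm", evalue=1e-5):
    """Count proteins carrying an NB-ARC domain using hmmsearch (HMMER3).

    The NB-ARC profile can be taken from Pfam (PF00931). Column 13 (index 12)
    of the --domtblout format is the per-domain independent E-value.
    """
    out = "nbarc.domtblout"
    subprocess.run(["hmmsearch", "--domtblout", out, "--noali",
                    hmm_file, proteome_fasta],
                   check=True, stdout=subprocess.DEVNULL)
    hits = defaultdict(int)
    with open(out) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            cols = line.split()
            if float(cols[12]) < evalue:       # independent E-value threshold
                hits[cols[0]] += 1             # cols[0] = target (protein) name
    return hits

# Usage sketch: candidate NLR loci are the keys of the returned dict,
# e.g. hits = nbarc_hits("Brachypodium_proteins.fa")
```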
The NLR copy number variation identified in genomic and RenSeq scans of different plant genomes has been attributed to the birth-and-death process of gene evolution (Michelmore, Meyers and Young, 1998). The mechanisms by which new NLR genes are created, and upon which selection can act, remain elusive. The prevailing consensus holds that NLR diversity is likely to be generated through a variety of mechanisms including duplication, unequal crossing over, non-homologous (ectopic) recombination, gene conversion and transposable elements (Jacob, Vernaldi and Maekawa, 2013; Vogel et al., 2010; Choulet et al., 2010; Wicker et al., 2016) (Figure 1A). One hotspot clade was particularly enriched in NLR-ID proteins (59% are NLR-IDs) compared to 8% of proteins with NLR-IDs across all clades (Figure 1A, hotspot 1, highlighted in red). This clade was found to be nested within an outer clade (Figure 1A, highlighted in blue) with only 0 to 14% of proteins containing NLR-IDs. These two clades include proteins representative of all the studied grass species with the exception of Z. mays (Figure 1E). Therefore, we predict that this hotspot clade originated before the split of Panicodae, Ehrhartoidae and Pooidae (BEP and PACCMAD clades) from the rest of the Poaceae 60 MYA (Vogel et al., 2010). Supporting our hypothesis, an outer ancestral clade was apparent (Figure 1A). It is also clear that NLR(-ID) protein duplication has proliferated most strongly in these species for this hotspot clade (Figure 1E). However, the relative ratio of NLRs with and without extra domains in this clade has remained relatively constant at around 59%, suggesting that the rate of domain recycling has been constant across these species (Figure 1B; Supplemental Table 1).
Two other major NLR-ID hotspots were also investigated. To further understand the evolution of ID fusions, the section of the tree in Figure 1 corresponding to hotspot 1 and the associated outer and ancestral clades was re-aligned and analyzed by maximum likelihood phylogeny (Figure 3A).
Genomic locations involved in proliferation and diversification of NLR-IDs
We observed that NLRs from the hotspot clade were found on different chromosomes across and within species. For five species analyzed in this study, the chromosomal location of NLR-IDs was available from the genome annotation. We looked to see whether there was any enrichment of NLR-IDs from the hotspot clade on any particular chromosome and investigated whether these inter-species differences could be explained by whole-genome rearrangements during evolution (Table 2; Salse et al., 2008; Clavijo et al., 2016). This indicates that the proliferation of NLR-IDs in Triticeae might be linked to greater plasticity of its genome. Since some of the larger translocations in wheat occurred after the formation of NLR-ID hotspot 1, it is also possible that interactions across members of the NLR-ID locus contribute to larger genomic rearrangement events.
When we examined orthologous NLRs located on different wheat sub-genomes, we identified rapid local
Possible mechanisms driving NLR-ID diversification
Any mechanism that creates gene fusions requires a move or a copy and paste event of an exogenous gene from one location to another. Since NLRs from the hotspot clade are mostly found at syntenic locations, yet harbour diverse fusions, it is most likely that these NLRs act as hotspot 'acceptors' for exogenous genes to create NLR-IDs rather than move themselves. We observed that the overall number of NLRs in the hotspot increases proportionally to the total increase of NLRs in the genome. Therefore, we hypothesize that duplication of (Leister et al., 1998), or alternatively by local activity of transposable elements and endogenous DNA repair machinery as has been previously documented for other types of gene duplications in cereals (Wicker, Buchmann and Keller, 2010).
In the future, the availability of higher quality genome assemblies as well as multiple genomes for each species will allow more detailed analyses of syntenic gene clusters and will identify precise location of DNA
Identification of NLRs and NLR-IDs in plant genomes
NLR plant immune receptors were identified in nine monocot species by the presence of the common NB-ARC domain (Pfam PF00931) as described previously (Sarris et al., 2016). The T. aestivum (TGAC v1) and A. tauschii (ASM34733v1) genomes were downloaded from EnsemblPlants and analyzed using the same pipeline as before (Sarris et al., 2016). All up-to-date scripts are available from https://github.com/krasilevagroup/plant_rgenes.
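The authors' full annotation scripts live in the GitHub repository cited above; the snippet below is only a hypothetical, minimal sketch of the NB-ARC screening step described here (it is not their pipeline). It assumes hmmsearch has already been run against each proteome with --domtblout output, and it simply collects protein identifiers whose PF00931 domain hits pass an E-value cutoff.

```python
# Hypothetical sketch of the NB-ARC (Pfam PF00931) screening step; this is NOT
# the authors' pipeline (see https://github.com/krasilevagroup/plant_rgenes).
# Assumes:  hmmsearch --domtblout nbarc_hits.domtbl NB-ARC.hmm proteome.fasta
def nbarc_positive_proteins(domtbl_path, evalue_cutoff=1e-5):
    """Return protein IDs with at least one NB-ARC domain hit below the cutoff."""
    hits = set()
    with open(domtbl_path) as fh:
        for line in fh:
            if line.startswith("#"):
                continue
            fields = line.split()
            target_name = fields[0]        # protein identifier
            i_evalue = float(fields[12])   # per-domain independent E-value
            if i_evalue <= evalue_cutoff:
                hits.add(target_name)
    return sorted(hits)

if __name__ == "__main__":
    candidates = nbarc_positive_proteins("nbarc_hits.domtbl")
    print(f"{len(candidates)} NB-ARC-containing proteins")
```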
Phylogenetic Analysis
Supplemental Table 3: Genomic locations of all NLRs and NLR-IDs present in the tree in Figure 1A, anchored to the genetic map of T. aestivum CS42.
| 2,053.8 | 2017-01-01T00:00:00.000 | [ "Biology" ] |
Mechanism Analysis of Hidden Chaos in a Generalized Vijayakumar System
In order to further discover the hidden chaotic attractor and its generating mechanism in the Vijayakumar system, we give a generalized system showing hidden chaotic attractors which do not arise from homoclinic or heteroclinic orbits, and consider Hopf bifurcations (codimension one and two) by means of the first and second Lyapunov coefficients. The existence of periodic orbits is strictly proved theoretically. We have considered the problem of Hopf bifurcation in the chaotic system with hidden attractors, which will be helpful in revealing the intrinsic relationship between the local stability of equilibria and the global complex dynamical behaviors of the chaotic system. Finally, numerical simulations are presented to confirm the correctness of the theoretical results.
Introduction
The discussion of chaos is very interesting and important for nonlinear theory. The research of related subjects has a long history. Recently, attractors have been classified into two different types: self-excited or hidden [1][2][3][4][5][6][7][8]. Up to now, hidden chaotic attractors in 3D autonomous chaotic systems have been found with only one stable equilibrium [9,10], without equilibrium [11,12], and with infinitely many equilibria [13][14][15], which makes the topology of these chaotic systems different from that of the well-known 3D autonomous chaotic systems. Moreover, multistability allows flexibility of systems without changing parameter values and can be used to adjust control strategies and parameters to induce switching between different coexisting attractors. The self-excited attractor has an attracting basin associated with an unstable equilibrium [1][2][3][4][5]. In particular, many researchers are interested in studying periodic solutions [16][17][18][19]. Zoldi and Greenside [20] found that unstable periodic orbits are an important factor for chaos and examined the statistical correspondence between chaos and periodic orbits. Kawahara and Yamada [21] confirmed that unstable periodic orbits give rise to the classical Couette turbulent structure. The connection number between these orbital pairs plays an irreplaceable role in the generation of chaos. Therefore, the existence of periodic orbits is very important for hidden chaos. In recent years, scholars have begun to consider the complex dynamics of hidden chaos. This shows that multistability is an important feature of many nonlinear problems. However, some deep-seated hidden complex behaviors have not been thoroughly studied in many realistic chaotic systems [8,16,22].
As we know, there are still abundant and complex dynamical behaviors, and the topological structure of the hidden chaos should be thoroughly investigated and exploited. In particular, the generation mechanism of hidden chaotic attractors has always been a scientific problem of great concern. We will consider the coexistence of unstable periodic orbits and hidden chaotic attractors through Hopf bifurcation analysis. We study all possible bifurcations (general bifurcations and degenerate bifurcations) via the Lyapunov coefficients of the Hopf bifurcation. More precisely, the first Lyapunov coefficient l1 is obtained over the parameter space, which indicates the possibility of a codimension-two branch, and the second Lyapunov coefficient l2 is then calculated. In addition, unstable periodic solutions can be obtained from the bifurcation and help us better understand and reveal an intrinsic relationship between the global dynamical behaviors and the stability of the equilibrium point, especially for hidden chaotic attractors.
Based on the hidden chaotic attractors [23,24], we design a new oscillator with coexisting hidden chaos, limit cycles, and point attractors. In Section 2, a generalized system showing hidden chaotic attractors is given. In Section 3, the Hopf bifurcation methods for codimensions one and two are presented, in particular how to obtain the Lyapunov coefficients related to the stability of the equilibrium. In Section 4, the existence of periodic orbits arising from the Hopf bifurcation is established. Finally, in Section 5, we give some concluding remarks and discuss future work.
The New Chaotic System with Hidden Chaos
Based on the three-dimensional autonomous system proposed by Vijayakumar et al. [23], we give the generalized system (1), where a and d are positive constants and b and k are arbitrary real constants. If a = 1, b = 0, d = 2.3, system (1) reduces to the three-dimensional autonomous system proposed by Vijayakumar et al. [23], for which only numerical results were reported. Now, we want to explain theoretically why the hidden chaos can be found. Here, by choosing the parameter values k = -0.02, d = 2.3 and using certain numerical methods [25,26], the system exhibits different kinds of chaotic attractors (see Figure 1, Tables 1 and 2). The characteristic polynomial of system (1) at the unique equilibrium E(k, 0, 0) is given by (2). If system (1) is to undergo a Hopf bifurcation, the parameters should satisfy a + b + k = 1. In order to study the generation of hidden chaos, we will consider b and a as bifurcation parameters, respectively. Proposition 1. If we choose b as the bifurcation parameter and set b = 1 - a - k in system (1), the characteristic values of E(0, 0, 0) have one negative real eigenvalue -1 and a pair of purely imaginary eigenvalues $\pm\sqrt{a}\,i$. Similarly, if we choose a as the bifurcation parameter and set a = 1 - b - k in system (1), the characteristic values of E(0, 0, 0) have one negative real eigenvalue -1 and a pair of purely imaginary eigenvalues.
Framework of the Hopf Bifurcation Methods
The Hopf bifurcation methods mainly follow [19, 27-30]. Consider the system (3), where $X \in \mathbb{R}^3$, $\mu \in \mathbb{R}^4$, and $f$ is of class $C^\infty$. Suppose that (3) has an equilibrium point $X = X_0$ at $\mu = \mu_0$; denoting the shifted variable $X - X_0$ again by $X$, we write the Taylor expansion (4), where $A = f_X(0, \mu_0)$ and, for $i = 1, 2, 3$, the multilinear symmetric functions $B$ and $C$ collect the quadratic and cubic terms. Suppose $\lambda_{2,3} = \pm i w_0$ ($w_0 > 0$) are a pair of complex eigenvalues. Let $p, q \in \mathbb{C}^3$ be vectors such that $Aq = i w_0 q$, $A^T p = -i w_0 p$ and $\langle p, q\rangle = 1$, where $A^T$ represents the transposed matrix $A$. Any vector $y \in T^c$ can be written as $y = w q + \bar{w}\bar{q}$, where $w = \langle p, y\rangle \in \mathbb{C}$. The 2D center manifold associated with the eigenvalues $\lambda_{2,3}$ can be parameterized by $w$ and $\bar{w}$. Through an immersion of the form $X = H(w, \bar{w})$, where $H: \mathbb{C}^2 \to \mathbb{R}^3$ has a Taylor expansion with coefficients $h_{jk} \in \mathbb{C}^3$ and $h_{jk} = \bar{h}_{kj}$, one obtains the reduced equation on the center manifold, where $F$ is given by (4). Working in the chart $w$ of the center manifold, we obtain the normal-form coefficient $G_{21} \in \mathbb{C}$, in terms of which the first Lyapunov coefficient $l_1$ can be given; with $G_{32} = \langle p, H_{32}\rangle$, the second Lyapunov coefficient $l_2$ is given analogously. When the first Lyapunov coefficient $l_1 \neq 0$, the dynamical behavior of system (3) is orbitally topologically equivalent to the codimension-one Hopf normal form. If $l_1 < 0$ ($l_1 > 0$), we find a codimension-one Hopf point with stable (unstable) periodic orbits on this manifold. Moreover, when $l_1 = 0$, we can further consider a Hopf point of codimension two. When $l_2 \neq 0$, the dynamical behavior of system (3) is orbitally topologically equivalent to the codimension-two Hopf normal form, where $\eta$ and $\tau$ are unfolding parameters.
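The displayed formulas for l1 and l2 did not survive extraction here. For orientation only, the sketch below implements the standard Kuznetsov projection formula for the first Lyapunov coefficient under the conventions just stated (Aq = i w0 q, A^T p = -i w0 p, <p, q> = 1); it is a generic numerical routine, not a transcription of this paper's expressions, and the Jacobian A, the frequency w0 and the multilinear terms B and C of the particular system must be supplied by the user.

```python
# Generic numerical sketch of Kuznetsov's projection formula for the first
# Lyapunov coefficient at a Hopf point.  This is NOT this paper's displayed
# expression (lost in extraction); it assumes the standard convention
# A q = i w0 q, A^T p = -i w0 p, <p, q> = 1.
import numpy as np

def first_lyapunov_coefficient(A, B, C, w0):
    """A: Jacobian at the equilibrium; B, C: bilinear/trilinear terms as callables."""
    n = A.shape[0]
    vals, vecs = np.linalg.eig(A)
    q = vecs[:, np.argmin(np.abs(vals - 1j * w0))]          # A q = i w0 q
    vals_t, vecs_t = np.linalg.eig(A.T)
    p = vecs_t[:, np.argmin(np.abs(vals_t + 1j * w0))]      # A^T p = -i w0 p
    p = p / np.conj(np.vdot(p, q))                          # enforce <p, q> = 1
    qb = q.conj()
    t1 = np.vdot(p, C(q, q, qb))
    t2 = -2.0 * np.vdot(p, B(q, np.linalg.solve(A, B(q, qb))))
    t3 = np.vdot(p, B(qb, np.linalg.solve(2j * w0 * np.eye(n) - A, B(q, q))))
    return float((t1 + t2 + t3).real) / (2.0 * w0)
```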
Hopf Bifurcation and Hidden Chaos in New System
Using the notation of Section 3, we can write the multilinear symmetric functions of system (1).
Hopf Bifurcation about Parameter a
Theorem 1. For system (1) with a = a0 = 1 - b - k, the first Lyapunov coefficient l1 at the equilibrium E can be computed explicitly. If l1 > 0, the Hopf point at E is a weak repelling focus, and an unstable limit cycle can be found near the asymptotically stable equilibrium E for each a < a0 = 1 - b - k but close to a0; if l1 < 0, the Hopf point at E is a weak attracting focus, and a stable limit cycle can be found near the unstable equilibrium E for each a > a0 = 1 - b - k but close to a0.
Proof. Considering a as the bifurcation parameter, the transversality condition is met. The first Lyapunov coefficient l1 determines the stability of the equilibrium point E and of the periodic orbits generated from the Hopf bifurcation.
In addition, one can also compute the remaining quantities; then the corresponding value follows, and therefore the results of Theorem 1 are obtained. Now, we continue to study the influence of the Hopf bifurcation on hidden chaos. We choose the parameters b = 0.02, k = -0.02, d = 2.3 from the work in [22]. According to Theorem 1, we have a0 = 1 and l1 = 1.1014 > 0, so E is an unstable point. An unstable periodic solution is obtained near the stable equilibrium point for a = 0.95 (see Figure 2). The result shows the generation of hidden chaos with a stable equilibrium point (see Table 1). □ Remark 1. Now, we let b = 0.02, d = 2.3 and obtain the coefficient l1 as a function of k. In addition, we know that k < 1 - b (i.e., k < 0.98) from a > 0. Therefore, if k ∈ {k | k < 0.9606}, the first coefficient l1 is positive and an unstable periodic orbit can be obtained. If k ∈ {k | 0.9606 < k < 0.98}, the first coefficient l1 is negative and a stable periodic orbit can be obtained. Now, we will consider the sign of the second Lyapunov coefficient l2 when l1 = 0 with k = 0.9606.
Theorem 3. For system (1) with b = b0 = 1 - a - k, the first Lyapunov coefficient l1 at the equilibrium E can be computed explicitly. If l1 > 0, the Hopf point at E is a weak repelling focus, and an unstable limit cycle near the asymptotically stable equilibrium E can be found for each b < b0 = 1 - a - k but close to b0; if l1 < 0, the Hopf point at E is a weak attracting focus, and a stable limit cycle near the unstable equilibrium E can be found for each b > b0 = 1 - a - k but close to b0.
Proof. Here, we have the corresponding expressions, and the complex coefficient G21 defined in Section 3 is computed accordingly. We then obtain Theorem 3 from the first Lyapunov coefficient l1. Consider system (1) with a = 1, d = 2.3, k = -0.02. The first Lyapunov coefficient associated with the equilibrium E is 1.1014. Then, the equilibrium E undergoes a transversal Hopf bifurcation when b = b0 = 0.02. More specifically, when b = 0.015 < b0, but near b0, there exists an unstable limit cycle around the asymptotically stable equilibrium E (see Figures 3(a) and 3(b)). The result heralds the emergence of hidden chaos (see Figure 3(c)).
Conclusion
In this paper, Hopf bifurcations in a generalized chaotic system with hidden chaos are obtained theoretically.
Through this analysis, we obtain the parameter conditions for which the system presents Hopf bifurcations. Then, we extend the analysis to the more degenerate cases. The calculation of the first and second Lyapunov coefficients, which makes possible the determination of the Lyapunov stability at the equilibria, allows the system to exhibit Hopf bifurcation in a much larger parameter region. The first and second Lyapunov coefficients are obtained for exhibiting Hopf bifurcation and showing periodic orbits in the parameter region. In addition, numerical simulations for several parameter values are carried out to illustrate and verify the analytical results. The cascade of period-doubling bifurcations and the existence of hidden attractors are related to the Hopf bifurcation at the equilibrium point in a certain sense.
This interesting phenomenon is worth further study, both theoretically and experimentally, to further reveal the intrinsic relationship between the local stability of the equilibrium and the global complex dynamical behaviors of the chaotic system. In addition, a fractional-order version of the chaotic system can be designed using integrated circuit technology [31][32][33][34][35][36][37], as required in wireless systems. It is expected that a more detailed theoretical analysis will be carried out in a forthcoming paper.
Data Availability
All data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
| 2,886.8 | 2022-09-13T00:00:00.000 | [ "Physics" ] |
Casimir force in critical ternary polymer solutions
Consider a mixture of two incompatible polymers A and B in a common good solvent, confined between two parallel plates separated by a finite distance L. We assume that these plates strongly attract one of the two polymers close to the consolute point (critical adsorption). The plates then experience an effective force resulting from strong fluctuations of the composition. To simplify, we suppose that either the plates have the same preference to attract one component (symmetric plates) or they have opposed preferences (asymmetric plates). The force is attractive for symmetric plates and repulsive for asymmetric ones. We first compute the force exactly using the blob model, and find that the attractive and repulsive forces decay in the same way, as L^{-4}. To go beyond the blob model, which is a mean-field theory, and in order to get a correct induced force, we apply the Renormalization-Group to a φ^4-field theory (φ is the composition fluctuation), with two suitable boundary conditions at the surfaces. The main result is that the expected force is the sum of two contributions. The first one is the mean-field contribution decaying as L^{-4}, and the second one is the force deviation originating from strong fluctuations of the composition that decreases rather as L^{-3}. This implies the existence of some cross-over distance L* ∼ aNφ^{1/2} (a is the monomer size, N is the polymerization degree of the chains and φ is the monomer volume fraction), which separates two distance regimes. For small distances (L < L*), the mean-field force dominates, while for large distances (L > L*) the fluctuation force is more important.
Introduction
Consider a binary liquid made of two components A and B of different chemical nature, which is in contact with a solid wall. Close to the consolute point T_c, one of the two components prefers to be condensed near the wall. This is critical adsorption, which has been the subject of numerous theoretical and experimental studies [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. Also, critical adsorption has been observed for a one-component fluid near the critical point, which is in contact with an attractive wall [16].
From a theoretical point of view, the first formulation is due to Fisher and de Gennes [7]. Using a scaling argument, the authors show, in particular, that far from the surface (distal regime) and at the critical point, the composition fluctuation profile of the preferred component, φ(z), decays with the distance z from the wall according to a universal law, that is φ(z) ∼ z^{-βt/νt}. Here, βt ≈ 0.325 and νt ≈ 0.63 are the standard bulk critical exponents of Ising-like magnetic materials [17,18]. To take into account the effect of the wall, one introduces a surface field h, which is proportional to the difference of the surface chemical potentials, µ^s_A and µ^s_B, of the two components, that is h ∼ (µ^s_A − µ^s_B)/k_B T. Near the critical point, the profile obeys the scaling law φ(z) = z^{-βt/νt} f(z/ξt), where ξt ∼ a|1 − Tc/T|^{-νt} is the thermal correlation length (a is an atomic scale) and f(x) is a universal scaling function. The latter was calculated to first order in ε = 4 − d (4 is the critical dimension) by Brézin and Leibler [8,9] using the Renormalization-Group (RG) approach. To investigate critical adsorption, these authors started from a semi-infinite space with an appropriate boundary condition incorporating the order parameter and its first derivative at the surface. In fact, this condition depends on two microscopic parameters (c, h), where the coupling constant c, which measures the interaction strength between mixture and surface, is positive, but the surface field h described above must be strong enough (h → ∞) to ensure the condensation of the preferred species on the surface. This limit defines another surface universality class, called the normal transition [13,14,19]. Indeed, even in the high-temperature regime (T > Tc), the non-zero value of the surface field yields a non-trivial order parameter profile. More developments using RG related to critical adsorption can be found in [10][11][12].
Very recently, critical adsorption was also observed experimentally for critical polymer solutions.Within this context, Craig and Law [20] accomplished ellipsometric measurements of critical adsorption choosing five solutions of polystyrene (A-component) in cyclohexane (B-component), for various molecular weights of the former.Measurements agree with theoretical predictions of Fisher and de Gennes [7].Notice that the considered mixture is made of a polymer (polystyrene) and a simple liquid (cyclohexane).
Critical fluids confined between two parallel plates termed critical film [21,22], which may be a fluid near the liquid-gas critical point, a binary liquid near the consolute point, or liquid 4 He near the λ-point, generate long-range forces between the confining walls [21].This force that originates from strong fluctuations of the order parameter near the critical point, is called critical Casimir force in the literature [21].The word "Casimir" is related to the well-known Casimir effect discovered by Casimir [23], according to which the vacuum quantum fluctuations of a confined electromagnetic field generate an attractive force between two parallel uncharged conducting plates, which are separated by a finite distance L. This effect has received its final confirmation in recent experiments [24,25].Fisher and de Gennes [7] remarked an analogous effect, arising in Statistical and Condensed Matter Physics, for the systems exhibiting a critical point and restricted by boundaries.For this case, the large-scale critical fluctuations of density appear to be the analog of the vacuum quantum fluctuations.
The physical system we consider in this paper is a mixture of two incompatible long polymers A and B dissolved in a common good solvent.We assume that the ternary polymer solution is confined between two adsorbing parallel plates 1 and 2, separated by a distance L much smaller than the thermal correlation length ξ t (L ξ t ).This characteristic length that will be defined below, measures the spatial extent of correlations.The opposite case, where L ξ t , contributes to the leading critical behavior only by exponentially decreasing small corrections [26,42].We assume that the plates strongly attract one of the two polymers close to the consolute point.This means that one is in the critical adsorption conditions.As a result, the plates experience an effective force resulting from strong fluctuations of the composition in the critical region, which depends on separation L and the considered surface universality class.For simplicity, we will assume that either plates have the same preference to attract one component (symmetric plates) or they have an opposed preference (asymmetric plates).These two boundary conditions or surface universality classes will be denoted by: (↑↑) or (↑↓).Besides the chemical segregation between unlike chains, one is in the presence of excluded volume interactions leading to the swelling of chains.The question to answer is about the effect of the presence of a good solvent on the force expression.
Our findings are as follows.The first step consists in computing the induced force using the standard blob model introduced many years ago by Broseta, Leibler and Joanny [43], which is a direct consequence of renormalization theory [43][44][45][46].In this model, chains are viewed as sequences of blobs, whose size coincides with the usual screening length ξ, but unlike chains interact through a Flory effective interaction parameter defined below.We show that the induced force is attractive for symmetric plates, and repulsive for asymmetric ones.For the two cases, we show that the induced forces decay like L −4 , and compute exactly the associated universal amplitudes.These are similar to the amplitudes corresponding to the molten state [47][48][49], up to a multiplicative power factor of the monomer concentration.The blob model is a mean-field theory, which is reliable only for extremely high molecular-weight or very strong monomer concentration [43].To go beyond this approximation, and in order to get a correct induced force close to the consolute point, we use a ϕ 4field theory (the ϕ-field is the order parameter or composition fluctuation), to which the Renormalization-Group machineries [43][44][45][46] can be applied.This field theory is supplemented by suitable boundary conditions on the two confining plates incorporating two pairs of surface microscopic parameters (c 1 > 0, h 1 ) and (c 2 > 0, h 2 ).The parameters c i measure the interaction strengths between polymers and plates, and h i represents the surface chemical potential differences.These latter play the role of surface magnetic fields for magnetic materials.Critical adsorption emerges for high surface fields.For instance, the two considered surface universality classes (↑↑) and (↑↓) correspond to the limits (h 1 → +∞, h 2 → +∞) and (h 1 → +∞, h 2 → −∞), respectively.Our central result is that the total induced force is the sum of two parts.The first one is the mean-field force that decreases with the distance like L −4 , and the second one represents the force deviation originating from strong fluctuations of composition, and which decays rather as L −3 , with a known universal amplitude.This implies the existence of some cross-over distance L * ∼ aN φ 1/2 depending on the polymerization degree N of chains and the monomer fraction φ, and which separates two distance-regimes.For small separations (L < L * ), the mean-field force dominates, while for high separations (L > L * ), the fluctuation force becomes more important.
Finally, this paper extends some recent papers on the computation of the induced force for confined critical binary polymer mixtures [47][48][49].
The remaining presentation proceeds as follows.Section 2 deals with the computation of the induced force for solutions of polymer blends, within the framework of mean-field theory and RG.We draw our conclusions in section 3.
Mean-field results
In order to construct the free energy enabling us to compute the expected force, we start with recalling some useful background on the demixing transition in the presence of a good solvent.
Consider a mixture of two chemically different polymers A and B, dissolved in a common good solvent. This mixture may be polystyrene (PS)-poly(methyl methacrylate) (PMMA) in toluene, or PS-PDMS (poly(dimethylsiloxane)) in propylbenzene. The ternary mixture is assumed to be confined between two interacting parallel plates 1 and 2, which are separated by a finite distance L much smaller than the thermal correlation length ξt (L ≪ ξt). We suppose, as before, that near the critical point TK one species has the tendency to condense near the preferred plate. The quantities TK and ξt will be defined below.
To simplify, we assume that the two polymers A and B have the same polymer-ization degree N .Thus, we are concerned with a monodisperse system.We denote the overall monomer fraction by φ = φ A + φ B , where φ A and φ B are the respective monomer fractions of A and B-polymers.
In a dilute solution, where the overall monomer fraction φ is below the threshold φ* ∼ N^{1−dν} (ν ≈ 0.588 [50] in d = 3), A and B-chains behave like separate swollen coils avoiding each other completely, and in principle no phase separation is expected. In a semi-dilute solution (φ* ≪ φ ≪ 1), however, chains overlap and can be viewed as sequences of uncorrelated subunits or blobs of types A and B. Each chain contains Z(φ) ∼ N φ^{1/(dν−1)} blobs. The blob size or screening length, ξ(φ), depends only on the total monomer fraction φ, and scales as [51]: ξ(φ) ∼ a φ^{ν/(1−νd)}, where a is the monomer size.
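As a quick numerical illustration of these scaling laws, the sketch below evaluates φ*, Z(φ) and ξ(φ). The proportionality constants are set to one, which is an assumption (only scaling relations are given in the text), and the example parameters echo those used for the figures later in this paper (a = 10 Å, N = 100, φ = 0.5).

```python
# Numerical illustration of the blob-model scaling laws quoted above.
# Proportionality constants are set to 1 (an assumption: only scalings are given).
NU, D = 0.588, 3          # swelling exponent and space dimension

def overlap_fraction(N):
    return N ** (1 - D * NU)                   # phi* ~ N^(1 - d nu)

def blobs_per_chain(N, phi):
    return N * phi ** (1.0 / (D * NU - 1.0))   # Z(phi) ~ N phi^(1/(d nu - 1))

def blob_size(a, phi):
    return a * phi ** (NU / (1.0 - NU * D))    # xi(phi) ~ a phi^(nu/(1 - nu d))

# Example with a = 10 Angstrom, N = 100, phi = 0.5 (the figure parameters).
print(overlap_fraction(100), blobs_per_chain(100, 0.5), blob_size(10.0, 0.5))
```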
Using renormalization theory, the authors of [43] have shown that, for a high-molecular-weight solution, a given chain cannot distinguish between an A-chain and a B-chain. This means that the chemical mismatch is irrelevant, and manifests itself only as a correction to the leading behavior of the osmotic pressure. In fact, these corrections are important and govern the thermodynamics of the demixing transition. From the obtained expression of the osmotic pressure, the authors derived the free energy per site (blob) given by equation (1), where x = φA/φ is the composition of polymer A. In the above equality, the effective interaction parameter defined by relation (2a) accounts for the Flory effective interaction between unlike blobs, where χ is the standard Flory interaction parameter, and ∆2 is a crossover exponent, which in d = 3 is given by ∆2 ≈ 0.30.
Such a value agrees with experiment3 .This exponent characterizes the correction to the osmotic pressure [43].Expression (2a) can be understood, in a certain sense, as a renormalization of interactions due to the chemical mismatch between A and B-polymers.Typical values of the effective interaction parameter χ near demixing concentration are 10 −3 to 10 −2 for strongly incompatible pairs such as PS-PMMA of molecular weight M w ∼ 10 6 [53], and 10 −2 to 10 −1 for the more incompatible pair PS-PDMS of the same molecular weight 3 .Going back to expression (1), we note that it shows an obvious analogy with that defining the usual Flory-Huggins free energy of a mixture of two polymers A and B in the molten state [51,52].The difference is that A and B-chains have blobs of size ξ as new subunits, and the segregation parameter is no longer χ but the effective one χ defined by relations (2a) and (2b).Of course, these two parameters coincide in the limit φ → 1.
The model of free energy defined through relations ( 1), (2a) and (2b) constitutes the so-called blob model [43], which is a direct consequence of renormalization theory.
Let us recall the analytical expression for the demixing critical point location, which can be obtained by equating to zero the first and the second derivatives of free energy (1) with respect to composition x.One gets the location of the critical point [43] where φ K is the critical monomer fraction whose expression can be found in [43].We simply note that φ K is larger than the overlap monomer fraction φ * defined above.The critical point is located at the top of the coexistence curve.Below φ K (φ < φ K ), the ternary mixture is homogeneous, while above φ K (φ > φ K ), this mixture phase separates in two phases alternatively rich in A and B-polymers.Finally, we recall that the critical temperature T K , at a fixed concentration, is given by [43] with the exponent b 0.62 (d = 3) [43].The above relation tells us that the critical temperature T K should be proportional to the polymerization degree N .Now, to describe the critical phase behavior of the ternary mixture, we introduce an order parameter that is defined by where x is the composition of species A. The above definition means that the order parameter x is proportional to the shift x − x K , where x K = 1/2 is the critical composition.The order parameter x depends on the d-dimensional position vector r = (ρ, z), where ρ ∈ R d−1 is the transverse vector and z ∈ [0, L] is the perpendicular distance from plate 1 taken as the origin.Thus, the two plates (hyperplanes) 1 and 2 are located at z = 0 and L, respectively.The homogeneity property of plates implies that x depends only on the perpendicular distance z.We denote by x 1 and x 2 the respective values of the order parameter on plates 1 and 2. Symmetric plates correspond to x 1 = x 2 , and asymmetric ones correspond to x 1 = − x 2 .Since swollen A and B-chains can be regarded as sequences of new subunits or blobs, but interact chemically through the Flory effective interaction parameter χ defined by equations (2a) and (2b), the total free energy (per unit area) is given by a formula similar to that defining a binary polymer mixture [49].Then, we write where A is the common area of plates.Here, t 0 = (2/Z (φ) − χ) /2 is the reduced temperature, u 0 = 1/3Z (φ) is the coupling constant, κ (φ) = ξ 2 (φ) /9, and (c 0 i , h 0 i ) are the surface microscopic parameters relative to plates 1 and 2. Notice that the integrand in the bulk part of the above free energy can be obtained expanding the free energy (1) to the fourth order around the critical composition x K = 1/2.The gradient term is introduced to take into account the interfacial energy between A and B-rich phases.
We note that the above free energy is similar to that corresponding to a confined binary polymer mixture [49], with the simple substitutions: a → ξ, N → Z (φ).This means that chains in the solution can be regarded as sequences of Z (φ) blobs of the same size ξ.Taking advantage of the results in [49] and using the above substitutions, we find for the induced forces (per unit area) for symmetric (or attractive) plates, and for asymmetric (or repulsive) ones, with the following universal amplitudes Here, Γ (x) is the gamma function [53].
Let us comment on these results.Firstly, we note that the above expressions obtained within the framework of the blob model show that the presence of a good solvent simply induces a renormalization of the force amplitudes, through the multiplicative power factor φ (1−ν)/(3ν−1) ∼ φ 1/2 (d = 3, ν = 3/5) depending on the monomer fraction φ.
Secondly, for both symmetric and asymmetric plates, the attractive and repulsive forces decay according to the same negative fourth power law, but with different amplitudes.
Thirdly, as for confined polymer blends, the repulsive force is four times more important than the attractive one.The reason for that is explained in [49].
Finally, in the limit φ → 1, we recover the results corresponding to the molten state [49].
The blob model is a mean-field theory, and it was found [43] that this is valid only for an extremely high molecular-weight or a very high monomer concentration.To go beyond the mean-field theory, and in order to obtain correct results close to the critical point where fluctuations of composition are strong enough, we shall use the renormalization theory applied to the field theory described below.
RG results
The first step consists in rewriting the above free energy (6) by rescaling the composition fluctuations in bulk and at the surfaces and parameters of the problem, according to Here, (t 0 , u 0 ) and (c 0 i , h 0 i ) are the parameters defined above, where ξ (φ) is the screening length.
With these considerations, the total free energy rewrites The ϕ-field depends on the spatial coordinates r = (ρ, z), with ρ ∈ R d−1 and 0 z L, ϕ i being the surface fields defined on the (d−1)-dimensional plates 1 and 2. t ∼ (T − T K ) /T K represents the reduced temperature, g is the coupling constant, and (c i , h i ) are the new surface parameters.Then, fields ϕ and ϕ i , and bulk and surface parameters have the following dimensions: , where l is some length.At the critical dimension of the system d c = 4, the coupling constant g becomes marginal.
Thus, our theoretical model is a ϕ 4 -field theory described by the above Landau-Ginzburg-Wilson Hamiltonian.Recall that critical adsorption emerges in the limit h i → ±∞.
The second step consists in computing the Casimir force using this field theory.We first note that the above Hamiltonian is nothing else but that describing the critical properties of binary liquid mixtures of small molecules near the consolute point, one-component fluids near the liquid-gas critical point, or Ising-like magnetic materials near the Curie temperature.Thus, the ϕ-field (order parameter) may play the role of the difference between the compositions for simple liquid mixtures, the difference between liquid and gas densities for one-component fluids, or the local magnetization for Ising-like magnetic materials.In this sense, the ternary mixture of our interest belongs to the universality class (n = 1, d), where n is the number of components of the order parameter.Hence, the critical phase behavior for ternary polymer mixtures is of Ising type [17,18].As noted in [43] , this fact seems to be in good agreement with the recent light scattering experiments 3 , essentially based on the so-called "optical θ" method [54].For instance, in a recent experiment [55], one has studied the solutions of PS and PDMS in propylbenzene, and found that coexistence curve exponent β t is close to the Ising theoretical value.
We can thus take advantage of the work by Krech [29], which is concerned with the computation of the Casimir force in confined liquid mixtures.To determine the force expression for confined ternary polymer mixtures, we shall follow the techniques used by the author.
Let us first write the Casimir force as where the quantity Π 0 a,r represents the mean-field force calculated above, relations (7) or (8).The remaining part, δΠ a,r , accounts for the force deviation due to strong fluctuations of the composition.We recall that the induced force is defined through the expectation mean-value of perpendicular component of the stress tensor [29], which has been calculated using the so-called loop expansion [17,18].The meanfield contribution Π 0 a,r represents the zeroth order of this expansion, while δΠ a,r accounts for the contribution of higher orders.
To determine the force deviation δΠa,r, we start from the Casimir free energy per unit area, δfa,r, resulting from fluctuations of the composition. According to [29], this free energy can be written in a scaling form. In this scaling form, the factor 1/L^{d−1} simply expresses the natural dimension of the reduced Casimir energy δfa,r/kB T. Here, ξt is the thermal correlation length, where νt ≈ 0.63 is the standard Ising exponent, and R(φ) ∼ aN^{1/2} φ^{(2ν−1)/2(1−3ν)} represents the size of a chain in semi-dilute solution [51], with the swelling exponent ν ≈ 0.588 [50] that must not be confused with νt. On the other hand, the scaling function ga,r(x) is analytic for x ≪ 1 (L ≪ ξt). Then, at the critical point T = TK (ξt → ∞), ga,r(0) is finite and we write it as ga(0) = δ∆↑↑/(d − 1) or gr(0) = δ∆↑↓/(d − 1), where δ∆↑↑ and δ∆↑↓ are the Casimir amplitudes (in our notations). With these considerations, at criticality, the Casimir energy decays in a universal way. The Casimir force deviation (per unit area), δΠa,r, is just minus the first derivative of δfa,r with respect to the separation L: δΠa,r = −∂δfa,r/∂L. We then find, at three dimensions, the corresponding expressions for attractive walls and for repulsive ones. We note that, in general, the force amplitudes are universal, and they depend only on the space dimension d and the surface universality class (the choice of boundary conditions). The amplitudes δ∆↑↑ and δ∆↑↓ have been calculated through a perturbative expansion with respect to the coupling constant g. Then, at a fixed point g*, these amplitudes become a series in ε = 4 − d (4 is the critical dimension of the system) that must be resummed using Borel-Leroy techniques to get their best values at dimension d = 3 (ε = 1). All these questions have been addressed in [29], and we simply give the values of these force amplitudes in d = 3. These values are in good agreement with MC simulations [56]. With these considerations, the total Casimir forces (per unit area) are written with the mean-field amplitudes ∆0↑↑ and ∆0↑↓ defined by equations (7a) and (8a).
These results call for some comments.
Firstly, equations (17) and (18) tell us that, when they are reduced by the kB TK factor, the attractive and repulsive Casimir forces Πa,r are universal, independently of the chemical structure of polymers and plates.
Secondly, we emphasize that the force expressions (17) and (18), when compared to those corresponding to the molten state [49], indicate that the solvent induces a drastic change in the force expression. Indeed, the swelling of chains modifies the dependence of the force on the distance, through the appearance of the L^{−3} decay. This change of behavior is not surprising, since in the presence of a good solvent, fluctuations of composition close to the consolute point are strong. Thirdly, the above formulae suggest the existence of a cross-over phenomenon occurring at some characteristic distance L*, obtained by comparing the mean-field contribution (∼ L^{−4}) with the fluctuation one (∼ L^{−3}). This comparison gives the cross-over distance, which depends on the molecular weight (through N) and the monomer fraction φ. At the threshold φ ∼ φ*, the length L* becomes of the order of the gyration radius RG ∼ aN^ν of a single chain in dilute solution. Therefore, this defines, at fixed molecular weight, a cross-over line III separating two domains I and II in the (φ, L)-plane (figure 1). In the high-separation domain I (L > L*), fluctuations of composition dominate, and the effective force behaves as L^{−3}. In the low-separation domain II (L < L*), however, a mean-field result is expected, and the effective force scales rather as L^{−4}. Indeed, this can be understood as follows. When the distance between the plates is lowered, the local monomer concentration increases, resulting in strong screening of excluded-volume interactions. That is why mean-field theory works at small distances.
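The cross-over can be visualized with the schematic sketch below, which compares the two power laws as a function of separation using the scaling L* ∼ aNφ^{1/2} quoted in this paper; both contributions are normalized to be equal at L*, which is an assumption made purely for illustration (the actual universal amplitudes given in the text are not carried over here).

```python
# Schematic comparison of the mean-field (L^-4) and fluctuation (L^-3) forces.
# Both are normalized to be equal at L*, an assumption made for illustration only.
import numpy as np

a, N, phi = 10.0, 100, 0.5              # monomer size (Angstrom), N, monomer fraction
L_star = a * N * np.sqrt(phi)           # cross-over scaling L* ~ a N phi^(1/2)

L = np.logspace(1, 4, 400)              # separations from 10 to 10^4 Angstrom
mean_field = (L_star / L) ** 4          # dominates for L < L*
fluctuation = (L_star / L) ** 3         # dominates for L > L*

crossover = L[np.argmin(np.abs(mean_field - fluctuation))]
print(f"crossing near L = {crossover:.0f} A (expected L* = {L_star:.0f} A)")
```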
In figure 2, we superpose the curves representing an attractive mean-field Π 0 a force (dashed line) and attractive fluctuation force δΠ a (solid line), versus separation L. In figure 3, we report the curves describing repulsive mean-field Π 0 r force (dashed line) and repulsive fluctuation force δΠ r (solid line), versus separation L. For the two cases, the curves intersect at the cross-over distance L * , which is different for the two boundary conditions.All these curves are drawn with parameters: a = 10 Angstroms, N = 100, φ = 0.5.
Conclusions
The purpose of the present work is to determine the Casimir force within the confined ternary polymer solutions between two parallel adsorbing plates.These solutions are made of two incompatible polymers A and B dissolved in a common good solvent.In addition to the chemical segregation between unlike chains, excluded volume interactions are present.
To compute the expected force, we have restricted ourselves to two surface universality classes: symmetric and asymmetric plates.The induced force is attractive for symmetric plates, and repulsive for asymmetric ones.Calculations were done, first, using the blob model.For the two boundary conditions, we have shown that the forces decay similarly to L −4 .We found that the force amplitudes are similar to those corresponding to the molten state, up to a multiplicative power factor of the monomer concentration.
The blob model is a mean-field theory, which is valid only for very strong monomer concentrations or extremely high molecular-weights.To see this, denote by ∆ * φ = φ − φ K and ∆ * T = T − T K , respectively, the range of monomer concentrations and temperatures, for which the fluctuations of the composition are strong enough, so that the mean-field approach is no longer reliable.The size of the critical region has been determined using a Ginzburg criterion [43].We simply sketch the result that: ∆ * φ/φ K ∼ M −∆ 2 /(1+∆ 2 ) , and ∆ * T /T K ∼ (φ K /φ * ) −1/(3ν−1) , where M is the molecular-weight and ∆ 2 = ∆ 2 (3ν − 1) 0.22 is a cross-over exponent.Thus, in the limit of extremely long chains and very high concentrations, the above expressions suggest that the critical region is very narrow, and then, the phase behavior can be obtained using the mean-field approximation.A typical value of the molecular-weight may be M = 2.2 × 10 6 for the nearly pair PS-PMMA [57].
To go beyond the blob model, and in order to obtain a correct expression for the induced force, we applied the RG machinery to the φ^4-field theory described above. We have shown the existence of two distance regimes. Below some characteristic length L* ∼ aNφ^{1/2}, mean-field theory can be applied, and then the force decays as L^{−4}. Above L*, however, there is a drastic change of the force expression due to the presence of strong fluctuations of the composition. In this regime, it was found that the force decreases rather as L^{−3}.
We point out that this paper is a natural extension of the recent published ones, which were concerned with the computation of the induced force for confined critical binary polymer mixtures.The difference between these papers and the present one comes from the inclusion of a good solvent as a third component.This implies a change of the power-law decay of the expected force in comparison with polymer blends.
At the experimental level, we think that the induced force could be measured in an experiment similar to that used for the measurements of the repulsive force between two plates coated by adsorbed polymers [58,59], keeping fixed both the molecular-weight and the monomer concentration (above the threshold), and varying the separation between confining walls.
Figure 1. Cross-over curve III separating the two domains I and II in the (φ, L)-plane. In the former (high distance-regime), the fluctuation force dominates, and in the second one (small distance-regime), the mean-field force is more important. This curve is drawn with parameter N = 100.
| 6,906.2 | 2004-01-01T00:00:00.000 | [ "Physics" ] |
The [OII]/Halpha ratio of emission line galaxies in the 2dF redshift survey
We investigate the systematic variation of the [OII]3727/Halpha flux line ratio as a function of various galaxy properties, i.e., luminosity, metallicity, reddening, and excitation state, for a sample of 1124 emission-line galaxies, with a mean redshift z ~ 0.06, drawn from the Two Degree Field Galaxy Redshift Survey. The mean observed and extinction-corrected emission-line flux ratios agree well with the values derived from the $B$-band selected Nearby Field Galaxy Survey galaxy sample, but are significantly different from the values obtained from the Halpha-selected Universidad Complutense de Madrid Survey galaxy sample. This is because the different selection criteria applied in these surveys lead to a significant difference in the mean extinction and metallicity of different samples. We use the R_{23} parameter to estimate the gas-phase oxygen abundance and find that the extinction-corrected [OII]3727/Halpha ratio depends on the oxygen abundance. For 12+log(O/H)>8.4, we confirm that the emission-line ratio decreases with increasing metallicity. We have extended the relationship further to the metal-poor regime, 12+log(O/H)<8.4, and find that the correlation between the extinction-corrected [OII]3727/Halpha ratio and the metallicity reverses in comparison to the relationship for metal-rich galaxies. For metal-poor galaxies, in contrast with metal-rich ones, the variation of extinction-corrected [OII]3727/Halpha ratio is correlated with the ionization states of the interstellar gas.
INTRODUCTION
Spectral features in integrated spectra of galaxies allow us to determine important aspects of their evolutionary state. With the advent of the current generation of large telescopes, spectrophotometric studies are possible for fainter and more distant galaxies. Measuring the evolution of the star formation rate since the earliest cosmic epochs in the Universe is crucial for an accurate understanding of the formation and evolution of galaxies. Since the pioneering papers by Lilly et al. (1996) and Madau et al. (1996), several deep spectroscopic surveys have enabled detailed investigations of the star formation history of the universe (e.g., Hammer et al. 1997, Tresse et al. 2002, Hippelein et al. 2003. The interpretation of integrated spectral properties, and estimates of the cosmic star formation rate over an extended redshift range requires the use of a range of star formation indicators. Unfortunately, there are significant discrepancies between different star formation rate indicators (Hopkins et al. 2003, and reference therein). The flux of the Hα Balmer line is directly linked to the total ionizing flux, making this line the most robust and reliable tracer of star formation. Hα emission line still suffers from attenuation by dust however. The [OII]λ3727 emission line has been used widely in a number of studies of the star formation rate in redshift ranges where the Hα emission line moves into the nearinfrared (e.g., Thompson & Djorgovski 1991;Cowie et al. 1997;Hogg et al. 1998;Hippelein et al. 2003). However, published calibrations of the star formation rate in terms of the [OII]λ3727 emission vary by factors of a few (Gallagher et al. 1989;Kennicutt 1992;Guzmán et al. 1997;Rosa-Gonzàlez et al. 2002). Jansen et al. (2001) and Kewley et al. (2004) used the Nearby Field Galaxy Survey (Jansen et al. 2000, NFGS hereafter), and Aragón-Salamanca et al. (2005, submitted) the Hα selected Universidad Complutense de Madrid Survey (UCM) to investigate the variation of [OII]λ3727 as a function of galaxy properties. Kewley et al. (2004) have found no systematic difference between star formation rates using Hα and [OII]λ3727 luminosities after correcting for the effects of internal extinction and metallicity on [OII]λ3727 luminosity. Unfortunately, these are relatively small surveys and they sample rather limited ranges of interstellar gas parameters.
Since the pioneering works by Gallagher et al. (1989) and Kennicutt (1992), several spectroscopic studies of galaxies in the Local Universe have been undertaken (e.g., Tresse et al. 1999, Salzer et al. 2000, Jansen et al 2000, Carter et al. 2001, Gavazzi et al. 2004). However, the number of emission line galaxies observed in these surveys for which all the emission lines needed to identify the nature of the ionizing source and to estimate gas phase metallicity (i.e., [OII]λ3727, Hβ, [OIII]λ4959, λ5007, [NII]λ6548, Hα, [NII]λ6584, [SII]λ6717, and [SII]λ6731) are observed with high confidence do not exceed a few hundred galaxies at best. The advent of large spectroscopic surveys, such as the Sloan Digital Sky Survey (Stoughton et al. 2002, Abazaijan et al 2003 and Two Degree Field Galaxy Redshift Survey (Colless et al. 2001, 2dFGRS hereafter) provides larger emission line galaxy samples with all the needed emission lines.
The 2dFGRS was carried out with the primary aim of studying the three-dimensional clustering properties of galaxies and determining the luminosity function. However for a subsample of the 2dFGRS galaxies, the quality of the spectra is good enough to determine accurate emission line properties. We select normal star forming emission line galaxies with strong emission lines, high-quality spectra, and high signal-to-noise for a detailed study of the sensitivity of [OII]λ3727/Hα flux ratio to galaxy and interstellar medium properties. We aim to establish the properties of a local sample that can be used as a comparison for more distant galaxy samples. The 2dFGRS spectra are suitable for carrying out such an investigation, and have the following properties: (i) the large spectral coverage means that the galaxy spectra contain most of the prominent optical emission lines, including [OII]λ3727 and Hα, and emission lines needed to identify the ionizing source, (ii) we can correct Hα and Hβ for the absorption features in the spectrum of the underlying stellar population.
The paper is organized as follows. In Sect.2, we describe how we obtain the emission line galaxy sample used in this paper from the original 2dFGRS data set. In Section 3, we describe surface photometry measurements of the 2dFGRS galaxies, and assess the effects of aperture on the emission line properties. Section 4 discusses the dependence of emission line [OII]λ3727/Hα flux ratio on interstellar gas properties. In Sect. 5, we present the results of this analysis and summarize our conclusions.
GALAXY SAMPLE
Our sample is drawn from the 2dFGRS data set, which consists of optical (3600-8000Å) spectroscopy of more than 250 000 galaxies brighter than bj = 19.7, with a full width at half-maximum (FWHM hereafter) spectral resolution of 9 A. The survey covers two contiguous declination strips, plus 99 randomly located fields. One of the strips is located close to the south Galactic pole, while the other strip is located on the celestial equator in the northern Galactic hemisphere. Full details of the survey strategy are given in Colless et al. (2001). We first exclude all galaxies observed before 31 August 1999, since these were observed whilst there was a fault with the atmospheric dispersion compensator within the 2dF instrument (Lewis et al. 2002a). For these galaxies, the fitting procedure to determine the line properties gives results of poor quality. This cut leaves us with 200 160 galaxies. We selected galaxies with high quality redshift determinations, reducing the sample to 185 731 galaxies. We used a fully automatic procedure to measure the emission lines properties. A detailed discussion of the procedure and the determination of the fitting quality is presented by Lewis et al. (2002b). Here we summarize the basic points of this procedure. The line fitting consists of a simultaneous fitting of a series of absorption lines and a series of emission lines. Some of these are very close in wavelength (e.g., Hβ in absorption and emission). This technique works very well in fitting broad absorption lines and narrow emission lines (see figure 2 in Lewis et al, 2002b). The fitting allows a common wavelength shift for all the lines so relative shifting between the lines is not allowed. It does not work as well for broad emission lines where the absorption component is not well constrained or where several emission lines can combine to give a nonunique solution (e.g., [NII]λ6548 and Hα). These results are however, good enough to identify broad emission line cases. Note that only high signal-to-noise (S/N) spectra have been fitted. To get accurate estimates of the gas phase properties, and to avoid any possible bias, we select galaxies having a good quality fit for all of the emission lines needed to classify the galaxies, i.e., the nature of the ionizing sources, and to measure the gas-phase oxygen abundance. This requirement leaves us with a sample of 10 284 galaxies. Equivalent widths were corrected from the observed to the rest frame. We select only galaxies whose spectra have a relatively high S/N ratio, i.e. S/N ≥ 10 measured on the continuum between 4000Å and 7500Å. This leaves us with a sample of 7 353 galaxies. In our subsequent analyses, we use only galaxies which show Balmer lines in emission with equivalent widths larger than 10Å after correcting for the underlying stellar absorption. This corresponds to the spectral resolution of the 2dF instrument, and galaxies with weaker emission are subject to large systematic uncertainties from instrumental effects, particularly affecting estimates of the internal dust extinction from the Balmer decrement. This equivalent width threshold also minimizes the effect of the underlying stellar absorption. The requirement of having the Hβ equivalent width larger than the spectral resolution of the 2dF instrument drastically reduces the number of emission line galaxies in the final sample, and leaves us with 1 327 galaxies.
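The sequence of cuts just described can be summarized, purely for illustration, by the hedged sketch below; the catalogue column names are invented for this example and do not correspond to an actual 2dFGRS data-release schema.

```python
# Hedged sketch of the cut sequence described above.  The column names are
# invented for illustration only and do not correspond to a real 2dFGRS
# catalogue schema; the thresholds follow the numbers quoted in the text.
import pandas as pd

def select_emission_line_sample(cat: pd.DataFrame) -> pd.DataFrame:
    cut = cat[cat["observed_after_adc_fix"]]      # drop pre-September-1999 observations
    cut = cut[cut["redshift_reliable"]]           # high-quality redshift determinations
    cut = cut[cut["all_lines_fit_ok"]]            # good fits for every required line
    cut = cut[cut["sn_continuum"] >= 10]          # S/N on the 4000-7500 A continuum
    cut = cut[cut["ew_hbeta_corrected"] > 10.0]   # EW(Hbeta) above the 2dF resolution
    return cut
```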
As we are interested in normal emission line galaxies, we have excluded galaxies which are dominated by Active Galactic Nuclei (AGN hereafter). We first exclude galaxies which have a Hβ emission line FWHM larger than 10Å (corresponding to a velocity width of ≳ 670 km s⁻¹), since these are likely to be Seyfert 1 galaxies. We then use the classical diagnostic ratios of two pairs of relatively strong emission lines (Baldwin et al. 1981, Veilleux & Osterbrock 1987) to distinguish between galaxies dominated by emission from star-forming regions and galaxies dominated by emission from non-thermal ionizing sources. We classify galaxies according to their position in the [OIII]λ5007/Hβ vs.
[SII]λ6717, λ6731/Hα diagrams. The demarcation between star-forming galaxies and AGN in both diagrams was taken from Kewley et al. (2001). Fig. 1 shows the distribution of the sample galaxies in the diagnostic diagrams. We used the conservative requirement that a galaxy must be classified as a star-forming galaxy in both diagnostic diagrams in order to be retained in our sample (see Lamareille et al. 2004 for more detail on the classification of emission line objects). The diagrams show that our sample contains galaxies with a large range of excitation levels, suggesting that our sample contains both metal-rich and metal-poor galaxies. This sample is thus suitable for studying the properties of dust obscuration and emission lines over a large range of metallicities and excitation parameters.
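For readers who want to reproduce this kind of screening, the sketch below applies the commonly quoted functional forms of the Kewley et al. (2001) maximum-starburst demarcations in the [NII]λ6584/Hα and [SII]λ6717,6731/Hα diagrams. Whether these exact coefficients match the ones adopted here is an assumption, and measurement uncertainties on the line ratios are ignored.

```python
# Sketch of the two-diagram star-forming/AGN screening, using commonly quoted
# forms of the Kewley et al. (2001) maximum-starburst curves.  The exact
# coefficients used by the authors are an assumption here.
def is_star_forming(log_nii_ha, log_sii_ha, log_oiii_hb):
    """True only if the galaxy lies below the demarcation in BOTH diagrams."""
    below_nii = (log_nii_ha < 0.47) and \
        (log_oiii_hb < 0.61 / (log_nii_ha - 0.47) + 1.19)
    below_sii = (log_sii_ha < 0.32) and \
        (log_oiii_hb < 0.72 / (log_sii_ha - 0.32) + 1.30)
    return below_nii and below_sii

# Example: a typical star-forming galaxy with moderate excitation.
print(is_star_forming(-0.5, -0.6, 0.2))   # -> True
```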
196 galaxies show Balmer decrements smaller than the intrinsic Hα/Hβ ratio of 2.85, which corresponds to case B recombination with a temperature of T = 10^4 K and a density of ne ∼ 10^2 − 10^4 cm^−3 (Osterbrock 1989). This is probably due to an intrinsically low extinction, coupled with uncertainties in the correction of the underlying stellar absorption, and/or errors in the data reduction. As this implies a physically impossible negative extinction, those galaxies were removed from the sample. Thus, we end up with a final sample of 1 124 normal emission line galaxies.
The distributions of global properties, i.e., galaxy colours, bj-band absolute magnitudes, redshifts, and the η parameter, of the selected sample are shown in Fig. 2. The corresponding numerical data for the emission line galaxy properties presented in this paper are available through the CDS, or directly from the authors. The parameter η is a linear combination of the first two projections derived from the Principal Component Analysis of the 2dFGRS spectra. This parameter is found to be a measure of galaxy spectral type, i.e., a measure of the average emission/absorption line strength of a galaxy (see Madgwick et al. 2002 for a detailed discussion). As one may expect, the final emission line galaxy sample contains galaxies with bluer colours and later spectral types than the bulk of galaxies in the parent 2dFGRS sample. As a result of the selection procedure, the redshifts in the final emission line galaxy sample do not exceed z ∼ 0.13, with a median around z ∼ 0.06. The observed gaps in the redshift distribution occur when one or more of the emission lines lie close to a night-sky emission line, reducing the signal-to-noise ratio around that line, and hence the quality of the line fitting, which causes these galaxies to be excluded from the final emission line galaxy sample.
The completeness of the final galaxy sample is difficult to quantify, and it is possible that the selection procedure could disguise the existence of intrinsic correlations between galaxy properties that we aim to investigate. Therefore it is important to assess to what extent the final sample of emission line galaxies covers similar regions in the parameter space (e.g., luminosity, colour, spectral type, surface photometry properties) as emission line galaxies in the original 2dFGRS sample. To ensure that our final sample of emission line galaxies is representative of the parent 2dFGRS emission line galaxies, we compare the distribution of galaxy properties for both samples.
The parent 2dFGRS emission line galaxy sample was selected as follows. It contains objects in the final 2dFGRS data release having reliable redshifts, for which we have a reliable match between the APM target coordinates and the SuperCOSMOS Sky Survey image catalogue. There is also the additional criterion that heliocentric radial velocities satisfy cz > 1000 km/s. This constraint was added to guard against including any incorrect velocities caused by having a Galactic star superimposed on a galaxy image, or an outright failure of the redshift estimation. It also removes very nearby galaxies for which the velocity is a poor indicator of distance, and hence guards against getting incorrect absolute magnitudes: only a very small number of dwarf galaxies with M(bj) > −15 mag are rejected (a small number because the volume is small, and dwarfs because of the apparent magnitude limits of the survey). We have estimated the surface photometry parameters for 2dFGRS galaxies using blue images obtained from the SuperCOSMOS Sky Survey (see Sect. 3.1 for a detailed discussion of the surface photometry measurements). The effective radius is estimated as the semi-major axis of the ellipse that contains half the light of the galaxy, while the effective surface brightness is the surface brightness at the half-light isophote.
Fig. 3 shows the variation of intrinsic surface brightness and physical effective radius as a function of absolute bj-band magnitude (see Sect. 3.1 for details of the calculation of these quantities). Our 1 124 galaxy sample is shown as dots, with the bars showing the medians and the standard deviations in 1.5 magnitude wide bins. The parent 2dFGRS emission line galaxy sample is shown as a greyscale plot on a logarithmic scale. For our sample galaxies, for which Hα equivalent widths are larger than ∼ 25Å, strong correlations are apparent between the surface photometry properties and absolute magnitude, i.e., luminous/faint galaxies tend to have on average large/small physical sizes and high/low central surface brightness. The first parent 2dFGRS emission line galaxy sample, shown in the upper panels, was constructed from the 2dFGRS sample by selecting galaxies with η > 0, i.e., Hα line in emission, and contains 71 207 galaxies. The comparison between the parent 2dFGRS emission line galaxy sample and the final 1 124 emission line galaxy sample shows that (i) the parent emission line galaxy sample includes luminous galaxies, i.e., brighter than M(bj) ∼ −21, that are excluded from the final 1 124 emission line galaxy sample, and (ii) for a given galaxy magnitude, galaxies with low surface brightness and larger effective radii tend to be excluded from our final 1 124 emission line galaxy sample. This is because galaxies in the final sample were selected on the basis of their Hβ equivalent width being larger than 10Å. To take this selection criterion into account, we have constructed a second parent 2dFGRS emission line galaxy sample by selecting galaxies from the 2dFGRS sample requiring both η > 0 and Hβ equivalent width larger than 10Å. This second sample contains 12 660 galaxies. The bottom panels of Fig. 3 show a comparison between our 1 124 emission line galaxy sample and the second 2dFGRS parent emission line galaxy sample.
It shows that emission line galaxies in the final emission line galaxy sample cover roughly similar ranges of galaxy parameters as the second parent 2dFGRS emission line galaxy sample, and that galaxies are distributed similarly in both samples. The cut on Hβ equivalent width excludes bright/physically big galaxies, since galaxies with low emission line equivalent widths tend to dominate the bright end of the galaxy luminosity function (e.g., Salzer et al. 1989; Kong et al. 2002). Despite this cut, the final 1 124 emission line sample covers a large range of galaxy luminosities, i.e., 7 magnitudes, similar to the magnitude range covered by the NFGS sample (Jansen et al. 2000) or the 15R-North galaxy redshift survey sample (Carter et al. 2001). This is attributed to the large scatter that affects the galaxy luminosity versus emission line equivalent width relation (e.g., Jansen et al. 2000). The distributions of the final 1 124 emission line galaxy sample and the second parent 2dFGRS emission line galaxy sample are similar in both the (bj-R) vs. µ(bj) and (bj-R) vs. Re diagrams.
Fig. 4 shows the variation of (bj-R) colour, Hα equivalent width, and extinction-uncorrected [OII]λ3727/Hα flux ratio (see below for a detailed discussion of the procedure we use to estimate this ratio) as a function of bj-band magnitude for the final 1 124 galaxy sample and the parent 2dFGRS emission line galaxy sample. Galaxies in the parent 2dFGRS sample were selected by having η > 0 and Hβ equivalent width larger than 10Å. Again, the final 1 124 galaxy sample is distributed similarly to the 2dFGRS parent sample of emission line galaxies with Hβ equivalent width larger than 10Å. The parent sample of emission line galaxies with Hβ equivalent width larger than 10Å contains galaxies with low Hα equivalent width and high observed [OII]λ3727/Hα flux ratio that are not present in the final 1 124 emission line galaxy sample. However, these galaxies represent less than 1% of the parent sample. The similar distributions of both emission line galaxy samples suggest that our final 1 124 galaxy sample is representative of the parent sample of 2dFGRS emission line galaxies with Hβ equivalent width larger than 10Å in terms of its luminosity, colour, and surface photometry properties. Also, the 2dFGRS spectroscopic sample is representative of the complete 2dFGRS magnitude-limited photometric sample down to the surface brightness limit of the 2dFGRS, i.e., ∼ 24.5 bj mag arcsec^−2 (Cross et al. 2001). Hence, the final 1 124 emission line galaxy sample is fairly representative of the vigorously star forming, i.e., Hβ equivalent width larger than 10Å, local emission line galaxies, within the 2dFGRS limits. The correlations we aim to investigate are therefore not expected to be severely biased by the selection procedure of the final sample of 1 124 emission line galaxies. However, we cannot rule out the possibility that we are missing faint galaxies with strong emission lines but low surface brightness, i.e., close to or below the 2dFGRS surface brightness limit.
APERTURE EFFECTS
The fibres in the 2dF instrument cover a 2.1 arcsecond diameter region of each galaxy, and this small aperture coverage of galaxies may bias the distribution of galaxy properties and the estimate of the emission line properties (e.g., Kochanek et al. 2001). The aperture in fibre-fed spectroscopy is usually centred on the inner part of the galaxies so that the nuclear light is collected. For 2dFGRS, the internal precision with which the fibres are aligned with the galaxy centre is 0.16 arcsec on average, with no fibres outside 0.3 arcsec (Colless et al. 2001). The absolute accuracy of the input astrometry is ∼ 0.5 arcsec (Maddox et al. 1990a). The fraction of the light from the outer parts of a galaxy will depend on its redshift, intrinsic size, and surface brightness profile, as well as the size of the fibre and the seeing during the observation. The bias introduced by such an observational procedure also depends on the morphological type of the observed galaxies. The larger the bulge-to-disk ratio, the more serious the potential bias may be. Even though star formation tends to be centrally concentrated in bulge-dominated galaxies, luminous star forming regions tend to be located in the outer regions of the disk. The major caveat might come from irregular galaxies, where star forming regions can be found anywhere. Aperture effects are therefore a concern, and we must assess how close the spectra of our sample galaxies are to fully integrated galaxy spectra. We first consider the surface brightness profiles of the galaxies, to estimate the fraction of a galaxy's light that is sampled by the fibre. We then search for any correlation between emission line properties and the fraction of light sampled.
Surface photometry of 2dFGRS galaxies
Surface photometry was carried out using blue image data from the United Kingdom Schmidt Telescope obtained from the SuperCOSMOS Sky Survey (Hambly, Irwin & MacGillivray 2001). Data were downloaded from the public survey server in Edinburgh for a 2.0 arcmin square region around each galaxy.
The SExtractor program (Bertin 1998; Bertin & Arnouts 1996) was run over each data file, and the SExtractor object corresponding to the 2dF target was identified on the basis of a close match to the celestial coordinates of the 2dFGRS target. For each 2dF target, SExtractor provided the object centroid coordinates, an overall image ellipticity and the orientation of the object. This analysis used only those pixels having a surface brightness brighter than 25.0 bj mag arcsec^−2. A series of concentric elliptical annuli were then defined on a linear scale for each 2dFGRS target, centred on the image centroid of the target and with the overall image ellipticity and orientation. The mean SuperCOSMOS intensity was then measured within each annulus.
The results from these annuli provided a surface brightness profile in the form of the mean intensity as a function of the semi-major axis of the annuli.
Exponential surface brightness profiles were fitted to the measured profiles, weighting each data point according to an estimate of the error in the intensity. These fits were performed between intensities corresponding to 22.3 and 27.0 bj mag arcsec^−2. The bright limit was imposed to avoid problems associated with photographic saturation in the UKST survey data (e.g., Maddox et al. 1990a,b; Shao et al. 2003). Data points within 1.5 arcsec of the image centre were excluded to avoid any problems caused by the presence of a nucleus or by seeing. The fitted profile is characterised by the central surface brightness and the exponential scale length along the major axis. As the method avoids the saturated higher surface brightness regions of galaxy images, in the case of spiral galaxies it is mostly sensitive to their discs. The central surface brightness is therefore an extrapolation of the fitted profile to the image centre. The SuperCOSMOS intensities were converted to magnitude surface brightnesses using calibrations for each UKST plate derived from the public SuperCOSMOS Sky Survey total magnitudes. The photometric zero point of the SuperCOSMOS data for each UKST plate was selected to minimise the differences between the total magnitude under the fitted exponential profile and the SuperCOSMOS Sky Survey total bj magnitudes for all 2dFGRS targets within that UKST field. No attempt was made to account for photometric variations across individual UKST plates: the SuperCOSMOS Sky Survey pixel data have already been corrected for vignetting effects, and no strong evidence of residual effects was found during a comparison of the fitted profile data with total magnitudes from the APM Catalogue (Maddox et al. 1990a). The magnitude surface brightnesses of the fitted exponential profiles have therefore been put on to the SuperCOSMOS Sky Survey scale. The root-mean-square difference between the total magnitude under the fitted exponential profiles and the total magnitudes from the SuperCOSMOS Sky Survey Catalogue is 0.15 mag, which is likely caused by factors including the neglect of the bulges of spiral galaxies, the r^1/4 profiles of ellipticals and peculiar galaxy morphologies. This figure is slightly larger for lower redshifts and for lower surface brightnesses, but does not vary with colour. The data of Hambly, Irwin & MacGillivray (2001) indicate that the absolute calibration of the SuperCOSMOS Sky Survey Catalogue blue magnitudes is accurate at the 0.1-0.2 mag level.
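The profile fit just described can be sketched as a weighted linear fit in magnitude space, since an exponential disc is a straight line in µ versus radius. This is only an illustration of the procedure (the actual fits were performed on intensities with intensity-based weights); the function and limit names are ours.

```python
import numpy as np

def fit_exponential_profile(r_arcsec, mu, mu_err,
                            bright_limit=22.3, faint_limit=27.0, r_min=1.5):
    """Weighted linear fit of an exponential disc, mu(r) = mu0 + 1.0857 * r / h,
    to a surface brightness profile given in mag/arcsec^2.

    Only points fainter than the saturation limit, brighter than the faint
    limit, and outside the innermost r_min arcsec are used, following the
    selection described in the text."""
    r = np.asarray(r_arcsec, dtype=float)
    mu = np.asarray(mu, dtype=float)
    mu_err = np.asarray(mu_err, dtype=float)

    keep = (mu > bright_limit) & (mu < faint_limit) & (r > r_min)
    # polyfit weights scale as 1/sigma (applied to the residuals)
    slope, mu0 = np.polyfit(r[keep], mu[keep], 1, w=1.0 / mu_err[keep])

    h = 1.0857 / slope          # exponential scale length (arcsec)
    return mu0, h               # central surface brightness, scale length
```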
The surface photometry gave the inferred central surface brightness µ•,obs and the angular scale length. To correct for cosmological surface brightness dimming, the intrinsic central surface brightness was computed as µ•,cor = µ•,obs − 10 log10(1 + z) − k(z) − A_Gal, where z is the redshift, k(z) is the k-correction and A_Gal is the interstellar Galactic foreground extinction. The k-correction was computed for the observed 2dF redshift using the Poggianti (1997) results for the bj photographic band, determining the galaxy type from the Madgwick et al. (2002) η parameter. The physical scale length was obtained from the angular scale length using the angular diameter distance computed from the redshift. The mean value of the bj k-correction for the emission-line galaxies was 0.15 mag. No corrections were made for inclination and internal reddening.
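A minimal sketch of these corrections is given below. The cosmological parameters are an assumption (none are quoted above), astropy is used only for the angular diameter distance, and the 10 log10(1+z) term is the standard (1+z)^4 dimming expressed in magnitudes.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology; not specified in the text

def intrinsic_surface_brightness(mu_obs, z, k_corr, a_gal):
    """Correct the observed central surface brightness (mag/arcsec^2) for
    (1+z)^4 cosmological dimming, the k-correction and Galactic extinction."""
    return mu_obs - 10.0 * np.log10(1.0 + z) - k_corr - a_gal

def physical_scale_length(h_arcsec, z):
    """Convert an angular scale length (arcsec) to a physical one (kpc)."""
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).value / 60.0
    return h_arcsec * kpc_per_arcsec

# Example: a galaxy at the sample median redshift with a 3 arcsec scale length
print(intrinsic_surface_brightness(22.8, 0.06, k_corr=0.15, a_gal=0.05))
print(physical_scale_length(3.0, 0.06))
```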
Aperture coverage
We use the surface photometry discussed in Sect. 3.1 to estimate the fibre covering fraction for the emission line galaxies by measuring the ratio of the light within the fibre aperture, centred on the nucleus, to the total luminosity. Fig. 5 shows the distribution of the covering fraction for the galaxy sample. It shows that the average spectrum in our sample contains 11% of the total flux of the galaxy, with a standard deviation of 6%; the median of the covering fraction distribution is 10%. The average fraction of galaxy light collected by the fibres depends on redshift: the 14% aperture covering fraction for z > 0.1 galaxies is a factor of two larger than for z < 0.05 galaxies. Intrinsically brighter galaxies tend to be physically larger than fainter galaxies, but in a magnitude-limited survey such as the 2dFGRS they also tend to be found out to larger distances, where fainter galaxies are lost from the sample, so the fibre aperture projects onto a larger physical region. Thus for a magnitude-limited sample, the fraction of galaxy light seen by the fibres is not a strong function of absolute magnitude (see Fig. 9 of Tremonti et al. 2004 for a similar conclusion using a sample of emission line galaxies drawn from the SDSS).
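For a face-on, bulge-free exponential disc the covering fraction has a closed form, which is enough to reproduce the order of magnitude quoted above; the sketch below uses this simplification and illustrative parameter values.

```python
import numpy as np

def covering_fraction(fibre_radius_arcsec, scale_length_arcsec):
    """Fraction of the total light of a face-on exponential disc falling inside
    a circular fibre aperture: f(<r) = 1 - (1 + r/h) * exp(-r/h)."""
    x = fibre_radius_arcsec / scale_length_arcsec
    return 1.0 - (1.0 + x) * np.exp(-x)

# 2dF fibres have a 2.1 arcsec diameter, i.e. a 1.05 arcsec radius
print(covering_fraction(1.05, scale_length_arcsec=3.0))  # ~5% for this assumed size
```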
To make sure that the aperture size is not biasing the properties of our galaxy sample, we examine the galaxy properties as a function of the fraction of galaxy light collected through the fibres. We find that the equivalent widths do not show any sizeable dependence on the fibre covering fraction, independently of redshift. The emission line equivalent widths and equivalent width ratios show no trend with redshift. The distributions of emission line equivalent widths for galaxies with redshifts z ≥ 0.05, a redshift limit recommended by Zaritsky et al. (1995) to minimize the effects of aperture bias, do not show any trend with the observed fraction of the galaxy light. Consequently, there is no evidence that the properties of our normal emission line galaxy sample are systematically biased. We conclude that the galaxy sample may be used with confidence to study the properties of emission line galaxies.
THE [OII]λ3727/Hα FLUX RATIO
In this section, we investigate the sensitivity of the [OII]λ3727/Hα emission line flux ratio to galaxy and interstellar emitting gas properties.
Flux calibration and reddening
The relative flux calibration over the whole spectral coverage of the 2dF spectrograph is uncertain; thus the flux ratio of two widely separated lines, such as [OII]λ3727 and Hα, may not be accurately estimated and may also be subject to systematic errors. Fortunately, using equivalent widths and broad-band photometry, one can accurately estimate the flux ratio. Writing each emission line equivalent width as the ratio between the emission line flux and the adjacent continuum flux in the observed spectrum, the extinction-corrected [OII]λ3727/Hα emission line flux ratio is given by:
I([OII])/I(Hα) = EW([OII])/EW(Hα) × Fc,[OII]/Fc,Hα × exp{τV [κ([OII]) − κ(Hα)]/κV},
with τV = κV ln[(F(Hα)/F(Hβ)) / (I(Hα)/I(Hβ))] / [κ(Hβ) − κ(Hα)],
where I(Hα)/I(Hβ) is the intrinsic Balmer line flux ratio, F(Hα)/F(Hβ) is the observed Balmer line flux ratio, τV is the effective V-band optical depth, and κλ is the optical interstellar extinction curve. We adopt the Milky Way interstellar extinction law of Cardelli, Clayton, & Mathis (1989), with RV = 3.1. We make the stellar absorption correction to the Hα/Hβ flux ratio on a galaxy-by-galaxy basis by fitting Gaussian profiles to both an absorption and an emission component for Hβ. We also correct the Hα/Hβ flux ratio for Galactic extinction using values taken from the Schlegel, Finkbeiner, & Davis (1998) extinction maps. We assume an intrinsic ratio of I(Hα)/I(Hβ) = 2.85, corresponding to case B recombination with a temperature of T = 10^4 K and a density of ne ∼ 10^2 − 10^4 cm^−3 (Osterbrock 1989). The different extinction laws available in the literature show similar behaviour in the optical, making the results of our subsequent analysis independent of the chosen extinction law.
Fig. 7 shows the distribution of inferred colour excess for our galaxy sample. The mean E(B − V) for the galaxies in our sample, after correcting for Galactic extinction, is 0.34 ± 0.01, with a median of 0.33 ± 0.01 and a standard deviation of 0.2 magnitude. The mean value found for the galaxy sample is consistent with the widely used average colour excess, E(B − V) ∼ 0.3, for Hα measurements of star forming galaxies (e.g., Nakamura et al.). The colour excess is found to correlate with the bj-band absolute magnitude (Fig. 8): the two-sided probability of obtaining the measured rank correlation coefficient by chance is almost zero. This indicates that the colour excess and the absolute galaxy magnitude are correlated, i.e., bright galaxies tend to be more affected by internal extinction than faint galaxies, and that there is a large scatter about this trend. The absence of galaxies in the lower right corner of the plot, i.e., the lack of bright galaxies with low colour excess, is unlikely to be due to a selection effect, as the selection of sample galaxies was based uniquely on the strength of emission lines compared to the continuum, not on galaxy absolute magnitude and/or emission line ratios. Samples of emission line galaxies selected with different Hβ equivalent width cuts between 10Å and 20Å do not show a larger zone of exclusion in the lower right corner of Fig. 8. If the dust were smoothly distributed throughout the galaxies, light from the general stellar population would be obscured, and we would expect to see a lower central surface brightness in galaxies with a larger colour excess. We find no significant correlation between the colour excess and the galaxy central surface brightness or the physical effective radius. This suggests that the obscuring dust in galaxies is not distributed in the same way as the stellar light: it is concentrated close to the sources of line emission (e.g., Stasińska & Sodré 2001), and/or it has a different scale-height from the stars. Fig. 9 shows the distribution of the observed and the extinction-corrected [OII]λ3727/Hα emission line flux ratio. The mean value of the extinction-corrected ratio is 1.26 ± 0.02, compared to 0.62 ± 0.02 for the observed emission line ratio.
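The reddening correction described above can be sketched as follows. The κ values are approximate A(λ)/E(B−V) coefficients for a Cardelli-type R_V = 3.1 curve and should be treated as assumptions, as should the function names; the clipping of negative colour excesses mirrors the removal of unphysical Balmer decrements discussed earlier.

```python
import numpy as np

# Approximate extinction-curve values k(lambda) = A(lambda)/E(B-V) for a
# Cardelli et al. (1989), R_V = 3.1 law; treat these numbers as assumptions.
K = {"OII3727": 4.77, "Hbeta": 3.61, "Halpha": 2.53}
INTRINSIC_HA_HB = 2.85   # case B value used in the text

def colour_excess(f_ha, f_hb):
    """E(B-V) from the observed Balmer decrement."""
    ebv = 2.5 / (K["Hbeta"] - K["Halpha"]) * np.log10((f_ha / f_hb) / INTRINSIC_HA_HB)
    return max(ebv, 0.0)   # negative values are unphysical and clipped

def corrected_oii_ha(ew_oii, ew_ha, cont_oii, cont_ha, f_ha, f_hb):
    """Observed [OII]/Halpha ratio from equivalent widths and continuum fluxes,
    then corrected for internal extinction via the Balmer decrement."""
    observed = (ew_oii / ew_ha) * (cont_oii / cont_ha)
    ebv = colour_excess(f_ha, f_hb)
    return observed * 10.0 ** (0.4 * ebv * (K["OII3727"] - K["Halpha"]))
```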
Both observed and extinction-corrected mean ratios for our emission line galaxy sample are comparable to the corresponding mean ratios for NFGS galaxies (Kewley et al. 2004); this is reasonable since both galaxy samples select roughly similar emission line galaxies. However, the observed ratio for our sample is different from the values seen in the UCM galaxy sample (Aragón-Salamanca et al. 2004) and in radio-detected galaxies in the First Data Release of the SDSS (Hopkins et al. 2003). On the other hand, the mean value of the extinction-corrected ratio is comparable to what is derived for the UCM samples. Note that the Hopkins et al. (2003) sample has the lowest observed mean [OII]λ3727/Hα ratio; this is understandable since radio-selected samples tend to be less biased against galaxies with a larger dust content than optically- or Hα-selected galaxy samples. This confirms the Jansen et al. (2001) finding that an important factor leading to different emission line ratios in different galaxy samples is the sample-dependent mean dust extinction. Thus using [OII]λ3727 as a star formation rate indicator requires a calibration performed in a reddening-independent way (see also Kewley et al. 2004). Fig. 10 shows the relationship between internal dust reddening, in terms of colour excess, E(B-V), and the logarithm of the observed [OII]λ3727/Hα ratio. Large circles and bars show the means and the standard deviations of the observed [OII]λ3727/Hα ratio in 0.2 magnitude wide bins. The trend indicated by the large solid circles does not change if the medians are used instead of the means. The Spearman correlation coefficient is −0.35, with the two-sided probability of obtaining this value by chance almost zero, i.e., ∼ 8 × 10^−32. This indicates a statistically significant correlation between the colour excess, E(B-V), and the emission line [OII]λ3727/Hα flux ratio, consistent with what was found for NFGS galaxies (Jansen et al. 2001; Kewley et al. 2004). We found no convincing relationship between galaxy luminosity and the extinction-corrected [OII]λ3727/Hα ratio for our galaxy sample, similar to what is observed for UCM galaxies (Aragón-Salamanca et al. 2005; see also Hopkins et al. 2003). However, Jansen et al. (2001) found that after correcting for the internal extinction, a weak correlation still exists between the [OII]λ3727/Hα ratio and the galaxy luminosity. They interpret this correlation as an indication of the sensitivity of the emission line ratio to gas-phase abundance. If a common extinction law is valid for all the galaxies in the sample, there should be a simple relationship between the observed [OII]λ3727/Hα flux ratio and the colour excess. The solid lines in Fig. 10 show the relationships expected using the extinction law of Cardelli et al. (1989) for different values of the intrinsic [OII]λ3727/Hα flux ratio. The dashed lines show the expected relationships if the extinction law of Seaton (1979) is used. The fact that the predicted relationship using the Seaton (1979) extinction law is not significantly steeper than that expected using the extinction law of Cardelli et al. (1989) suggests that the presence or absence of a correlation between absolute magnitude and the extinction-corrected [OII]λ3727/Hα ratio is not tied strongly to the adopted extinction law. It is possible that the large scatter of the 2dFGRS data may mask a correlation.
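The rank statistics quoted throughout (a Spearman coefficient together with a two-sided probability of obtaining it by chance) can be reproduced with scipy; the sketch below uses synthetic data rather than the actual catalogue.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
ebv = rng.uniform(0.0, 1.0, 1124)                          # synthetic colour excesses
log_ratio = -0.3 * ebv + rng.normal(0.0, 0.2, ebv.size)    # synthetic log([OII]/Halpha)

rho, p_two_sided = spearmanr(ebv, log_ratio)
print(f"Spearman rho = {rho:.2f}, two-sided p = {p_two_sided:.1e}")
```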
[OII]λ3727/Hα ratio and metal abundance
To what extent does the systematic variation of galaxy chemical abundance regulate the variation of the [OII]λ3727/Hα ratio? Because of the sensitivity of [OII]λ3727 to metallicity, one may expect the [OII]λ3727/Hα ratio to be related to the metal content of the star-forming regions. Unfortunately, 2dFGRS spectra do not have the required S/N to accurately measure the emission lines needed to estimate electron-temperature-based abundances. Without a reliable electron temperature diagnostic, we have estimated the gas-phase oxygen abundance using the so-called strong emission line method first proposed by Pagel et al. (1979), and extensively used in the literature (e.g., Dopita & Evans 1986; Zaritsky et al. 1994; Contini et al. 2002; Melbourne & Salzer 2002; Pettini et al. 2001). This approach is based on the idea that strong lines, i.e., [OII]λ3727, [OIII]λ4959, λ5007, and Hβ, contain enough information to get an accurate estimate of the oxygen abundance (McGaugh 1991). This is done through the so-called R23 parameter, introduced by Pagel et al. (1979) and defined as R23 = ([OIII]λ4959, λ5007 + [OII]λ3727)/Hβ. The R23 parameter is usually estimated from emission line flux ratios.
It has recently been shown that the use of equivalent widths instead of fluxes to derive R23 gives similar results. Due to the limited quality of the relative flux calibration over the whole spectral range covered by the 2dF spectra, we prefer to use the equivalent widths to estimate R23 rather than emission line fluxes. Fig. 11 shows the relationship between the intrinsic [OII]λ3727/Hα flux ratio and the abundance-sensitive R23 parameter. Large filled circles and bars show the means and the standard deviations of the logarithmic reddening-corrected [OII]λ3727/Hα ratio distributions in 0.15 dex wide bins. The solid line is the linear fit to the NFGS galaxy sample (Jansen et al. 2001). The Spearman correlation coefficient is −0.55, with the two-sided probability of obtaining this value by chance being almost zero. This indicates that the intrinsic [OII]λ3727/Hα flux ratio is correlated with the abundance-sensitive R23 parameter with a large statistical significance. The observed correlation does not come as a surprise, as the variation of the R23 parameter is related to the variation of both the [OII]λ3727/Hα ratio and the ionization conditions as traced by the ratio of two different oxygen emission lines, i.e., log R23 ∝ log([OII]λ3727/Hα) + log(1 + O32), where O32 = [OIII]λ4959, λ5007/[OII]λ3727 is an ionization-sensitive ratio.
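The two equivalent-width-based parameters used here can be computed directly, as in the short sketch below; the argument names are illustrative.

```python
def r23_o32_from_ews(ew_oii, ew_oiii_4959, ew_oiii_5007, ew_hbeta):
    """Abundance-sensitive R23 and excitation-sensitive O32 parameters computed
    from emission line equivalent widths, as done for the 2dF spectra above."""
    oiii = ew_oiii_4959 + ew_oiii_5007
    r23 = (oiii + ew_oii) / ew_hbeta
    o32 = oiii / ew_oii
    return r23, o32

# Example: a moderately excited star-forming galaxy
print(r23_o32_from_ews(ew_oii=30.0, ew_oiii_4959=8.0, ew_oiii_5007=24.0, ew_hbeta=12.0))
```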
The dependence of the abundance-sensitive R23 parameter on the metallicity is degenerate. Indeed, at a fixed value of R23 two different values of metallicity are possible: at the same oxygen abundance, different ionization parameters lead to different values of R23 (McCall et al. 1985). Different techniques have been developed to break this degeneracy with some success (Alloin et al. 1979; McGaugh 1991; van Zee et al. 1998; Kobulnicky et al. 1999). To estimate the oxygen abundance we have used the calibration of McGaugh (1991). This calibration is parameterized as a function of the excitation-sensitive parameter O32. We have used the secondary metallicity indicator [NII]λ6583/Hα to determine which branch of the McGaugh calibration to use (see Lamareille et al. 2004 for a detailed discussion of the abundance estimates for the galaxy sample). Fig. 12 shows the relationship between oxygen abundance, expressed in terms of 12 + log(O/H), and the extinction-corrected [OII]λ3727/Hα ratio. Large filled circles and the associated bars show the means and the standard deviations of the [OII]λ3727/Hα ratio distributions in 0.2 dex wide bins. The relationship between the extinction-corrected [OII]λ3727/Hα ratio and oxygen abundance splits into two regimes. For metal-poor galaxies, i.e., 12 + log(O/H) ≲ 8.4, the intrinsic [OII]λ3727/Hα flux ratio increases with oxygen abundance. For these galaxies, the Spearman rank correlation coefficient is 0.73, with a two-sided probability of obtaining this value by chance almost equal to zero, i.e., 2.5 × 10^−51. This indicates a strong correlation between the extinction-corrected [OII]λ3727/Hα ratio and oxygen abundance for metal-poor galaxies. On the other hand, metal-rich galaxies, i.e., 12 + log(O/H) ≳ 8.4, show a similar trend but with a slope of the opposite sign. The Spearman rank correlation coefficient is −0.6, with a two-sided probability of obtaining this value by chance almost equal to zero, i.e., 2.5 × 10^−33, suggesting a strong anti-correlation between the intrinsic [OII]λ3727/Hα flux ratio and oxygen abundance. The metal-rich branch in the [OII]λ3727/Hα vs. 12 + log(O/H) diagram is consistent with the same relationship for the NFGS galaxy sample constructed by Kewley et al. (2004) using the McGaugh (1991) calibration. However, there is a concern here. Because radial abundance gradients are known to exist in spiral galaxies, it has been argued, depending on the metallicity gradients and the relative weight of different Hii regions in the integrated emission line spectra, that the R23 parameter might not be a useful indicator of the overall galaxy metallicity (Stasińska & Sodré 2001; but see Kobulnicky et al. 1999 for a different view). To establish the dependence of the [OII]λ3727/Hα flux ratio on the emitting gas metallicity, it is useful to confirm the observed correlation in Fig. 12 using metallicity indicators other than the R23 parameter. The [NII]λ6584/Hα ratio has recently been proposed as an empirical metallicity indicator (van Zee et al. 1998; Denicoló et al. 2002). The [NII]λ6584/Hα ratio is less sensitive to the electron temperature than the R23 parameter, making this ratio less affected by the double-valued problem (e.g., Kewley & Dopita 2002). A valuable advantage of using this emission line ratio is its independence of both reddening and the accuracy of the relative flux calibration. In integrated spectra of galaxies, one expects, however, a non-negligible contribution from a diffuse medium (e.g., Collins et al. 2000; Zurita et al. 2000).
The [NII]λ6584/Hα ratios in the diffuse medium are generally larger than in nearby Hii regions. The effect of the diffuse medium is to increase the [NII]λ6584/Hα ratio by about 30% at most: this increase is smaller than the metallicity dependence of this ratio, making the ratio a useful metallicity indicator (see Stasińska & Sodré 2001, and references therein). It is worth mentioning that our sample galaxies are distributed along a well defined sequence in the [NII]λ6584/Hα ratio versus R23 parameter diagram, interpreted as a metallicity-excitation sequence, similar to the sequence defined by local Hii galaxies (Fig. 14c of McCall et al. 1985). Another proposed empirical metallicity indicator that does not suffer from the double-valued problem is the [NII]λ6584/[OII]λ3727 ratio (Dopita et al. 2000; Kewley & Dopita 2002). It is however strongly dependent on the extinction correction and on the spectrophotometric accuracy of the spectra. Fig. 13 shows the relationship between the extinction-corrected [OII]λ3727/Hα ratio and the [NII]λ6584/Hα ratio. For metal-rich galaxies, i.e., 12 + log(O/H) ≳ 8.4, nitrogen is thought to be predominantly a secondary element (e.g., Vila-Costas & Edmunds 1993; Henry et al. 2000), so the observed trend reflects the sensitivity of the intrinsic [OII]λ3727/Hα ratio to abundance within this metallicity regime. For galaxies with low [NII]λ6584/Hα ratio, i.e., mainly metal-poor galaxies for which nitrogen is a primary element (Matteucci 1986), the relationship between [NII]λ6584/Hα and the extinction-corrected [OII]λ3727/Hα flux ratio is reversed. The Spearman correlation coefficient is 0.33, with a two-sided probability of obtaining this value by chance of 1.3 × 10^−8. This indicates a statistically significant correlation between the metallicity indicator and the extinction-corrected [OII]λ3727/Hα flux ratio for metal-poor galaxies. This confirms that the variation of the extinction-corrected [OII]λ3727/Hα ratio is coupled with the evolution of metallicity.
[OII]λ3727/Hα ratio and excitation state
Kewley et al. (2004) have found that for 12 + log(O/H) ≳ 8.5, the variation of the extinction-corrected [OII]λ3727/Hα flux ratio does not depend on the ionization state of the interstellar emitting gas. For our galaxy sample, the scatter of the extinction-corrected [OII]λ3727/Hα flux ratio at a given metallicity appears to be related to the variation of the ionization parameter in galaxies. Indeed, at a given oxygen abundance, galaxies with a large ionization-sensitive [OIII]λ5007/Hβ ratio, shown as filled squares in Fig. 12, tend to have a lower intrinsic [OII]λ3727/Hα flux ratio than galaxies with a low-to-intermediate ionization-sensitive ratio. This suggests that the variation of the excitation state of the interstellar emitting gas in galaxies may contribute to the observed variation of the [OII]λ3727/Hα ratio.
The left panel of Fig. 14 shows the diagnostic diagram of the [OII]λ3727/Hβ ratio as a function of [OIII]λ4959, λ5007/Hβ for our galaxy sample. Large/small circles show metal-poor/metal-rich galaxies, i.e., 12 + log(O/H) ≤ 8.4 (> 8.4). The continuous line shows the theoretical sequence of McCall, Rybski & Shields (1985) for line ratios of Hii galaxies as a function of metallicity. Along the track, the metallicity is high at the lower left, i.e., for low excitation systems, and low at the upper right, i.e., for high excitation systems (McCall et al. 1985). Most of the metal-poor galaxies in the sample lie in the moderate- to high-excitation regime populated by local Hii galaxies, i.e., log([OIII]λ5007/Hβ) ≥ 0.3, while metal-rich galaxies are located in the low-excitation regime. The right panel of Fig. 14 shows the [OIII]λ5007/Hβ ratio versus the absolute bj-band magnitude for our sample galaxies. The galaxies define a continuous sequence in this diagram. The observed sequence is interpreted as reflecting a variation in the metallicity of the ionized gas (Dopita & Evans 1986; Stasińska 1990). On average faint/metal-poor galaxies tend to be highly ionized, while bright/metal-rich galaxies are characterized by low ionization parameters.
The line ratio O32 is a function of both ionization parameter and metallicity (Kewley & Dopita 2002). For a galaxy sample that covers a large range of metallicity, a given O32 could correspond to different combinations of abundances and ionization parameters. In order to distinguish between the effects of ionization and metallicity, we have split the galaxy sample into metal-rich, i.e., 12 + log(O/H) > 8.4, and metal-poor, i.e., 12 + log(O/H) ≤ 8.4, subsamples. Fig. 15 shows the extinction-corrected [OII]λ3727/Hα flux ratio versus the O32 ratio for the metal-rich and metal-poor subsamples respectively. The O32 ratio has been estimated using emission line equivalent widths; it has been shown that estimates of this ratio using equivalent widths give results similar to those using emission line fluxes. Large circles show galaxies with log([OIII]λ5007/Hβ) ≥ 0.5. Note that for the metallicity range covered by galaxies in our sample, the [OIII]λ5007/Hβ ratio is sensitive mainly to the ionization parameter, and is almost independent of metallicity (Kewley et al. 2004).
For metal-rich galaxies, the Spearman correlation coefficient for the relationship between the extinction-corrected [OII]λ3727/Hα ratio and the O32 ratio is 0.07, with a two-sided probability of obtaining this value by chance of 0.05. This indicates that there is no statistically significant correlation between the intrinsic [OII]λ3727/Hα flux ratio and the ionization-sensitive O32 ratio, in agreement with the Kewley et al. (2004) result. The subsample of metal-rich galaxies spans a limited range in the ionization-sensitive ratio: the distribution of the O32 ratio for metal-rich galaxies has a mean of 0.63 ± 0.01, and 70% of the galaxies in this subsample have a ratio less than 0.5. The low O32 ratio suggests that for metal-rich galaxies, a significant fraction of the oxygen emission results from O+ species.
Metal-poor galaxies exhibit a larger range of ionization-sensitive diagnostic ratios, extending to extreme excitation states. The majority of galaxies in our sample with large excitation-sensitive ratios are metal-poor. The distribution of the O32 ratio for metal-poor galaxies has a mean of 1.66 ± 0.07, with 72% of the galaxies having O32 larger than unity.
Metal-poor galaxies with low to moderate excitation, i.e., log([OIII]λ5007/Hβ) < 0.5, cover a wide range of excitation-sensitive O32 ratios.
The lack of a dependence of the extinction-corrected [OII]λ3727/Hα ratio on the ionization state of the interstellar medium for NFGS galaxies may be attributed to the absence of such highly ionized metal-poor galaxies: that sample consists mostly of normal star-forming galaxies, with few active starburst galaxies and extremely metal-poor dwarfs (Jansen et al. 2000). The observed dependence of the extinction-corrected [OII]λ3727/Hα flux ratio on the ionization state of the interstellar medium for metal-poor and highly ionized galaxies suggests that at low metallicity, where the electron temperature is very high, a significant fraction of oxygen atoms may be in the form of O++ and higher ionization states. For metal-poor galaxies with a high ionization parameter, the variation of the extinction-corrected [OII]λ3727/Hα ratio is regulated by the variation of the ionization parameter rather than by metallicity. Fig. 16 shows the distributions of Hα and [OII]λ3727 emission line equivalent widths, oxygen abundance, and bj-band absolute magnitude for galaxies with log([OIII]λ5007/Hβ) ≥ 0.5; the medians of these distributions are 48Å and 85Å for the Hα and [OII]λ3727 equivalent widths, 12 + log(O/H) = 8.1 (∼ Z⊙/4), and M(bj) = −17.4 respectively. The highly ionized galaxy population in our sample, for which a strong anti-correlation is observed between the extinction-corrected [OII]λ3727/Hα flux ratio and the excitation-sensitive ratio O32, consists mainly of faint metal-poor galaxies in which the starburst is still vigorously active, maintaining the high ionization conditions. An important conclusion regarding the highly ionized galaxy population is that estimates of their star formation rates based on the [OII]λ3727 luminosity may be significantly underestimated, even when the dependence of the extinction-corrected [OII]λ3727/Hα ratio on metallicity is corrected for. Guzmán et al. (1997) have shown that for a z = 0.137 compact field galaxy with extreme ionization-sensitive ratios, i.e., O32 = 3.8 and [OIII]λ5007/Hβ = 5.27, the star formation rate based on the [OII]λ3727 luminosity is underestimated by a factor of 6 compared to the star formation rate based on the Hα luminosity.
SUMMARY AND CONCLUSIONS
We have used spectrophotometric data for a sample of 1 124 nearby star-forming galaxies from the 2dFGRS sample, spanning a range of 7 magnitudes in M(bj), to investigate the systematic variation of the [OII]λ3727/Hα emission-line ratio as a function of galaxy and interstellar emitting gas properties.
The 2dF fibres cover, on average, about 11% of the total light of the galaxy. No evidence is found for a systematic aperture bias affecting the estimate of the emission line properties. This suggests that our spectra are sufficiently representative of the integrated galaxy spectra. The nebular extinction as derived from the Balmer decrement is found to correlate with the intrinsic absolute luminosity. The mean of the distribution of the extinction-corrected [OII]λ3727/Hα emission line flux ratio is similar to what was found for other galaxy samples, selected in different ways, confirming that the internal reddening is a driver behind the variation of the observed [OII]λ3727/Hα ratio. We confirm that there is a strong correlation between the extinction-corrected [OII]λ3727/Hα ratio and the oxygen abundance for metal-rich galaxies, and extend the observed correlation further into the metal-poor regime, i.e., 12 + log(O/H) ≲ 8.4. This relationship consists of two branches, with the [OII]λ3727/Hα ratio increasing (decreasing) as a function of the oxygen abundance for 12 + log(O/H) ≲ 8.4 (≳ 8.4). For metal-rich galaxies, there is no clear dependence of the extinction-corrected [OII]λ3727/Hα ratio on the ionization parameter, in agreement with what was reported for the NFGS sample galaxies. However, a strong correlation is seen for metal-poor galaxies, especially for those with high ionization-sensitive ratios. These galaxies tend to be faint and strong [OII]λ3727 emitters. For these galaxies, the [OII]λ3727/Hα ratio is more sensitive to the variation of the ionization parameter than to the variation of the oxygen abundance.
An emission-line galaxy spectrum is the result of many physical properties of the ionized gas, e.g., the chemical abundance and dust content, and of the relative importance of the ongoing star-forming activity, e.g., the star formation timescale. The excitation state depends both on the emitting gas abundance and on the ionizing stellar flux, which in turn depends on the effective temperature of the ionizing stars, which depends on the stellar initial metallicity, and on the age of the ongoing star formation event. Different detection techniques preferentially detect emission line galaxies at different stages of the starburst. An important conclusion is that using the [OII]λ3727 emission line as a star formation rate indicator requires a good understanding of the selection criteria of the galaxy sample under investigation, and how they determine its properties, i.e., extinction, metallicity, and excitation state.
"Physics"
] |
Person-Centered Learning Using the Peer Review Method: An Evaluation and a Concept for Student-Centered Classrooms
Abstract — Using peer assessment in the classroom to increase student engagement by actively involving the pupils in the assessment process has been practiced and researched for decades. This paper analyzes the applicability of peer assessment to exercises at secondary school level and makes recommendations for its use in computer science courses. Furthermore, a school pilot project introducing student-centered classrooms, called "learning offices", is described. Additionally, a concept for the implementation of peer assessment in such student-centered classrooms is outlined. The evidence collected suggests that peer review is a viable option for small- and medium-sized exercises in the context of computer science education at secondary school level under certain conditions, which are discussed in this paper.
Introduction
Peer assessment has been used in the classroom for decades as a method to increase student engagement by actively involving the pupils in the assessment process. In 1983, Carl Rogers, who is seen as the inventor of the person-centered approach as a result of his research in client-centered psychotherapy [1], described a science course at a university involving peer assessment as an "inspiring addition to the documentation of a person-centered approach" and admired the professor's evaluation process in which "the students play a major part" [2, pp. 89-93]. Freiberg built on Rogers' work and provided evidence of the positive effects of applying person-centered principles to the classroom, citing a study by the National Consortium for Humanizing Education (NCHE) and summarizing that students "learn more and behave better when they receive high levels of understanding, caring, and genuineness than when they receive low levels of support" [3]. Motschnig et al. iteratively introduced person-centered principles to a computer science course in higher education, making it the best-rated bachelor-level course of the university excluding courses rated by fewer than five students [4], which indicates that introducing person-centered approaches such as peer review to computer science courses has a positive impact on the students' perception of the course as well as on the learning effect. Dochy et al. performed an extensive literature review [5] and reported positive effects of peer assessment on students' learning as they become more involved in both the learning and the assessment process. Gibbs analyzed reports and studies regarding students' experience of feedback in his 2006 article "How assessment frames student learning" [6] and found indicators that suggest an increase in student performance if their work is peer reviewed by other students. Gibbs stated that it is not the quality of the feedback which increases student engagement, but rather the immediacy of feedback and the fact that it is peer reviewed. He derived eleven "conditions under which assessment supports student learning", six of which address feedback, highlighting the importance of the quality, quantity and timing of feedback.
Bauer et al. conducted an empirical study which analyzed students' opinions of online peer review in the context of higher education, implementing a peer review system for a scientific writing course [7]. They concluded that students appreciate online peer review, and they highlight the importance of the review criteria. A computer science course addressing Unix shell programming in higher education was evaluated by Sitthiworachart et al. [8]. They concluded that students appreciate peer assessment as they realize their own mistakes by looking at the work of others and start thinking about their own work more deeply. In addition, most of the students were satisfied with the mark from the peer assessment. Gehringer used peer review in three computer science classes, one undergraduate and two graduate, and evaluated the students' perception of the peer review method [9]. He found that students perceive peer review as being helpful to the learning process and value the feedback on their work.
However, reviews do not need to be done online: Figl et al. compared online peer reviews with face-to-face peer reviews in 2006 [10], focusing on collaboration aspects. They found that face-to-face reviews improve communication as they promote discussions, but may require more effort from the students to conduct, which is why they recommend combining both methods. Standl developed a framework of educational patterns to be applied to computer science courses at secondary school level, including the peer check as one of the assessment methods for person-centered learning [11]. He suggests that students assessing each other learn more than students who are only assessed by the teacher. However, he recommends using this pattern primarily for projects as it is a time-consuming task.
This paper builds on the existing literature and incorporates previously published research [12]. It analyzes the applicability of peer assessment to smaller exercises in computer science classes at secondary school level. Courses bound to a strict curriculum as an external requirement may require students to learn certain topics through exercises, which is not typical for person-centered classrooms. However, introducing person-centered methods such as peer review to a traditional classroom setting may still provide the benefits of the peer assessment method, which is analyzed in this paper. Standl suggests using peer review after a project phase, which raises the question of whether peer assessment is also useful for small- and medium-sized exercises. Based on the reviewed literature, it seems natural that the advantages of peer review can also be observed when it is used for regular exercises in traditional classrooms.
Rogers questions traditional school practices characterized by state-designated and prescribed curricula, standard tests and instructor-chosen grades, which reduce "meaningful learning" to an "absolute minimum" [2, pp. 11-21]. In this paper, we outline an approach to student-centered classrooms with individual advancement that is compliant with a given curriculum, as well as the use of peer review as a person-centered learning method. These student-centered classrooms have been introduced within the scope of a school pilot project and are called "learning offices", a term derived from the German word "Lernbüro" [13].
This paper deals with the following research questions and makes recommendations for using peer review for regular exercises in computer science courses at secondary school level:
1. To what extent does peer review promote student-centered learning?
2. Is the feedback quality of students comparable to the feedback of a teacher?
3. Do students receive feedback in a timelier manner using peer review?
4. Does grading become more or less transparent?
5. Are reviews by peers a reasonable alternative to teacher assessment?
6. What is a reasonable number of exercises to be assessed by peers?
7. Are students overall satisfied with the peer review method?
Test Setup
Two secondary school classes in their 13th year of study, hereafter referred to as "A" and "B", consisting of 29 and 28 students respectively, were introduced to the peer assessment method within the scope of the same software engineering course. Both classes had 13 exercises to be assessed throughout the software engineering course. Two of these 13 exercises were assessed by peers, while the other eleven exercises were assessed by one of the two teachers. The students were asked to evaluate those two exercises using an anonymous online questionnaire and compare them to exercises evaluated by the teacher. At the end of the course, they were asked to rate the learning motivation for doing all 13 exercises. The software engineering course used a blended learning concept, i.e., the lessons were supported by an online Moodle course which contained all relevant information, learning material, discussion forums, and completed exercises as well as grading. The two peer reviews were conducted using the Moodle workshop activity, which allows students not only to upload their solution for the respective exercise, but also to grade each other. A workshop activity consists of five phases [14]:
1. Setup. During this phase the instructor describes the exercise and provides instructions for the students. Another essential task of this phase is creating the assessment form: teachers define criteria to be used by the students to grade each other. A criterion has a description and can be graded using points or a scale. One of the most important tasks of the instructor is defining clear and concise criteria for the assessment phase, which should be mutually agreed upon with the students.
2. Submission. Students can submit their solutions during the submission phase, provided that the submission deadline (if set) has not yet passed. After the students have uploaded their solutions, the instructor can randomly or manually assign submissions to the students. By default, a random assignment uses at least five reviewers per submission. However, this number can and should be adjusted according to the preferences of the teacher and the students (a simplified sketch of such an allocation is given after this list). Meeting the submission deadline is crucial, as assigning late submissions requires a lot of effort.
3. Assessment. During this phase students review the solutions of their colleagues, provided that the (optionally configured) assessment deadline has not yet passed. Students use the assessment form for grading, which has been defined by the teacher in the setup phase.
4. Grading evaluation. As soon as the workshop activity has been switched to the grading evaluation phase, students can no longer edit their assessments. In this phase the teacher can review the peer assessments and manually override them.
5. Closed. When the instructor closes the workshop, the grades as well as the feedback are posted to the students' gradebook.
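The following is a minimal sketch of the kind of random reviewer allocation performed in the submission phase, assuming a fixed number of reviewers per submission and no self-review; it is not Moodle's actual algorithm, and the names are ours.

```python
import random

def assign_reviewers(students, n_reviewers=5, seed=42):
    """Assign each student's submission to n_reviewers other students.
    Every student also ends up reviewing roughly n_reviewers submissions."""
    rng = random.Random(seed)
    assignments = {s: [] for s in students}   # submission author -> reviewers
    load = {s: 0 for s in students}           # how many reviews each student does

    for submitter in students:
        # candidates: everyone else, preferring students with the lightest load
        candidates = sorted((s for s in students if s != submitter),
                            key=lambda s: (load[s], rng.random()))
        reviewers = candidates[:n_reviewers]
        assignments[submitter] = reviewers
        for r in reviewers:
            load[r] += 1
    return assignments

# Example: a class of 29 students, as in class "A"
print(assign_reviewers([f"student_{i:02d}" for i in range(29)])["student_00"])
```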
Assignments
The students were asked to work on each of the 13 exercises for one to three weeks. The two peer assessments were conducted using an iterative approach: the students' feedback on the first peer assessment and the lessons learned were discussed in the classroom and incorporated into the second one. The first peer reviewed exercise involved implementing a simple person database using the Java Platform, Enterprise Edition (Java EE), which "provides a standards-based platform for developing web and enterprise applications" [15] used for implementing multitier applications. The second assignment requested students to develop a graphical interface for a route planner using a RESTful API. Representational State Transfer (REST), initially developed by Fielding and defined in his dissertation [16], is an architectural style for distributed systems based on a set of design criteria [17]. For this exercise, the students were asked to use the Google Maps Directions API [18] and implement a graphical user interface employing Qt and PySide [19].
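As an illustration of the data-fetching part of this exercise, the sketch below queries the Directions API JSON endpoint with the requests library. The endpoint and parameter names follow Google's public documentation and require a valid API key; treat the exact response fields as assumptions to be checked against the current API.

```python
import requests

def fetch_route(origin, destination, api_key):
    """Query the Google Maps Directions API (JSON endpoint) for a route and
    return a short human-readable summary of the first returned leg."""
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json",
        params={"origin": origin, "destination": destination, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    if data.get("status") != "OK":
        raise RuntimeError(f"Directions request failed: {data.get('status')}")
    leg = data["routes"][0]["legs"][0]
    return f'{leg["distance"]["text"]}, about {leg["duration"]["text"]}'

# print(fetch_route("Vienna", "Graz", api_key="YOUR_KEY"))  # requires a valid key
```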
After the students had read the instructions and review criteria, they were given two weeks to solve the Java EE exercise and one week to implement the RESTful client. After submission, the students were randomly assigned five to six reviewers and five to six reviewees for the peer review. The whole process was designed to be anonymous: the reviewees did not know their reviewers and vice versa. This is atypical for person-centered approaches; however, it helps to reduce prejudice.
Evaluation
To quantitatively assess the students' attitude towards the peer review method, both classes were asked to voluntarily fill out an anonymous questionnaire to give feedback on both peer assessments and estimate the impact on several factors of learning. They were asked to compare the peer assessment with regular teacher assessment and rate their level of agreement from "disagree" (1) to "agree" (5) on a semantic differential scale, a "generalizable technique of measurement" [20] developed by Osgood et al. to measure the meaning of words. Unlike the Likert scale, which provides labels for each possible option in its originally published form [21], a semantic differential scale only labels the end points and visually indicates a continuous scale to simulate an interval scale [22]. The students rated their agreement with the following statements:
1. I have received more feedback than usual.
2. I have received the feedback in a timelier manner.
3. The overall quality of the feedback was higher.
4. I studied the task's subject matter more thoroughly.
5. I have learned something new from the solutions of others.
6. I think that others learned something new from my solution.
7. Grading was more transparent.
8. I am overall very satisfied with the teacher assessment method.
9. I am overall very satisfied with the peer assessment method.
Furthermore, the students were asked to define their preferred number of exercises to be reviewed by peers: 100%, 75%, 50%, 25%, or 0% of all exercises. Finally, they could give positive feedback and ideas for improvement through two open questions.
At the end of the school year, the students were asked for another rating of all 13 exercises regarding their motivation to learn. They were asked to rate each exercise from "not motivating" (1) to "motivating" (5). The students were shown the results directly after they filled out the form and gave feedback in a final discussion.
The following error bars represent 95% confidence intervals calculated using the t-distribution first described by Gosset [23]. The bars should give an impression of the overall variability and do not necessarily indicate implications for a larger population, as the students are not representative of it. The mean value with confidence intervals was chosen over the median value with the interquartile range as the visualizations are more powerful and intuitive. To account for multiple testing during the analyses, the level of significance was adjusted to α = 0.005, which is equivalent to a Bonferroni correction for 10 simultaneous tests [24].
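The interval and the adjusted significance level can be computed as sketched below. The ratings are invented placeholder values, not the study data; only the formulas (t-based confidence interval, Bonferroni division of the family-wise level) follow the procedure described above.

```python
import numpy as np
from scipy import stats

def mean_ci(ratings, confidence=0.95):
    """Mean and half-width of the t-based confidence interval for one item."""
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.size
    mean = ratings.mean()
    sem = ratings.std(ddof=1) / np.sqrt(n)               # standard error of the mean
    half_width = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * sem
    return mean, half_width

# Illustrative agreement ratings (1-5) for one statement
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4, 5, 4]
m, hw = mean_ci(ratings)
print(f"mean = {m:.2f}, 95% CI = [{m - hw:.2f}, {m + hw:.2f}]")

# Bonferroni-adjusted per-test significance level for 10 simultaneous tests
alpha_family = 0.05
alpha_per_test = alpha_family / 10      # = 0.005, as used in this study
```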
Results
The following charts show the mean level of agreement with the described statements. Fig. 1 presents the reported feedback quantity and timing of the first and second peer reviewed exercise. The students agreed in the second iteration with an average of 3.9 that they received more feedback when the exercise was peer reviewed, which is explainable by the number of reviewers: five assessments provide more feedback than a single assessment by the teacher. The measured increase in feedback quantity from the first iteration to the second is explainable by the revised feedback modalities. Written feedback was optional in the first iteration, which led to students giving only ratings, but no suggestions for improvement. This issue was discussed with the students and they agreed on giving written feedback on each submission in the second iteration. The students therefore report that they receive more feedback on peer reviewed exercises than on exercises assessed by the teacher.
The students also perceived a measurable improvement in the timing of feedback with an average level of agreement of 3.9 for both iterations. Receiving feedback on an exercise within one week seems to be faster than the average assessment time required by a teacher. Due to administrative work and late submissions, the second iteration needed two weeks for the grading phase in class B, which explains the drop.
Verbal answers regarding what the students liked about the quantity and timing of the feedback were:
• "Mostly more feedback than by a teacher"
• "Fast feedback"
• "Exercises were graded within one week. You don't have to wait for a month for a feedback."
• "Guarantee that the exercise is graded within one week"
• "Feedback within one week"
Fig. 2 depicts the feedback quality as well as the self-reported student engagement. The students were not too pleased with the quality of the feedback in the first iteration, as they rather disagreed with this statement. After the mentioned change from optional to mandatory written feedback, there was a distinct increase in feedback quality to a total response average of 3.2, indicating that the quality of the feedback given by students is comparable to the feedback of a teacher. This suggests that students give reasonable feedback, much as a teacher would.
The reported student engagement seems to be slightly better than when only assessed by the teacher: the average level of agreement over both iterations was 3.1, whereas the response of class A reached 3.5 in the second iteration. The noticeable drop of student engagement in the second iteration of class B can be explained by analyzing the qualitative feedback of the students: three students who disagreed with this statement wrote that they experienced an unfair deduction of points, resulting in frustration and reduced engagement. Such cases should be discussed with the respective reviewer and reviewee to clarify the problem and mutually agree on a fair grading. The teacher's role as a facilitator here is to minimize unfair penalties. Furthermore, the REST exercise was less open and less creative, which seems to have a limiting effect on student engagement. Some answers to what they liked about feedback quality and student engagement were:
• "Feedback of others is indeed helpful"
• "Receiving hints which are not given by teachers in some cases"
• "Altogether, better feedback than usual"
• "Finally useful feedback!"
• "Detailed feedback"
• "Suggestions for improvement"
• "More in-class communication about problems and solutions"
• "Pupils study the subject matter more in-depth"
The results of the evaluation of the students' own learning effect gained from the reviewing process, as well as their estimation of the learning effect of others, are shown in Fig. 3. Most of the students reported having learned something in the first iteration, while they rather disagreed with this statement in the second iteration. The reason again lies within the assignment: while the Java EE exercise allowed many different solutions (free choice of database, user interface, validation etc.) and was more creative, the REST exercise had more predefined elements and contained a suggested user interface as a screenshot. Therefore, the solutions to the REST exercise were similar, which is why students could not really learn new and different techniques. This suggests that peer review is especially suitable for more creative and open exercises.
Verbal feedback on what they liked about the learning effect includes:
• "You can see what others did better/worse"
• "Making sure that everything works on different computers"
• "Many hints"
• "A reasonable, informative comment. I am happy!"
• "I could give other students reasonable feedback, maybe even more than a teacher due to his limited time"
• "Grading good and bad solutions (Dos and Don'ts)"
• "You learn different coding styles"
• "Seeing different approaches"
• "Seeing how others solved the exercise"
• "You see how others solved the exercise. This improves the learning effect."
Fig. 4 shows the reported transparency of grading as well as the preferred number of exercises being peer reviewed. Students report that peer assessment follows a more transparent grading scheme than teacher assessment. The average level of agreement over both iterations was 3.6; only about 12% disagreed with this statement. In the students' view, five reviews seem to give a better estimate of the grade than a single assessment by the teacher.
On average, the students would prefer about half of the exercises (52%) to be peer reviewed. The noticeable drop in class B can again be explained by the frustration of some students due to reported unfair assessment. Qualitative feedback on what they liked about grading included:
• "Criteria clear and understandable"
• "You know what to focus on and what is important for grading"
• "A wide selection for a precise grading"
• "Mainly fair and understandable feedback"
• "The criteria were more precise this time"
• "More transparent"
The self-reported satisfaction with teacher assessment and peer assessment is depicted in Fig. 5. The students still like to be assessed by the teacher. Although they report liking the peer assessment method with a total average of 3.4, the students still prefer the opinion of the teacher as an expert. One of the students stated: "Teachers are always the best at grading. It's simple and reasonable." This can also be a compliment to the teacher, as they are very satisfied with his or her teaching and grading. However, although students value high-quality feedback from a teacher, peer assessment still seems to trigger higher student engagement, as also suggested by Gibbs' research [6]. Furthermore, the students themselves stated in the second iteration that student feedback is of the same quality as teacher feedback. Fig. 6 shows the results of the final rating of all 13 exercises regarding the motivation to study the respective topic. In three of four cases, the peer reviewed exercise received a rating notably better than the average of all other exercises. A single-factor analysis of variance (ANOVA) [25] shows that the difference between the mean values of all 13 groups is highly significant (p = 1.36×10^-14). A non-parametric Wilcoxon rank sum test with continuity correction [26], [27] against the null hypothesis that the true location shift equals 0 also returns p = 2.20×10^-4. Interestingly, class B rated the second peer reviewed exercise distinctly better than the first one. This seems to contradict their initial feedback right after the peer reviewed exercises were completed, as they reported more positive effects in the first iteration. In the discussion following the final rating of all 13 exercises, the students stated that the Java EE exercise was outside the sphere of their interest, which led to its low rating.
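The two significance tests reported above can be run along the following lines. The ratings below are randomly generated placeholders (not the study data), and the exact software used by the authors is not stated; the sketch merely shows equivalent SciPy calls (the Wilcoxon rank-sum test with continuity correction corresponds to the two-sided Mann-Whitney U test).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative motivation ratings (1-5) for the 13 exercises
exercise_ratings = [rng.integers(1, 6, size=50) for _ in range(13)]

# Single-factor ANOVA across all 13 exercises
f_stat, p_anova = stats.f_oneway(*exercise_ratings)

# Wilcoxon rank-sum / Mann-Whitney U with continuity correction:
# peer-reviewed exercises vs. all teacher-assessed exercises pooled
peer = np.concatenate(exercise_ratings[:2])
teacher = np.concatenate(exercise_ratings[2:])
u_stat, p_ranksum = stats.mannwhitneyu(peer, teacher,
                                       alternative="two-sided",
                                       use_continuity=True)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")
print(f"Rank-sum: U = {u_stat:.1f}, p = {p_ranksum:.3g}")
```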
Correlation Analysis
To better understand the analyzed factors of learning as well as their interactions with each other, a correlation analysis using Spearman's rank correlation coefficient [28] was carried out. Unlike Pearson's correlation [29], the Spearman correlation is non-parametric and can be used for monotonic relationships, including ordinal data. The p-values have been derived from a two-tailed t-test based on the approximation using the t-distribution as described in [30]. Table 1 presents the correlation matrix of the reported levels of agreement with the presented statements of both classes at the time the Java EE exercise was carried out. The matrix shows Spearman's correlation coefficients between the respective factors as well as the p-values. The table was designed to be symmetric for easier use and simple lookup. The highest significant correlation, with r_s = 0.73 (p = 1.2×10^-8), was measured between the satisfaction with the peer review method and the preferred number of peer assessments in class, which is a reasonable connection: the more satisfied students are with this method, the higher the number of reviews they would prefer. In turn, the satisfaction is strongly linked to the reported feedback quality with r_s = 0.59 (p = 1.4×10^-5). If students are satisfied with the quality of the received feedback, they also tend to be satisfied with the peer review method itself. Feedback quality is also significantly connected to feedback quantity and the preferred number of peer assessments in class, with r_s = 0.56 (p = 5.0×10^-5) and r_s = 0.54 (p = 1.2×10^-4) respectively.
The measured correlations between the preferred number of peer assessments in class and the transparency of grading as well as the feedback quantity are also significant, with r_s = 0.44 (p = 0.002) and r_s = 0.43 (p = 0.003) respectively, suggesting that not only feedback quality, but also feedback quantity and transparency of grading are crucial factors in favor of the peer review method. The preferred number of peer assessments also significantly correlates with the reported student engagement with r_s = 0.44 (p = 0.002). Even if less strongly and not significantly with this sample size, further factors also seem to impact the preferred number of reviews and satisfaction: the timing of feedback relates to them with r_s = 0.38 (p = 0.009) and r_s = 0.37 (p = 0.010) respectively; student engagement relates to satisfaction with r_s = 0.35 (p = 0.019). The correlation matrix of the analyzed factors of learning in the second iteration is shown in Table 2. This time, the highest correlation was found between the students' own learning effect and the preferred number of peer assessments, with r_s = 0.72 (p = 6.1×10^-6), which was not the case in the first iteration. This is a new and interesting finding: the correlation seems to be stronger in exercises which have a more predefined solution compared to exercises allowing creativity. This may indicate that students who did not manage to completely solve an exercise with a predefined solution benefit especially from the peer review method. The second highest correlation was measured between the satisfaction with the peer review method and the number of peer assessments in class, with r_s = 0.72 (p = 6.1×10^-6), which is again a reasonable connection.
The preferred number of peer assessments is also linked to student engagement with r_s = 0.60 (p = 4.2×10^-4), while student engagement seems to correlate with the perceived transparency of grading with r_s = 0.50 (p = 0.005) and with satisfaction with r_s = 0.49 (p = 0.006), although these correlations were not significant at the chosen significance level. The transparency of grading may correlate with satisfaction and feedback quality with r_s = 0.47 (p = 0.008) and r_s = 0.41 (p = 0.023) respectively, which might prove significant with a larger sample size. The links between feedback quality and the preferred number of peer assessments as well as satisfaction could not be shown to be significant this time. Feedback timing also did not correlate strongly with the number of peer reviews or with satisfaction this time, which could be explained by the fact that students had to wait two weeks for the feedback instead of one, showing the importance of instantaneous feedback. However, the timing significantly correlates with student engagement with r_s = 0.51 (p = 0.004), suggesting that fast feedback may promote student engagement.
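Correlation matrices of this kind can be assembled with SciPy's spearmanr, whose p-values are based on the t-distribution approximation mentioned above. The responses and column names below are placeholders for illustration only, not the questionnaire data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Illustrative questionnaire responses (1-5) for a few of the surveyed factors
df = pd.DataFrame({
    "feedback_quantity": [4, 5, 3, 4, 2, 5, 4, 3, 4, 5],
    "feedback_quality":  [3, 4, 3, 4, 2, 5, 4, 3, 3, 4],
    "satisfaction_peer": [4, 5, 2, 4, 3, 5, 4, 3, 4, 5],
    "preferred_share":   [50, 75, 25, 50, 25, 100, 50, 50, 75, 75],
})

factors = df.columns
n = len(factors)
r = pd.DataFrame(np.eye(n), index=factors, columns=factors)   # Spearman r_s
p = pd.DataFrame(np.zeros((n, n)), index=factors, columns=factors)

for a in factors:
    for b in factors:
        rho, pval = stats.spearmanr(df[a], df[b])   # p-value via t-approximation
        r.loc[a, b], p.loc[a, b] = rho, pval

print(r.round(2))
```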
To sum up, the satisfaction with the peer review method as well as the preferred number of peer assessments seem to be especially connected with the following factors:
1. Student engagement: students who report a higher student engagement also wish to have more exercises assessed by peers. This correlation proved significant in both iterations. This may suggest that the peer review method promotes the engagement of those students who are satisfied with this method, or that engaged students tend to favor peer assessment.
2. Learning effect: the learning effect was strongly connected to the preferred number of peer reviews in the second iteration. This indicates that students who learned from other submissions favor a higher number of peer reviews. Peer review may therefore benefit the learning effect.
3. Feedback quality: the first iteration revealed a significant correlation between the quality of feedback and satisfaction with the peer review method as well as the preferred number of peer assessments in class, indicating that peer assessment and feedback quality are strongly connected. The second iteration showed a similar correlation; however, it did not prove significant with the given sample size.
4. Feedback quantity: feedback quantity also correlated with satisfaction and the preferred number of peer reviews in the first iteration. This may indicate that the number of reviewers is especially important for open exercises.
5. Transparency of grading: the preferred number of peer reviews correlated with the perceived grading transparency in the first iteration. Students perceiving this method as fair also tend to give higher ratings for this method, while students who seemingly experienced unfair gradings do not favor it. The second iteration also revealed a similar, non-significant correlation.
Although the timing of the feedback did not prove significantly linked to the satisfaction and the preferred number of peer assessments, it seems to correlate with the student engagement. Moreover, the second iteration, which involved a grading phase of two weeks instead of one, showed a distinctly weaker correlation between feedback timing and the rating of this method, which depicts the importance of fast feedback.
Recommendations
Based on the presented empirical results, the following recommendations for promoting student-centered learning using the peer review method in computer science courses at secondary school level in traditional classrooms can be formulated:
1. Qualitative feedback. An agreement to provide written feedback should be made with the students. By default, giving qualitative feedback is optional in the Moodle workshop activity.
2. Anonymous feedback. Although this is unusual for person-centered approaches, reviewer and reviewee should be anonymous to reduce prejudices. This maximizes transparency of grading and prevents "upvoting" and "downvoting". This may sound easy at first glance, but students may be used to putting their name on their submissions.
3. Fast feedback. One of the main advantages of peer assessment is fast feedback. In order to be useful, feedback should be given in a timely manner [6]. One week seems to be a reasonable time for computer science exercises.
4. Black-box testing. The rating criteria should be formulated in a way that every student, regardless of whether he or she was able to solve the exercise, can assess them. Some students and teachers raised the objection that students may not be qualified to assess the assignments of others. However, if the criteria focus on features, students can rate the submissions from a user's point of view.
5. Transparent criteria and conditions. The rating criteria as well as the general conditions for giving feedback need to be communicated and agreed upon. This ensures that students use the same rating scale and grading becomes reproducible.
6. Final grading by the teacher. Although most of the peer review grades did not need to be changed, it is important to listen to the students when they have experienced an unfair grading. The issue should be discussed, with the teacher acting as a facilitator, to clarify the problem and mutually agree upon a fair grade.
7. Shared level of basic knowledge. Students need a certain level of expertise in order to give good feedback on the solutions of others. If they still struggle with computer science basics, it is questionable whether they are able to thoroughly test a program, even if the criteria are formulated from a user's perspective.
8. Exercises allowing creativity. Peer review seems to be especially useful if there are multiple possible solutions for the exercise, as the students seem to learn more from each other and are more engaged. This corresponds with the findings of Standl [11], who recommends using peer review for project-based assignments. The more behavioristic an exercise is, the less powerful peer review becomes.
9. Do not overuse it. Students report that they would be fine with a peer review on every other exercise. However, the two peer reviews were a refreshing alternative to the teacher assessments in this case. They are still time-consuming, both for the teacher and the students, and could lose their charm if they are overused.
Learning Office
The concept of a learning office or learning atelier features a studying environment supporting students in self-organized, individualized learning. A pioneer in the field, Margret Rasfeld, described the learning office by several distinct attributes [13], [31]. Firstly, learning contents are split into well-defined modules which the students work on independently. Self-explanatory materials allow the students to work at their own pace and current individual level. The students learn self-organization as they have to plan ahead, carry out and finish their modules in order to cover the total content of the subject. Secondly, instead of presenting materials, the teachers provide aid in the organization of students' study plans, similar to the "work contracts" in the "experiment" described in Rogers' book [2, pp. 45-56], as well as helping them to structure and revise their learning efforts. Instruments for structuring the students' learning are their personal logbooks, learning paths and regular tutorials. Thirdly, while working on contents is individualized and certain situations like oral presentations require working alone, the students are also encouraged to work in groups and tutor each other. Fourthly, in this concentrated working atmosphere, not only are the students more aware of their own learning status and goals, but the teachers are also able to support them individually. Finally, in order to complete a module, the students have to successfully pass tests. Upon completion of a module, they receive a certificate with detailed feedback and recommendations for building their future work on these acquired skills.
For each subject, the students in the learning office have to complete a predefined set of tasks or exercises. These are typically marked as either mandatory or optional; in order to complete the course positively, students have to do all of the mandatory exercises. Once this has been achieved, the number of optional exercises solved contributes to the final mark between one and four. Alternatively, in some classes where it is harder to distinguish between mandatory and optional topics, exercises may contain both. In these cases, grading an exercise is not binary but takes into account to what extent the tasks have been fulfilled.
In addition to the exercises, many subjects have written examinations for each module which are also taken into the final grade, either at a self-chosen individual date within a given timeframe or at a fixed date for all students of a class. Besides serving as a tool for grading, these examinations play another crucial role within the learning office: they propose a timeline for the students, indicating a deadline by which all tasks of a module have to be fulfilled. As it turns out, such a proposed timeline significantly helps students in structuring their efforts. These subjects usually have the exercises and tasks arranged in a linear order where the topics sometimes build strongly on each other, such as applied mathematics. Some subjects, on the other hand, provide exercises without a given order, only being dependent on exercises within the same module, while the modules themselves are largely independent.
Grading of exercises and tasks is done in different ways depending on the subject:
• In-class evaluation. In some subjects, the students have to do practical tasks in class and present their solutions to the teacher. They get direct feedback from the teacher on whether the task is finished or whether certain aspects still have to be refined.
• In-class examination. Sometimes a task may consist of a written examination about a (sub)module. The student approaches their teacher at the beginning of a lesson and is given a set of questions which have to be solved during the lesson.
• Off-class evaluation. Assignments are handed in on the online platform Moodle to be corrected or graded later. This typically includes a first feedback loop by the teacher where errors are pointed out and the student is given the chance to correct them. Only after the second round of hand-ins does the actual grading process start.
Exercises may be turned in as hand-written materials, which is typical for languages and several tasks in applied mathematics. An alternative is handing in computer-based exercises on Moodle, which happens more frequently in technical and IT subjects.
Concept for Peer Review in the Applied Mathematics Learning Office
The subject applied mathematics has structured modules based on booklets which provide detailed descriptions, explanations and exercises for the students and help them to create their personal scripts with definitions, descriptions, graphs and solved exercises. These hard-copy booklets are supported by online materials in module-based Moodle courses. In addition to the restructured material of the booklets, the online courses provide explanatory videos created by the teachers which guide students through the examples. Furthermore, students get the chance to check their current learning progress by trying online exercises on Moodle, which are individually created from a predefined set of questions covering the basic contents of the module. A big advantage is that these online self-checks provide immediate feedback to the students without the fear of feeling embarrassed in front of their peers or teacher if they are not yet ready for a test.
Materials and in-class evaluations are manually checked by the teacher, who also provides individual feedback to the students on their performance. Unlike in the computer science course presented in the previous chapters, the students have 4-5 years less experience at school and therefore lack the prerequisites for a fully student-centered reviewing and evaluation system. Nonetheless, the learning office is designed for students tutoring each other. In the learning environment of the applied mathematics learning office, two uses of an additional peer reviewing process step are to be implemented, for in-class evaluation and for off-class evaluation. An important aspect is to apply this method mainly to exercises in a first reviewing step which does not involve grading the person who receives the feedback.
The person giving feedback is graded for their competence and motivation in providing feedback, to ensure a certain quality on the one hand and, on the other, to allow them to receive recognition for efforts that show a deeper understanding than that of their peers. Giving peer review requires a higher level of understanding than just solving a problem oneself, and it allows students with a deeper understanding to reach an aspect of the subject which could not be gained by just going step by step through their own work.
In addition, for the in-class examination, we can add an additional loop where the students who have already finished this task may share their knowledge with their peers who have just taken their test, explaining to them errors and mistakes pointed out by the teacher during the examination. In this way, the learning office's intended synergies are applied as not only do the students with a deeper understanding provide the others with feedback on their in-class work, but they are also able to reach a higher cognitive level of competence in the subject.
Discussion
We analyzed the applicability of the peer review method to small- and medium-scaled exercises in computer science courses at secondary school level to introduce person-centered approaches to traditional classrooms. Based on empirical evidence collected over one year, the following answers to the research questions have been found:
1. To what extent does peer review promote student-centered learning?
Peer review seems to promote student-centered learning if the method is used correctly. The results indicate a clear improvement in feedback quantity and timing as well as student engagement and motivation to learn. In addition, the students liked the peer review method and regarded it as a refreshing alternative to predominating teacher assessments.
2. Is the feedback quality of students comparable to the feedback of a teacher?
Yes. Students report that feedback quality is indeed comparable to the feedback of a teacher if written feedback is given.
3. Do students receive feedback in a timelier manner using peer review?
Yes. The students received feedback on their assignments faster.
4. Does grading become more or less transparent?
The students stated that grading became more transparent, which is explainable by the higher number of persons who assess the submission. Furthermore, teachers have to pay special attention to the definition of the rating criteria for a peer review, so that the grading scheme of exercises is sufficiently transparent.
5. Are reviews by peers a reasonable alternative to teacher assessment?
Yes. Peer review seems to be a reasonable alternative to teacher assessment in computer science courses. Nevertheless, some constraints need to apply in order to make it a useful tool for teaching and assessing.
6. What is a reasonable number of exercises to be assessed by peers?
Students report that about every other exercise could be reasonably peer reviewed. However, peer review should not be overused. The exact number of exercises depends on the type of exercise.
7. Are students overall satisfied with the peer review method?
Yes. Students are satisfied with peer review as an assessment tool. However, they report that they still value the high-quality feedback of an expert.
Conclusion
To conclude, the overall feedback on the peer review exercises was very positive. The students reported that the quality of the feedback by students is comparable to the feedback of a teacher if written feedback is provided. The students received the feedback faster and they valued that grading was more transparent, because they received more than one grading. In addition, the teacher needs to pay special attention to the rating criteria. Peer review seems to be a reasonable alternative to teacher assessment; about every other exercise could be peer reviewed according to the students' feedback.
The correlation analyses revealed a strong and significant correlation between the preferred number of peer assessments and student engagement, which could indicate that this method promotes the engagement of students who are satisfied with the peer review method. The learning effect was highly correlated with the preferred number of peer reviews in the second iteration and seems to be higher in exercises with a predefined solution, which makes this method particularly attractive for applied mathematics. Feedback quality and feedback quantity were strongly connected to the satisfaction with the peer review method as well as the preferred number of peer reviews in the first iteration, which may indicate that feedback quality and quantity are especially important for open exercises. The transparency of grading was strongly connected to the preferred number of peer assessments in the first iteration. Although the timing of the feedback did not prove significant in either iteration, it correlated with student engagement in the second iteration.
Overall, the students were satisfied with this method. However, the students reported that they still value the high-quality feedback of a teacher, which can be a compliment to the teacher as they are very satisfied with his or her teaching and grading. It was found that some additional constraints, such as open assignments as well as obligatory and fast feedback, need to apply to make peer review practicable and reasonable for small- and medium-scaled exercises in traditional classrooms. We plan to further investigate peer review with different subjects and its use in student-centered classrooms to follow up on our current findings. | 9,527 | 2018-02-28T00:00:00.000 | [ "Computer Science", "Education" ] |
The Hausdorff Algebra Fuzzy Distance and its Basic Properties
Keywords: Algebra Fuzzy Absolute Value Space, Algebra Fuzzy Metric Space, Hausdorff Algebra Fuzzy Metric Space. In this article we recall the definition of an algebra fuzzy metric space and its basic properties. In order to introduce the Hausdorff algebra fuzzy metric between fuzzy compact sets, we first define the algebra fuzzy distance between two fuzzy compact sets; after that, basic properties of the Hausdorff algebra fuzzy metric between two fuzzy compact sets are proved. Finally, the main result of this paper is proved: if (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space, then (AFH(S), h, ⊚) is a fuzzy complete algebra fuzzy metric space. How to cite this article: Z. A. Khudhair and J. R. Kider, "Some Properties of Hausdorff Algebra Fuzzy Metric Space," Engineering and Technology Journal, Vol. 39, No. 07, pp. 1185-1194, 2021. DOI: https://doi.org/10.30684/etj.v39i7.2001 This is an open access article under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0).
INTRODUCTION
Kider in 2011 [1] introduced the definition of a fuzzy normed space, and in [2] he proved that this fuzzy normed space has a completion. Kider in 2012 [3] introduced a new type of fuzzy normed space. Kider in 2014 [4] proved that the Hausdorff standard fuzzy metric space is complete. Kider and Kadhum in 2017 [5] introduced the fuzzy norm for a fuzzy bounded operator on a fuzzy normed space and proved its basic properties; further properties were proved by Kadhum in 2017 [6]. Ali in 2018 [7] proved basic properties of complete fuzzy normed algebras. Kider and Ali in 2018 [8] introduced the notion of a fuzzy absolute value and studied properties of finite-dimensional fuzzy normed spaces. The concept of a general fuzzy normed space was presented by Kider and Gheeab in 2019 [9][10]; they also proved basic properties of this space and of the general fuzzy normed space GFB(V, U). Kider and Kadhum in 2019 [11] introduced the notion of a fuzzy compact linear operator and proved its basic properties. Kider in 2020 [12] introduced the notion of a fuzzy soft metric space and investigated and proved some of its basic properties. Again, Kider in 2020 [13] introduced a new type of fuzzy metric space called an algebra fuzzy metric space and proved its basic properties.
Here, in this work, the definition of an algebra fuzzy metric space is used and the basic properties of this space, together with some examples, are recalled. After that, the algebra fuzzy distance from a point in the universal set to a fuzzy compact set, and the algebra fuzzy distance from a fuzzy compact set in the universal set to another fuzzy compact set, are defined. This is the background needed to define the Hausdorff algebra fuzzy metric between two fuzzy compact sets. Then the basic properties of the Hausdorff algebra fuzzy metric space are investigated and proved. The final result in this paper is that if (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space then (AFH(S), h, ⊚) is a fuzzy complete algebra fuzzy metric space, where AFH(S) is the set of all nonempty fuzzy compact sets in S.
Definition 2.1: [13]
Let ⊚ : I × I → I be a binary operation; then ⊚ is said to be a continuous t-conorm (or simply a t-conorm) if it satisfies the following conditions for all s, r, z, w ∈ I, where I = [0, 1]: (i) ⊚ is continuous; (ii) s ⊚ 0 = s; (iii) s ⊚ r = r ⊚ s; (iv) s ⊚ (r ⊚ z) = (s ⊚ r) ⊚ z; (v) if s ≤ z and r ≤ w then s ⊚ r ≤ z ⊚ w.
Lemma 2.2:[13]
If ⊚ is a continuous t-conorm on I then
Example 2.3:[13]
The algebra product a ⊚ b = a + b − ab is a continuous t-conorm for all a, b ∈ I.
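A short verification that this operation satisfies the t-conorm conditions can be written out as follows (a sketch, assuming the standard axioms listed in Definition 2.1; amssymb's \circledcirc is used for ⊚).

```latex
% Verification that a \circledcirc b = a + b - ab is a continuous t-conorm on I = [0,1]
\begin{align*}
a \circledcirc 0 &= a + 0 - a\cdot 0 = a,\\
a \circledcirc b &= a + b - ab = b + a - ba = b \circledcirc a,\\
(a \circledcirc b) \circledcirc c &= a + b + c - ab - ac - bc + abc = a \circledcirc (b \circledcirc c),\\
a \circledcirc b &= 1 - (1-a)(1-b) \in [0,1],
   \quad\text{and } b \mapsto 1-(1-a)(1-b)\ \text{is nondecreasing,}
\end{align*}
so $\circledcirc$ is bounded in $I$, monotone, and continuous (being a polynomial),
and therefore satisfies the t-conorm conditions.
```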
Definition 2.5:[13]
If S ≠ ∅, ⊚ is a continuous t-conorm and m : S × S → I satisfies the following conditions for all s, r, z ∈ S:
(1) 0 < m(s, r) ≤ 1;
(2) m(s, r) = 0 if and only if s = r;
(3) m(s, r) = m(r, s);
(4) m(s, z) ≤ m(s, r) ⊚ m(r, z);
then (S, m, ⊚) is called an algebra fuzzy metric space. A space of the form (S, m, ⊚) with this structure is known, in the example of [13], as the discrete algebra fuzzy metric space.
Definition 2.8: [13] If (S, m, ⊚) is an algebra fuzzy metric space then fb(s, j) = {u ∈ S : m(s, u) < j} is known as an open fuzzy ball with center s ∈ S and radius j ∈ (0, 1). Similarly, the closed fuzzy ball is defined by fb[s, j] = {u ∈ S : m(s, u) ≤ j}.
Definition 2.9: [13] If (S, m, ⊚) is an algebra fuzzy metric space, then W ⊆ S is known as fuzzy open if fb(w, j) ⊆ W for any arbitrary w ∈ W and for some j ∈ (0, 1). Also, D ⊆ S is known as fuzzy closed if its complement is fuzzy open, and the fuzzy closure of D, denoted D̄, is defined to be the smallest fuzzy closed set containing D.
Definition 2.10: [13] If (S, m, ⊚) is an algebra fuzzy metric space then D ⊆ S is known as fuzzy dense in S whenever D̄ = S.
Theorem 2.11: [13] If fb(s, j) is an open fuzzy ball in an algebra fuzzy metric space (S, m, ⊚) then it is a fuzzy open set.
Proposition 2.12: [13] Suppose that (S, m, ⊚) is an algebra fuzzy metric space; then s_k → s if and only if m(s_k, s) → 0.
Definition 2.13: [13] In an algebra fuzzy metric space (S, m, ⊚) a sequence (s_k) is fuzzy Cauchy if for any given 0 < t < 1 there is N ∈ ℕ with m(s_m, s_k) < t for each m, k ≥ N.
Definition 2.14: [13] An algebra fuzzy metric space (S, m, ⊚) is known as fuzzy complete if every fuzzy Cauchy sequence (s_k) converges to some s ∈ S.
Definition 2.20:[13]
If (S, m_S, ⊚) and (V, m_V, ⊚) are two algebra fuzzy metric spaces and U ⊆ S, then a function T : S → V is called fuzzy continuous at u ∈ U if for every 0 < r < 1 we can find some 0 < t < 1 such that m_V[T(u), T(s)] < r whenever s ∈ U and m_S(u, s) < t.
If T is fuzzy continuous at every point of U then T is said to be fuzzy continuous on U.
Theorem 2.21:[13]
If (S, m_S, ⊚) and (V, m_V, ⊚) are two algebra fuzzy metric spaces and U ⊆ S, then a function T : S → V is fuzzy continuous at u ∈ U ⟺ whenever u_n → u in U then T(u_n) → T(u) in V. Here we begin to introduce the basic notions, and some of their properties, that will be used later in section three.
Definition 2.23:
Suppose that (S, m, ⊚) is an algebra fuzzy metric space; then S is fuzzy compact if every fuzzy open covering Ω of S has a finite fuzzy open subcovering, that is, there is a finite subcollection of Ω which still covers S.
Definition 2.24:
Assume that (S, m, ⊚) is an algebra fuzzy metric space and let P be a subset of S. Then P is called totally fuzzy bounded if for each 0 < r < 1 there is a finite set of points {u_1, u_2, …, u_k} ⊆ P such that whenever u ∈ S, m(u, u_j) < r for some u_j ∈ {u_1, u_2, …, u_k}. This set of points {u_1, u_2, …, u_k} is called a fuzzy r-net.
Proposition 2.25:
A totally bounded algebra fuzzy metric space is fuzzy bounded.
Proof:
Suppose that (S, m, ⊚) is totally fuzzy bounded and let 0 < r < 1 be given. Then there exists a finite fuzzy r-net for S, say A. Since A is a finite set of points, 0 < n(A) < 1, where n(A) = sup{m(d, a) : d, a ∈ A}. Now let u_1 and u_2 be any two points of S. There exist points d and a in A such that m(u_1, d) < r and m(u_2, a) < r. Now for n(A) and r there is t with 0 < t < 1.
Proposition 2.26:
If (S, m, ⊚) is a compact algebra fuzzy metric space then S is fuzzy totally bounded.
Proposition 2.27:
If (S, m, ⊚) is a compact algebra fuzzy metric space then (S, m, ⊚) is fuzzy complete.
Proof:
Assume that (S, m, ⊚) is a fuzzy compact algebra fuzzy metric space which is not fuzzy complete. Then we can find a fuzzy Cauchy sequence (p_k) in S that does not have a limit in S. Let p ∈ S; since p_k ↛ p, there exists 0 < r < 1 such that m(p_k, p) ≥ r for k = 1, 2, …. But (p_k) is fuzzy Cauchy, so there exists N ∈ ℕ such that m(p_j, p_m) < t for all j, m ≥ N. Choose m ≥ N for which m(p_m, p) < t. So the open fuzzy ball fb(p, t) contains {p_1, p_2, …, p_k}, where k ∈ ℕ. Now consider the fuzzy open cover {fb(p_1, t(p_1)), fb(p_2, t(p_2)), …, fb(p_k, t(p_k))}, where 0 < t(p_k) < 1 and S = ⋃_{j=1}^{k} fb(p_j, t(p_j)). But each fb(p_j, t(p_j)) contains p_k for only a finite number of values of k, so S must contain p_k for only a finite number of values of k. This is a contradiction. Hence (S, m, ⊚) must be fuzzy complete.
Theorem 2.28:
If (S, m, ⊚) is a totally fuzzy bounded and fuzzy complete algebra fuzzy metric space then (S, m, ⊚) is fuzzy compact.
Proof:
Assume that (S, m, ⊚) is not fuzzy compact. Then we can find a fuzzy open covering {O_λ : λ ∈ Λ} of S that does not have a finite fuzzy open subcovering. But S is fuzzy totally bounded, so it is fuzzy bounded; hence consider fb(p, r) for some 0 < r < 1 and some p ∈ S. Clearly fb(p, r) ⊆ S, and if S ⊆ fb(p, r) then we must have S = fb(p, r). Put t_k = r/2^k. Since S is fuzzy totally bounded, S can be covered by finitely many fuzzy open balls of radius t_1. By our assumption at least one of these fuzzy open balls, say fb(p_1, t_1), cannot be covered by finitely many of the sets O_λ. After many steps we get a sequence (p_k) with the property that, for each k, fb(p_k, t_k) ⊄ ⋃_{j=1}^{k} O_{λ_j} and p_{k+1} ∈ fb(p_k, t_k). We next show that the sequence (p_k) is convergent. Since p_{k+1} ∈ fb(p_k, t_k) it follows that (p_k) is a fuzzy Cauchy sequence in S, and since S is fuzzy complete, it fuzzy converges to some p ∈ S.
Now p belongs to some O_{λ_0}, and since O_{λ_0} is fuzzy open it contains fb(p, s) for some 0 < s < 1. Let N ∈ ℕ be such that m(p_k, p) < s for k ≥ N. Then, for any u ∈ S such that m(u, p_k) < t_k, it follows that m(u, p) ≤ m(u, p_k) ⊚ m(p_k, p) ≤ t_k ⊚ s < r for some 0 < r < 1, so that fb(p_k, t_k) ⊆ fb(p, r). Therefore fb(p_k, t_k) has a finite fuzzy open subcovering, namely by the set O_{λ_0}. Since this contradicts fb(p_k, t_k) ⊄ ⋃_{j=1}^{k} O_{λ_j}, the proof is complete.
Theorem 2.29:
Suppose that (S, m, ⊚) is a complete algebra fuzzy metric space and assume that U ⊆ S. Then U is fuzzy complete ⟺ U is fuzzy closed.
Proof:
Assume that U is fuzzy complete. Then by Theorem 2.19, for any u ∈ Ū there is a sequence (u_k) in U with u_k → u; but (u_k) is fuzzy Cauchy and U is fuzzy complete, so u ∈ U. Thus Ū ⊆ U, and since U ⊆ Ū this implies that U = Ū. Hence U is fuzzy closed. For the converse, assume that U is fuzzy closed and let (u_k) be a fuzzy Cauchy sequence in U. Then u_k → u ∈ S, which implies that u ∈ Ū; but U = Ū, so u ∈ U. Hence U is fuzzy complete.
Proposition 2.30:
Suppose that (S, m, ⊚) is algebra fuzzy metric space and let U ⊆ S. If U is fuzzy compact then U is fuzzy closed and fuzzy bounded.
Proof:
By Theorem 2.19, for any u ∈ Ū there is a sequence (u_k) in U with u_k → u; but U is fuzzy compact, so u ∈ U. Hence Ū ⊆ U, and since U ⊆ Ū this implies that U = Ū. Thus U is fuzzy closed. Now assume that U is fuzzy unbounded; then any sequence (u_k) in U will be unbounded, so any fuzzy open cover of U could not have a finite fuzzy open subcover of U. This contradicts our assumption that U is fuzzy compact. Hence U must be fuzzy bounded.
3. THE HAUSDORFF ALGEBRA FUZZY DISTANCE
In this section we use the definition of an algebra fuzzy metric space and the basic properties of this space, and then define the algebra fuzzy distance between two fuzzy compact sets; this leads to the notion of the Hausdorff algebra fuzzy distance between two compact sets. This notion is the key to all results in this section. Suppose that (W_k) is a fuzzy Cauchy sequence in AFH(S); then for any r ∈ (0, 1) there is N ∈ ℕ such that h(W_m, W_k) ≤ r, or equivalently W_m ⊆ W_k ⊚ r and W_k ⊆ W_m ⊚ r, for all m, k ≥ N.
Theorem 3.17: Suppose that (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space and assume that (W_k) is a fuzzy Cauchy sequence in (AFH(S), h, ⊚). Let (k_n) be an increasing sequence with 0 < k_1 < k_2 < … < k_n < …, and assume that (w_{k_j}), with w_{k_j} ∈ W_{k_j}, is fuzzy Cauchy in S. Then there is a fuzzy Cauchy sequence (ŵ_k), with ŵ_k ∈ W_k, such that ŵ_{k_j} = w_{k_j} for each j ∈ ℕ.
Proof:
The sequence (ŵ_k), ŵ_k ∈ W_k, is constructed as follows: for k ∈ {1, 2, …, k_1} choose ŵ_k ∈ {w ∈ W_k : m(w, w_{k_1}) = m(w_{k_1}, W_k)}; such a ŵ_k exists since W_k is fuzzy compact. Similarly, for j ∈ {2, 3, …} and each k ∈ {k_j + 1, k_j + 2, …, k_{j+1}} choose ŵ_k ∈ {w ∈ W_k : m(w, w_{k_j}) = m(w_{k_j}, W_k)}. Clearly ŵ_{k_j} = w_{k_j} by our construction. Since (w_{k_j}), w_{k_j} ∈ W_{k_j}, is a fuzzy Cauchy sequence in S, let t ∈ (0, 1) be given; then there is N_1 such that m(w_{k_j}, w_{k_n}) ≤ t for k_j, k_n ≥ N_1. Also, since (W_k) is a fuzzy Cauchy sequence in AFH(S), there is N_2 such that h(W_j, W_n) ≤ t for all j, n ≥ N_2. Now put N = N_1 ∧ N_2; for i, n ≥ N we have m(ŵ_i, ŵ_n) ≤ m(ŵ_i, w_{k_j}) ⊚ m(w_{k_j}, w_{k_n}) ⊚ m(w_{k_n}, ŵ_n), where i ∈ {k_{j-1}+1, k_{j-1}+2, …, k_j} and n ∈ {k_{n-1}+1, k_{n-1}+2, …, k_n}. But h(W_m, W_{k_j}) ≤ t, so there exists ŵ_m ∈ W_m ∩ [(w_{k_j}) ⊚ t], so that m(ŵ_m, w_{k_j}) ≤ t. Similarly we can show that m(w_{k_n}, ŵ_k) ≤ t. Hence m(ŵ_i, ŵ_n) ≤ t ⊚ t ⊚ t, and we can find r ∈ (0, 1) such that t ⊚ t ⊚ t < r. Hence m(ŵ_m, ŵ_n) < r for all m, n ≥ N. Thus (ŵ_k), ŵ_k ∈ W_k, is a fuzzy Cauchy sequence. We will need the following lemmas in the next main result.
Lemma 3.18: Suppose that (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space and assume that (W_k) is a fuzzy Cauchy sequence in (AFH(S), h, ⊚) with W_k → W ∈ AFH(S), where W = {s ∈ S : there is a fuzzy Cauchy sequence (w_k), w_k ∈ W_k, such that w_k → s}. Then W ≠ ∅.
Proof: In this way we can select a finite sequence w_{N_j} ∈ W_{N_j} with j = 1, 2, …, k. For example, let w_{N_{k+1}} be the point in W_{N_{k+1}} that is closest to w_{N_k}. By induction we can find a sequence (w_{N_j}), w_{N_j} ∈ W_{N_j}, such that m(w_{N_j}, w_{N_{j-1}}) ≤ 1/2^j. Now we show that (w_{N_j}) is a fuzzy Cauchy sequence in S: let 0 < β < 1 be given and choose N_β accordingly. Hence by Theorem 3.17 there exists a fuzzy Cauchy sequence (d_i), d_i ∈ W_i, for which d_{N_i} = w_{N_i}. Then lim d_i exists (since S is fuzzy complete) and is in W. Thus W ≠ ∅.
Lemma 3.19: Suppose that (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space and assume that (W_n) is a fuzzy Cauchy sequence in (AFH(S), h, ⊚) with W_k → W ∈ AFH(S), where W = {s ∈ S : there is a fuzzy Cauchy sequence (w_k), w_k ∈ W_k, such that w_k → s}. Then W is fuzzy complete.
Proof: Suppose (w_i) is a sequence in W with w_i → w; we show that w ∈ W. For each i there exists a sequence (w_{i,k}), w_{i,k} ∈ W_k, with w_{i,k} → w_i. There exists an increasing sequence (N_i), and moreover there is a sequence (n_i), n_i ∈ ℕ, such that m(w_{N_i}, w_{N_i, n_i}) is sufficiently small. Put y_{n_i} = w_{N_i, n_i}; we see that y_{n_i} ∈ W_{n_i} and y_{n_i} → w. Now by Theorem 3.16, (y_{n_i}) can be extended to a convergent sequence (z_i), z_i ∈ W_i, so w ∈ W; thus W is fuzzy closed. Hence W is fuzzy complete, since S is fuzzy complete.
Lemma 3.20: Suppose that (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space and assume that (W_k) is a fuzzy Cauchy sequence in (AFH(S), h, ⊚) with W_k → W ∈ AFH(S), where W = {s ∈ S : there is a fuzzy Cauchy sequence (w_k), w_k ∈ W_k, such that w_k → s}. Then for every r ∈ (0, 1) there is N ∈ ℕ such that W ⊆ W_k ⊚ r for all k ≥ N.
Proof:
Let r ∈ (0, 1); then there is N ∈ ℕ such that h(W_k, W_n) ≤ r for all k, n ≥ N. Now for k ≥ n ≥ N, W_k ⊆ W_n ⊚ r. To prove that W ⊆ W_n ⊚ r, let w ∈ W; then there is a sequence (w_i), w_i ∈ W_i, such that w_i → w. Now for k ≥ N, m(w_k, w) < r, so w_k ∈ W_n ⊚ r; using the fuzzy compactness of W_n we can show that W_n ⊚ r is fuzzy closed. Then w_k ∈ W_n ⊚ r for all k ≥ N, so w must be in W_n ⊚ r. This shows that W ⊆ W_n ⊚ r for all n ≥ N.
Lemma 3.21:
Suppose that (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space and assume that (W k ) is a fuzzy Cauchy sequence in (AFH(S), h, ⊚) with W k → W ∈ AFH(S) where W = {s ∈ S: there is a fuzzy Cauchy sequence (w k )∈ W k such that w k →s }. Then W is fuzzy compact.
Proof:
We will prove that W is totally fuzzy bounded. Assume that W is not totally fuzzy bounded; then for some r ∈ (0, 1) there does not exist a finite fuzzy r-net. Then there is a sequence (w_i) in W with the property m(w_i, w_j) ≥ r for i ≠ j. This will give a contradiction. By Lemma 3.20 there is n ≥ N so that W ⊂ W_n ⊚ r. For these w_i there exist points y_i ∈ W_n with the property m(w_i, y_i) ≤ β, where 0 < β < 1 and β < r. But W_n is fuzzy compact, so some subsequence (y_{n_i}) of (y_i) fuzzy converges. Thus there exist points in (y_{n_i}) which are as close together as we want. In particular there are two points y_{n_i} and y_{n_j} with the property m(y_{n_i}, y_{n_j}) ≤ α, where 0 < α < 1 and α < r. Now m(w_{n_i}, w_{n_j}) ≤ m(w_{n_i}, y_{n_i}) ⊚ m(y_{n_i}, y_{n_j}) ⊚ m(y_{n_j}, w_{n_j}) ≤ β ⊚ α ⊚ β < r. Thus W is totally fuzzy bounded, and this implies that W is fuzzy compact by Theorem 2.28.
We have now reached the position to give the main result of this section. Theorem 3.22: Suppose that (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space. Then (AFH(S), h, ⊚) is a fuzzy complete algebra fuzzy metric space.
Proof:
Assume that (W_k) is a fuzzy Cauchy sequence in (AFH(S), h, ⊚) and let W = {s ∈ S : there is a fuzzy Cauchy sequence (w_k), w_k ∈ W_k, such that w_k → s}. Now by Lemma 3.18, W ≠ ∅; by Lemma 3.19, W is fuzzy complete; for every r ∈ (0, 1) there is N ∈ ℕ such that W ⊆ W_k ⊚ r for all k ≥ N by Lemma 3.20; and finally W is fuzzy compact by Lemma 3.21. Now we show that W_k → W; it is enough to show that for 0 < r < 1 there exists N such that W_k ⊆ W ⊚ r for all k ≥ N. But (W_k) is fuzzy Cauchy, so for a given 0 < r < 1 there exists N ∈ ℕ with h(W_k, W_n) < r for all k, n ≥ N. Thus for k, n ≥ N, W_k ⊆ W_n ⊚ r. Suppose that n ≥ N; to prove that W_n ⊆ W ⊚ r, assume that y ∈ W_n. Then a sequence (N_i) can be found with n < N_1 < N_2 < … < N_k < … such that for k, n ≥ N_j, W_k ⊆ W_n ⊚ 1/2^{j-1}; note that W_n ⊂ W_{N_1} ⊚ 1/2. Since y ∈ W_n there is w_{N_1} ∈ W_{N_1} with m(y, w_{N_1}) ≤ 1/2. Since w_{N_1} ∈ W_{N_1} there is w_{N_2} ∈ W_{N_2} with m(w_{N_1}, w_{N_2}) ≤ 1/2². Similarly, by induction there exists (w_{N_j}) with w_{N_j} ∈ W_{N_j} and m(w_{N_j}, w_{N_{j-1}}) ≤ 1/2^{j+1}. Using the fuzzy triangle inequality a number of times we can bound m(y, w_{N_j}) for all j, and we can also show that (w_{N_j}) is a fuzzy Cauchy sequence. Now each W_{N_j} ⊂ W_n ⊚ 1/2, and (w_{N_j}) converges to a point a; since W_n ⊚ 1/2 is fuzzy closed, a ∈ W_n ⊚ 1/2. Moreover m(y, w_{N_j}) < r for some r with 0 < r < 1, so m(y, a) ≤ 1 − r. Thus W_n ⊂ W ⊚ r for all n ≥ N. Hence W_n → W, and consequently (AFH(S), h, ⊚) is a fuzzy complete algebra fuzzy metric space.
CONCLUSIONS
The definition of an algebra fuzzy metric space is used in this study to introduce the notion of the algebra fuzzy distance from a point in the universal set S to a fuzzy compact set in S; the algebra fuzzy metric between two fuzzy compact sets is also introduced. As in the ordinary case, the algebra fuzzy metric from a fuzzy compact set A to a fuzzy compact set B is not equal to the algebra fuzzy metric from B to A; this makes it natural to introduce the notion of the Hausdorff algebra fuzzy metric between two fuzzy compact sets. The basic results on the algebra fuzzy metric are investigated. Finally, the main result in this paper is proved, namely that if (S, m, ⊚) is a fuzzy complete algebra fuzzy metric space then (AFH(S), h, ⊚) is a fuzzy complete algebra fuzzy metric space. For future work we suggest studying the space (AFH(S), h, ⊚) further. | 5,987.8 | 2021-07-25T00:00:00.000 | [ "Computer Science", "Mathematics" ] |
Theory of THz generation by Optical Rectification using Tilted-Pulse-Fronts
A model for THz generation by optical rectification using tilted-pulse-fronts is developed. It simultaneously accounts for (i) the spatio-temporal distortions of the optical pump pulse, (ii) the nonlinear coupled interaction of THz and optical radiation in two spatial dimensions (2-D), (iii) self-phase modulation and (iv) stimulated Raman scattering. The model is validated by quantitative agreement with experiments and analytic calculations. We show that the optical pump beam is significantly broadened in the transverse-momentum (kx) domain as a consequence of the spectral broadening caused by THz generation. In the presence of this large frequency and transverse-momentum (or angular) spread, group velocity dispersion causes a spatio-temporal break-up of the optical pump pulse which inhibits further THz generation. The implications of these effects on energy scaling and optimization of optical-to-THz conversion efficiency are discussed. This suggests the use of optical pump pulses with elliptical beam profiles for large optical pump energies. It is seen that optimization of the setup is highly dependent on optical pump conditions. Trade-offs of optimizing the optical-to-THz conversion efficiency on the spatial and spectral properties of THz radiation is discussed to guide the development of such sources.
Of various high field THz generation modalities, optical rectification (OR) of femtosecond laser pulses with tilted-pulse-fronts in lithium niobate has emerged as the most efficient THz generation technique. Among various efforts, it was developed in [9][10][11][12][13] as a means to achieve phase-matching in materials with large disparities between THz and optical refractive indices.
In this approach, an optical pump pulse is angularly dispersed to produce an intensity front which is tilted with respect to its propagation direction. THz radiation propagating perpendicular to this tilted intensity front or tilted-pulse-front (TPF) is then generated. Since the optical and THz radiation travel different distances in the same time, the difference between optical and THz refractive indices is compensated and phasematching is achieved. OR using TPF's has resulted in optical-to-THz conversion efficiencies (henceforth referred to as conversion efficiency) in excess of 1% [14][15] and the highest THz pulse energy of 0.4 mJ [16] to date. Therefore, the approach is promising for the development of laboratory scale THz sources with pulse energies greater than mJ level. Comprehensive theoretical models to aid understanding and quantitatively predict the performance of such systems are therefore of interest. The requisites of a physically accurate model and the current state of theory are described below.
As a consequence of the angular dispersion of the optical pump pulse in OR using TPF's, various frequency components of the optical pump pulse spectrum are spatially separated. This is tantamount to having different spectral bandwidths, pulse durations and average frequency at each spatial location. These effects are termed spatio-temporal distortions [17] and affect the properties of the generated THz radiation. Secondly, since the generated THz propagates perpendicular to the TPF, the optical pump and THz radiation propagate non-collinearly. Most importantly, as THz radiation is generated, it is accompanied by a dramatic cascaded frequency down-shift and spectral broadening of the optical pump pulse spectrum (cascading effects). On one hand cascading is responsible for conversion efficiencies which exceed the Manley-Rowe limit. On the other hand, in the presence of group velocity dispersion due to angular dispersion (GVD-AD) and material dispersion (GVD-MD), this spectral broadening inhibits further THz generation [18][19][20]. A comprehensive theoretical model should therefore be able to account for all of the above effects. This would require a simultaneous solution of optical and THz electric fields (henceforth referred to as field) in at least two spatial dimensions (2-D). In addition, spatio-temporal distortions imparted by the TPF setup would also have to be considered.
Previously presented models broadly comprise (i) 1-D and 2-D spatial models without the inclusion of cascading effects (i.e., nonlinear coupling between THz and optical radiation is not considered) and (ii) 1-D spatial models which account for cascading effects. Of the works in category (i), a one-dimensional (1-D) spatial model including the effects of material dispersion and GVD-AD was presented in [21][22]. In [23], a 1-D model considering material dispersion, GVD-AD and self-phase modulation (SPM) was presented. In [24], a 2-D model which took into account material dispersion, GVD-AD and crystal geometry was developed. In category (ii), an effective 1-D model with cascading effects was first presented in [25] and improved in [18].
In this paper, we present the formulation of a 2-D model which simultaneously accounts for spatio-temporal distortions of the optical pump pulse, cascading effects, SPM and stimulated Raman scattering (SRS), material dispersion, THz absorption as well as geometry of the nonlinear crystal. The developed model is applicable to the simulation of a variety of OR systems with different TPF setups, crystal geometries and optical pump pulse formats.
In Section 2, we outline our general approach. In Section 3, we describe the physics of THz generation using OR with TPF's. In particular, a discussion from transverse momentum (k x ) and time domain viewpoints is introduced. It is seen that the generation of THz results in the broadening of the optical pump pulse in both frequency and transversemomentum domains. In the presence of this increased frequency and transversemomentum spread, GVD-AD and GVD-MD cause a spatio-temporal break-up of the optical pump pulse which inhibits further THz generation. These descriptions serve to motivate the theoretical formulation, which is presented in Section 4. In Section 5, we validate the model by comparisons to experiments and analytic calculations. The impact of imaging errors on conversion efficiency is shown quantitatively. It is seen that small perturbations to the optimal imaging configuration can result in sizeable degradation of conversion efficiency. Insights into the experimentally measured broadening of the optical spectrum are provided. In Section 6, we discuss the meaning of effective propagation length in 2-D. This is then used to discuss scaling to large optical pump energies and optimization of conversion efficiency. It is seen that the optimization of conversion-efficiency is highly dependent on the optical pump parameters. Finally, we highlight the trade-offs incurred while optimizing the conversion efficiency on spatial and spectral properties of THz radiation. We conclude in Section 7. This paper thus provides an overview for constructing sources customized optimally for various applications. to generate a tilted-pulse-front. The model accounts for the angular dispersion of various spectral components which can generate THz radiation inside the nonlinear crystal by satisfying the appropriate phase-matching condition for optical rectification. From a time-domain viewpoint, the angularly dispersed pulse forms a tiltedpulse-front shown by the red ellipses. THz radiation is generated perpendicular to this tilted-pulse-front (red arrow). (b) 2-D computational space for solving coupled nonlinear wave equations for optical rectification. Nonlinear crystal geometry is accounted for by delineating an appropriate distribution of χeff (2) (x,z). Edges of the distribution are smoothed out to avoid discontinuities. The refractive index is homogeneously distributed throughout the computational space. The optical beam is centered at a distance h from the apex of the crystal which sets the limits to the computational region. The optical field at the beginning of the lattice is calculated analytically using dispersive ray pulse matrices. The THz field profile can be calculated at a distance zd from the crystal after Fresnel reflection is taken into account.
The overall schematic of our approach is depicted in Fig. 1. An optical pump pulse with input electric field described by the complex variable E_op^in(x_0, z_0, ω) at angular frequency ω, propagating in the z_0 direction, is incident on a TPF setup. In Fig. 1, a commonly employed setup incorporating a diffraction grating and a single lens is depicted. However, the model is applicable to a variety of TPF setups (e.g. telescope and diffraction grating, contact grating, etc.). In Fig. 1(a), electric fields at two angular frequencies ω and ω+Ω are depicted for convenience, although there are infinitely many frequency components.
An optical pulse with a TPF is typically angularly dispersed [17]. Therefore, various frequency components of the emergent optical field described by E_op^out(x_0, z_0, ω) propagate at different angles. This is depicted in Fig. 1(a) as spectral components at ω and ω+Ω emerging with wave vectors (henceforth referred to as momenta) k(ω) and k(ω+Ω), respectively. In the time domain, such an angularly dispersed pulse has an intensity profile tilted with respect to its propagation direction, as shown by the red ellipses in Fig. 1(a). This gives rise to the terminology of 'tilted-pulse-front'. These red ellipses make an angle π/2−γ with respect to the propagation direction of the optical pulse, where γ is termed the pulse-front-tilt angle. Since OR is intra-pulse difference frequency generation (DFG), in the phase-matched condition the momentum of the generated THz (at angular frequency Ω) is given by k(Ω) = k(ω+Ω) − k(ω), as depicted by the red arrow in Fig. 1(a). The generated THz then emerges at an angle γ with respect to the direction of propagation of the optical pump pulse and exits approximately normal to the output facet.
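As a rough numerical illustration of this velocity-matching geometry (not part of the original formulation), the required pulse-front-tilt angle can be estimated from the optical group index and the THz phase index; the lithium-niobate values below are nominal assumptions.

```python
import numpy as np

# Velocity matching in tilted-pulse-front optical rectification:
# the optical group velocity projected onto the THz propagation direction
# must equal the THz phase velocity, v_gr * cos(gamma) = v_THz,
# i.e. cos(gamma) = n_gr(omega) / n(Omega).
n_gr_opt = 2.25   # assumed optical group index of LiNbO3 near 1030 nm
n_thz = 4.95      # assumed THz phase index of LiNbO3 around 0.5 THz

gamma = np.degrees(np.arccos(n_gr_opt / n_thz))
print(f"required pulse-front-tilt angle: {gamma:.1f} deg")  # ~63 deg, as quoted in the text
```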
Since the various frequency components of the optical pump pulse are angularly separated, there is a spatial variation in the average frequency (spatial chirp) as well as in the amplitude across the optical beam profile. As a consequence, the pulse durations and bandwidths at various points in space are different, which affects the spatial and spectral properties of the generated THz pulse. In our model, we consider these effects by applying an analytic formulation of dispersive ray pulse matrices from [26] in Section 4.1. This approach efficiently models the various spatio-temporal distortions associated with the optical pulse for an arbitrary TPF setup and supplies E_op^out(x_0, z_0 = 0, ω) at the input face of the nonlinear crystal, as shown in Fig. 1(b).
The incident optical field excites a polarization in the nonlinear material to drive the generation of THz radiation. The generated THz in turn influences the propagation of the optical field and vice versa. Thus, the evolution of the optical and THz fields is described by a solution of a system of 2-D coupled nonlinear wave equations in the (z-x) co-ordinate system depicted in Fig. 1(b). In our approach, the (z-x) co-ordinate system is rotated with respect to (z_0-x_0) by an angle α, which is the apex angle of the nonlinear crystal. The angle α is approximately equal to the pulse-front-tilt angle γ. The rotated co-ordinate system then has two key advantages. Firstly, in this set of axes, the THz radiation has small transverse-momentum components, i.e. k_x ≈ 0, which relaxes the constraints on the spatial resolution Δx and consequently alleviates computational cost. Secondly, it makes it convenient to include the transmission of THz radiation at the crystal boundary.
In Fig. 1(b), we delineate how we consider the geometry of the crystal. An extended Cartesian space in the (z-x) co-ordinate system, uniformly filled with material of refractive index n(ω), is considered. Only regions of the computational space physically occupied by the crystal have a non-zero value of the second-order susceptibility χ_eff^(2)(x, z), as shown in Fig. 1(b). If the length of the input crystal face is L and the optical field is incident at a distance h from the apex, the computational region extends over the corresponding range set by L, h and the apex angle α, as shown in Fig. 1(b). The initial optical field profile along this boundary can be calculated analytically by back-propagating the optical field calculated at the input crystal face at z_0 = 0. In this model, we assume that the THz field is zero at the beginning of the computational space and consider only a single passage of the optical and THz beams through the crystal. In the limit of relatively thick crystals, where the reflected THz energy is absorbed (the absorption length at 300 K is ~2 mm), or with the use of THz anti-reflection coatings, this approximation is well justified.
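A minimal sketch of how a spatially varying χ_eff^(2)(x, z) distribution of this kind could be set up on a rectangular grid, with the crystal edge defined by the apex angle and smoothed by a tanh profile; the grid extents, apex angle and smoothing width are illustrative choices, not values from the paper.

```python
import numpy as np

def chi2_distribution(x, z, chi2_bulk=360e-12, alpha_deg=63.0, h=1.5e-3, smooth=20e-6):
    """Illustrative chi_eff^(2)(x, z) map: chi2_bulk inside the crystal,
    zero outside, with the edge through the apex (placed at x = -h, z = 0)
    tilted by the apex angle alpha and smoothed over a width `smooth`
    to avoid numerical discontinuities."""
    X, Z = np.meshgrid(x, z, indexing="ij")
    alpha = np.radians(alpha_deg)
    # signed distance from the tilted crystal facet; negative = inside the crystal
    dist = (X + h) * np.sin(alpha) - Z * np.cos(alpha)
    return 0.5 * chi2_bulk * (1.0 + np.tanh(-dist / smooth))

x = np.linspace(-2e-3, 6e-3, 400)   # transverse grid [m]
z = np.linspace(0.0, 5e-3, 250)     # propagation grid [m]
chi2 = chi2_distribution(x, z)
```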
Fig. 2. (a) Spatial distribution of the optical and THz fluences: the THz field propagates in the z direction with the optical field at an angle γ ≈ 63° with respect to it. THz is only generated over a small portion of the optical field, owing to spectral broadening of the optical pulse by cascading, which disrupts phase-matching through group velocity dispersion (by angular and material dispersion) for subsequent portions of the beam. (b) The optical spectrum is broadened between (i)-(iii) due to cascading effects. (c) THz spectra at locations (i)-(iii) show significant spatial chirp due to spatial variations of the optical electric field. (d) Since each frequency component has a certain value of transverse momentum in an angularly dispersed beam, spectral broadening also necessarily results in broadening in transverse momentum k_x. As the optical spectrum broadens, there is a broadening in transverse momentum between z = -0.3 mm and 1.4 mm.
In Fig. 2, we provide a sample solution using the developed model. This will bring out the essential physics and put the formulation in Section 4 into context. Figure 2(a) depicts the optical and THz fluences as a function of space. The region within the green lines has non-zero χ_eff^(2). The optical pump pulse propagates obliquely in the nonlinear crystal, as indicated by the cyan/light-blue colormap in Fig. 2(a). The THz pulse however propagates in the z direction, at an angle γ ≈ 63° to the optical pump pulse, and emerges perpendicular to the output face, as shown by the THz fluence in the red colormap.
As the optical pump pulse propagates, it generates THz photons at Ω and simultaneously suffers a frequency down-shift by the same amount. With successive generation of THz photons it repeatedly experiences this 'cascaded' frequency down-shift, which leads to large spectral broadening of the optical pump pulse spectrum, as seen in Fig. 2(b). Even if the total depletion of the optical pump energy is only 1%, the drastic spectral reshaping renders undepleted-pump approximations inaccurate.
As the optical pump pulse spectrum is modified between locations (i)-(iii), the subsequent THz spectrum is also modified as shown in Fig. 2(c) and vice-versa. As a result, there is significant spatial variation in both optical and THz spectra. The generated THz spectra are broadband, extending from 0 to 1 THz, consistent with earlier experiments and theory.
For an angularly dispersed pulse, each spectral component of the optical pulse at ω has a well-defined transverse momentum k_x (smaller ω's have a more negative k_x value, as shown in Fig. 1(a)). Therefore, spectral broadening of the optical pulse also directly leads to a re-distribution of optical pulse energy among various transverse-momentum values k_x, as seen in Fig. 2(d). The spread in transverse momentum in Fig. 2(d) is on the order of 10^4 m^-1, which is still much smaller than the optical wave number (~10^6 m^-1), meaning the beam is still relatively paraxial.
In the presence of this large spectral broadening one would expect a temporal break-up of the pulse due to group velocity dispersion. In addition, due to broadening in transverse momentum, there is also a spatial break-up of the pulse. In combination, a rapid spatio-temporal break-up of the optical pulse occurs, as shown in Fig. 3. Thus, an initially clean TPF as in Fig. 2(a) suffers a spatio-temporal break-up as it propagates over very short distances, on the order of ~2 mm. Due to this spatio-temporal break-up, different parts of the optical pulse arrive at different times and the generated THz no longer builds up coherently.
Propagation of Optical Pump Fields through the Optical Setup
As described in Section 2 and depicted in Fig. 1, a TPF setup imparts a number of spatio-temporal distortions to the optical pump pulse, which influence the properties of the generated THz radiation.
In this section, we show how to account for these effects for an arbitrary TPF setup by employing dispersive ray pulse matrices [26]. Although developed for passive optical elements, our application of this approach to OR using TPF's results in a powerful model, closely connected to experiments. An explicit expression for the electric field of the optical pump pulse is obtained, which allows calculations to be performed rapidly. Note that alternate ray-pulse matrix approaches such as [27] are also applicable. Since the beam size of the optical pump used in OR is much larger than the optical wavelength, paraxial approximations of ray-pulse matrix schemes are valid for the optical pump. The ray pulse matrix M_i(ω) of the i-th optical component is described by a 3×3 matrix for a single transverse spatial dimension, as shown in Eq. (2).
Equation (2) shows that the upper 2×2 block is nothing but the standard ABCD matrix for Gaussian beams. However, in order to account for dispersion, there are two additional terms E_i and F_i, which correspond to the partial derivatives of the output beam position and output beam propagation direction with respect to frequency; these terms describe the shift in output beam position and output beam propagation direction in response to a shift in frequency. Here, we calculate F_i up to fourth order in frequency, which accounts for GVD-AD and higher-order terms. Note that the last row of M_i(ω) is [0 0 1], as the source frequency does not change. With the knowledge of the input and output beam positions and propagation directions, a Huygens integral can be used [26] to calculate the electric field of the emergent optical pump pulse after it has passed through the TPF setup, as shown in Eq. (3). This field E_op^out(x_0, z_0, ω) is used as an initial condition to solve the nonlinear coupled system of wave equations. The expression accounts for spatial variations, material dispersion, angular dispersion including GVD-AD, spatial frequency variations and spatial variations in pulse width. It allows a one-to-one correspondence between the TPF setup configuration and the properties of the THz radiation to be established.
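To make the composition of these 3×3 matrices concrete, the sketch below assembles a system matrix from standard free-space and thin-lens blocks; the grating entries (including its E and F dispersion terms) are left as placeholders because their frequency dependence follows the formalism of [26] and is not reproduced here.

```python
import numpy as np

def free_space(L):
    # propagation over a distance L: standard ABCD block, no dispersion terms
    return np.array([[1.0, L, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0, 0.0],
                     [-1.0 / f, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])

def abcdef(A, B, C, D, E, F):
    # generic dispersive element; E and F couple a frequency shift to the
    # output position and propagation direction, and the last row [0 0 1]
    # expresses that the frequency itself does not change
    return np.array([[A, B, E],
                     [C, D, F],
                     [0.0, 0.0, 1.0]])

def system_matrix(M_grating, s1, f, s2):
    # grating -> distance s1 -> lens -> distance s2 (right-to-left product)
    return free_space(s2) @ thin_lens(f) @ free_space(s1) @ M_grating
```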
Nonlinear Polarization due to Optical Rectification
In Section 4.1, we obtained the electric field of the optical pump pulse inside the crystal in the co-ordinate system (z_0-x_0) shown in Fig. 1. In this section, we calculate the nonlinear polarization terms which drive the optical and THz fields. The nonlinear polarization is calculated in the (z-x) co-ordinate system introduced in Fig. 1(b); the transformation between the (z_0-x_0) and (z-x) co-ordinate systems is easily obtained by a rotation through the apex angle α. In Eq. (5), χ_eff^(2)(x, z) is the effective second-order nonlinear susceptibility for OR at each spatial location and ε_0 is the free-space permittivity. The spatial dependence of the effective nonlinear susceptibility is used to account for the geometry of the nonlinear crystal, as was shown in Fig. 1(b). Substituting the expression for the electric field from Eq. 3(a) into Eq. (5) yields Eq. (6). The first term in Eq. (6) is the analogue of the term on the right-hand side of Eq. (5). It signifies that an optical photon at angular frequency ω is created by an aggregate of DFG processes between optical photons at angular frequency ω+Ω and THz photons at angular frequency Ω. It represents the red-shift of the optical spectrum depicted in Fig. 2(b). The second term corresponds to an aggregate of SFG processes between optical photons at angular frequency ω−Ω and THz photons at angular frequency Ω; this term partially contributes to the blue-shift of the optical spectrum seen in Fig. 2(b). The third term in Eq. (6) is the SPM term. Here, E_op(t, x, z) is the time-domain electric field of the optical pump pulse and F_t represents the Fourier transform between time and frequency domains. The intensity-dependent refractive index coefficient is given by n_2(x, z). Since the SPM term in Eq. (6) contains details of the spatial distribution of the optical field, it also accounts for self-focusing effects. The final term models stimulated Raman scattering; it is related to the SPM term but includes the effects of a Raman gain lineshape given by h_R(ω').
Solving the 2-D non-linear wave equation using Fourier Decomposition
In this section, we present our approach for solving the coupled system of nonlinear wave equations driven by the nonlinear polarization terms defined in Eqs. (5) and (6). In Eq. (7), k(Ω) = Ω n(Ω)/c is the wave number at the THz angular frequency Ω and n(Ω) is the corresponding refractive index. Similar to Eq. (7), one can also write the corresponding wave equation for the optical fields at the various angular frequencies ω in Eq. 8(a).
Thus, using Eqs. (1)-(12), we can model an arbitrary TPF setup for THz generation by OR. Equations (9) and (10) are effectively a 1-D system of coupled equations and can be solved in parallel for the various k_x, ω and Ω. Numerical integration was performed using a 4th-order Runge-Kutta method. The evaluation of the Fourier transforms was accelerated by GPU parallelization.
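A generic sketch of the z-marching scheme implied here: a fixed-step 4th-order Runge-Kutta loop over the stacked spectral amplitudes, with the right-hand-side function (which would evaluate the DFG/SFG, SPM and SRS polarization terms via FFTs for each k_x and frequency component) left abstract.

```python
import numpy as np

def rk4_march(A0, z0, z1, n_steps, rhs):
    """Propagate the stacked complex spectral amplitudes A (optical and THz,
    flattened over k_x and frequency) from z0 to z1 with a classical
    4th-order Runge-Kutta scheme.  rhs(z, A) must return dA/dz."""
    A = np.asarray(A0, dtype=complex).copy()
    dz = (z1 - z0) / n_steps
    z = z0
    for _ in range(n_steps):
        k1 = rhs(z, A)
        k2 = rhs(z + 0.5 * dz, A + 0.5 * dz * k1)
        k3 = rhs(z + 0.5 * dz, A + 0.5 * dz * k2)
        k4 = rhs(z + dz, A + dz * k3)
        A = A + (dz / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        z += dz
    return A
```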
Table 1. ABCDEF matrices of the optical elements in the TPF setup, including the diffraction grating.
In this section, the developed model is validated against analytic theory and experiments.
Simulations assume the setup shown in Fig. 1(a). The ABCDEF matrices for the various optical components are presented in Table 1. The F term for the diffraction grating contains dispersive terms up to 4th order in frequency and accounts for GVD-AD and higher-order dispersion terms. The full-width at half maximum (FWHM) pulse duration was assumed to be 0.5 ps, with a fluence of 20 mJ/cm^2 and a peak intensity of 40 GW/cm^2. The effective second-order susceptibility was assumed to be χ_eff^(2)(x, z) = 360 pm/V [29] and the intensity-dependent refractive index coefficient was n_2 = 10^-15 cm^2/W [30]. The optical beam has an input e^-2 radius of w_in = 2.5 mm and is incident at h = 1.5 mm from the apex of the crystal. The focal length of the lens was 23 cm. The refractive indices and the Raman gain lineshape are taken from [31] and [32], respectively. We calculate the conversion efficiency as a function of the grating incidence angle (θ_i), the grating-to-lens distance (s_1) and the lens-to-crystal distance (s_2). The values of θ_i, s_1 and s_2 obtained for maximum conversion efficiency are compared to analytic calculations [21].
Fig. 4. (a) Simulation of conversion efficiency as a function of imaging conditions. The surface plot shows the conversion efficiency versus displacements Δs_1 and Δs_2 from the optimal imaging distances; s_1, s_2 are the grating-to-lens and lens-to-crystal distances, respectively. The inset shows conversion efficiency versus the incidence angle on the diffraction grating. As s_1, s_2 are varied, the pulse-front-tilt angle varies, which leads to a change in conversion efficiency. Careful optimization of the experimental setup is required to identify the optimal conversion-efficiency point. (b) Theoretical calculations of optimal imaging conditions for various pulse-front-tilt angles based on the analytic theory from [21]. For a pulse-front-tilt angle of 63°, the imaging conditions are in close agreement with the simulation results, validating the accuracy of the presented model. (c) Experimental scans of conversion efficiency vs displacements Δs_1 and Δs_2 agree well with the simulations in Fig. 4(a).
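For bookkeeping, the nominal parameters quoted above for the validation runs can be collected in a single configuration object (field names are ours, purely for illustration):

```python
# nominal parameter set for the validation simulations described above
sim_params = {
    "pulse_fwhm_ps": 0.5,           # FWHM pulse duration
    "fluence_mJ_cm2": 20.0,         # optical pump fluence
    "peak_intensity_GW_cm2": 40.0,  # peak intensity
    "chi2_eff_pm_V": 360.0,         # effective second-order susceptibility [29]
    "n2_cm2_W": 1e-15,              # intensity-dependent refractive index [30]
    "w_in_mm": 2.5,                 # input e^-2 beam radius
    "h_mm": 1.5,                    # beam position from the crystal apex
    "focal_length_cm": 23.0,        # lens focal length
}
```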
The simulation results are plotted in Fig. 4(a). A maximum conversion efficiency is obtained at s_1 = 60.89 cm, s_2 = 36.84 cm and θ_i = 46.5°. It can be seen that these values of s_1, s_2 satisfy the imaging condition for the setup, i.e. 1/s_1 + 1/s_2 = 1/f. The optimal imaging condition is determined by the magnification required to produce the optimal pulse-front-tilt angle inside the crystal. Analytic calculations were developed in [21] to supply optimal imaging conditions and are presented in Fig. 4(b). The blue and red curves in Fig. 4(b) correspond to the values of s_1 and s_2 for various pulse-front-tilt angle values γ and are plotted along the y-axis on the left. The grating incidence angle θ_i as a function of γ is given by the black curve plotted along the y-axis on the right. We know that for pumping at 1030 nm the optimum pulse-front-tilt angle is approximately 63°. When the imaging conditions deviate from optimal, the transverse momentum k_x(ω) at which each spectral component at ω emerges from the setup changes, which affects phase-matching via Eq. (5). Thus, the presented formalism can map the performance of the system directly to experimental conditions. In Fig. 4(a), we see how small deviations in imaging conditions can lead to sizeable degradation of conversion efficiency. For example, Δs_2 ~ 1 mm leads to a drop in conversion efficiency by about 40%. In Fig. 4(c), we show experimental scans of conversion efficiency for similar parameters. The white spaces in the figure indicate regions where data was not collected. For similar displacements Δs_1 and Δs_2 from the optimum values, the conversion-efficiency reduction agrees well with the calculations in Fig. 4(a). The slight difference in the tilt of the ellipse in Fig. 4(c) can be attributed to a grating incidence angle different from that used in Fig. 4(a).
Further verification of the model is provided by comparisons to experiments [14]. For the optical pump conditions described for Fig. 4(a), a conversion efficiency of 0.8% is obtained (w_in = 3.5 mm, h = 2.2 mm), which is in reasonable agreement with the experimentally reported value of 1.15%. The difference may be partially owed to uncertainties in the THz absorption coefficients below 0.9 THz [31]. In the simulation yielding 0.8%, the absorption coefficient below 0.9 THz was ~10 cm^-1. When this was adjusted to 5 cm^-1, the resulting conversion efficiency was 0.9%, which is closer to the experimental result.
Fig. 5. (a) The experimental and theoretically calculated THz spectra. The theoretical calculation is spatially averaged over x and is centred at 0.45 THz, in close agreement with experiments [14]. (b) The experimental output optical spectrum (red) is presented along with calculations. The theoretical calculation averaged over a single transverse spatial dimension x (black dotted) is broadened significantly more than the experiments. However, if the spatial averaging is performed over both transverse spatial dimensions x and y by simulating numerous 2-D slices, the output spectrum matches experiments more closely. The disparity may also be partially explained by the possibility of incomplete collection of extreme optical frequency components with large divergence.
In Fig. 5(a), the experimentally obtained THz spectrum is compared to theoretical calculations. The calculation presented is the spatially averaged spectrum S(Ω) = ∫ |E_THz(x, Ω)|^2 dx at the output facet of the crystal. The theoretical and experimental spectra are in agreement and peak at ~0.45 THz, in line with expectations from prior models [18], [21]. In Fig. 5(b), the experimentally reported optical spectrum is compared to theoretical calculations.
When the spatial averaging is instead performed over both transverse dimensions x and y, by simulating numerous 2-D slices, the result is shown by the solid black curve and is in better agreement with experiments. The reason for this is that at lower optical intensities the extent of spectral broadening is smaller, as the nonlinear polarization terms in Eqs. (5)-(6) are smaller, and therefore the spatial average shows less red-shift and spectral broadening. Another reason for the disparity between experiments and theory could be uncertainties in the measurement of the optical spectrum.
Extremities of the frequency spectrum may not have been collected due to their larger divergence.
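As a small illustration of the spatial averaging used for the spectra of Fig. 5 (array and variable names are ours; E_thz stands for the complex THz field resolved in x and Ω at the output facet):

```python
import numpy as np

def spatially_averaged_spectrum(E_thz, dx):
    """Return the x-averaged power spectrum, i.e. the integral over x of
    |E_THz(x, Omega)|^2, for a field sampled on a uniform x grid
    (axis 0 = x, axis 1 = Omega)."""
    return np.trapz(np.abs(E_thz) ** 2, dx=dx, axis=0)
```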
Discussion of effective length in two dimensions
It is useful to understand what the effective propagation length L_eff is in a 2-D geometry. In general, absorption and dispersion determine the optimal value of L_eff. A longer L_eff leads to more absorption and dispersion; a shorter effective length means less of both but also translates to less THz generation. Therefore, there must exist an optimum value where the amount of THz generation is sufficiently large while absorption and dispersion remain small. In a 2-D non-collinear geometry, two parameters influence the extent of absorption and dispersion: the beam radius w_in (or w_out) and the beam position h of the optical pump beam (with respect to the apex of the crystal; see Fig. 1(b) for definitions). Therefore, the effective length parameter maps to both h and w_in: increasing either means that parts of the optical pump beam propagate a longer distance (since different sections of the beam propagate different distances), which leads to a greater amount of cascading and dispersive effects. Therefore, it is reasonable to expect an optimal value of w_in and h for a given set of optical pump conditions. Increasing the intensity or initial bandwidth of the optical pulse will lead to more rapid spectral broadening (with respect to length), which would require a readjustment of h and w_in.
Implications on energy scaling
These effects are illustrated in Figs. 6(a)-(c). The fluence, bandwidth, pulse width and material parameters are the same as those used for Fig. 4. In Fig. 6(a), a beam with w_in = 2.5 mm is incident at h = 1.5 mm from the crystal apex. It can be seen how the THz and optical beams have good overlap, which reduces absorptive effects and results in a relatively high conversion efficiency of 0.7%. In Fig. 6(b), the same beam is displaced further down the crystal. One sees that the increased THz absorption delineated in Fig. 6(b) causes the conversion efficiency to drop to 0.3%. In Fig. 6(c), a larger beam size with w_in = 10 mm is used at h = 5 mm. We see that only a small portion of the optical pump beam cross-section produces THz radiation, resulting in a conversion efficiency of 0.5%. This is because, after initial THz generation, subsequent parts of the beam are spectrally broadened due to cascading effects to an extent that prevents further THz generation in the presence of GVD-AD and GVD-MD. This has an important implication for the scaling of these systems to large pump energies. In Fig. 6(d), we plot the maximum conversion efficiencies for various values of w_in while keeping the peak optical pump intensity constant. The top x-axis depicts the corresponding optimal values of h. Note that the size of the beam at the input crystal face is w_out ≈ 0.6 w_in. In Fig. 6(d), for w_in > 3.5 mm, there is a drastic drop in the maximum achievable conversion efficiency. There is an initial increase in the maximum achievable conversion efficiency for w_in < 3.5 mm, because of reduced absorption due to the increase in beam size (see Section 6.1 for an explanation). In Fig. 6(d), it is seen that the optimal values of h increase with larger w_in and that they remain relatively close to the apex of the crystal. The degradation of conversion efficiency can be circumvented by using an elliptical pump beam with its major axis perpendicular to the plane of tilting (i.e. out of the plane of the paper).
Effects of pump intensity
In Fig. 7(a), we experimentally determine the conversion efficiency as a function of fluence for two different cases. In these experiments, a pulse with a duration of 0.5 ps was stretched to 1.39 ps. In the black curve, the setup is optimized to yield maximum conversion efficiency at the highest peak intensity by adjusting h; the fluence is then progressively decreased. In the red curve, the conversion efficiency is optimized for the lowest fluence, which is then progressively increased. The curves show a hysteresis with a cross-over at a fluence of 22 mJ/cm^2. In Fig. 7(b), we simulate these experiments using the developed model. The simulated results show qualitative and quantitative agreement with the experiments. For lower fluences, a larger value of h is required to optimize the conversion efficiency. This is because the smaller peak intensity in this case leads to a slower rate of spectral broadening of the optical pump, thereby causing dispersive effects to be 'delayed' in their appearance. This leads to a longer effective length, or larger h, for optimum efficiency. Note that the optimal values of h are larger compared to those depicted in Fig. 6(d) in Section 6.2 because of the smaller peak intensity of the stretched pulse. Thus we see that the conditions for optimal conversion efficiency are highly sensitive to the optical pump conditions. This could be one reason why various experiments report very different saturation curves.
Fig. 7. (a) Experimentally obtained conversion-efficiency saturation curves optimized for different pump intensities. The black curve is optimized for the maximum intensity while the red curve is optimized for the smallest intensity. The optimal experimental conditions are different for different intensities, as seen in the hysteresis of the curves. (b) Theoretical calculations of conversion-efficiency saturation curves for the experimental parameters in (a). Good quantitative agreement between experiments and theory is seen. When the intensity is lower, the optimal efficiency occurs at a larger value of h. This is because cascading effects occur at a slower rate and enable a longer effective interaction length. The optimal values of h are larger than those in Fig. 6(d) due to the use of a stretched pulse in the experiments.
Fig. 8. (a) When the fluence is 10 mJ/cm^2, the conversion efficiency is 0.5%. The THz spectrum as a function of the transverse co-ordinate x is relatively uniform, with all points having a broadband THz spectrum centred at ~0.45 THz. (b) As the fluence is increased to 35 mJ/cm^2, the conversion efficiency increases to 0.9% but the THz beam now contains a large spatial chirp and has an effectively reduced spot size. As the optical beam propagates to more negative values of x, it has been significantly broadened spectrally. Along with dispersive effects, this inhibits further coherent growth of THz radiation. Absorptive effects then dominate, leaving only the lower-frequency THz components with smaller absorption intact.
Trade-offs of optimizing conversion efficiency
Finally, we highlight some trade-offs of optimizing only the conversion efficiency in Figs. 8(a) and 8(b). Here, the THz spectrum as a function of the transverse spatial co-ordinate x is shown for two different fluences. In Fig. 8(a), the optical fluence is 10 mJ/cm^2, which results in a conversion efficiency of 0.5%. The THz spectrum is virtually identical across the beam cross-section (x co-ordinate), as seen in Fig. 8(a). In Fig. 8(b), a conversion efficiency of 0.9% is achieved with a fluence of 35 mJ/cm^2. However, the THz spectrum is spatially chirped across the beam cross-section and has an effectively smaller spot size. As the optical pump beam generates THz, it suffers spectral broadening due to cascading effects. Phase mismatch is accentuated in the presence of this larger spectral bandwidth by GVD-AD and GVD-MD, which causes the coherent growth of THz to cease at the more negative values of x (see Fig. 2(a): the optical beam propagates towards negative x values). In the absence of coherent growth, THz absorption dominates and the THz spectrum red-shifts (since absorption is lower for lower THz frequencies). Thus, while the conversion efficiency is increased, a spatial chirp is introduced in the THz beam profile along with a reduction in the THz beam spot size, which may not be suitable for certain applications.
Conclusion and Future Outlook
In conclusion, a new approach to modelling THz generation via optical rectification (OR) using tilted-pulse-fronts (TPF's) was presented and discussed. The approach was formulated to consider (i) spatio-temporal distortions of the optical pump pulse, (ii) the coupled nonlinear interaction of the THz and optical fields in 2-D, as well as (iii) self-phase modulation and (iv) stimulated Raman scattering. The formulation was done in a way that circumvents challenging numerical issues. It was validated by comparisons to experiments and analytic calculations, with good quantitative agreement in both cases. We described the physics of OR using TPF's; in particular, we discussed the problem from the transverse-momentum and time-domain viewpoints. It was seen that the large spectral broadening which accompanies THz generation also leads to broadening in transverse momentum, since each frequency component in an angularly dispersed beam has a well-defined transverse-momentum value. Thus, the optical pump has an increased frequency and transverse-momentum spread. Group velocity dispersion due to angular and material dispersion then causes a spatio-temporal break-up of the optical pump pulse which inhibits further coherent build-up of THz radiation. It is seen that the THz conversion efficiency reduces for very large beam sizes. This suggests the use of optical pump beams that are elliptically shaped for high-energy pumping. Guidelines to optimize the setup are provided. Imaging errors were shown to be critical, and careful alignment is required to optimize efficiency. It is seen that the optimal setup conditions are different for different pump conditions. Finally, we showed how optimizing the conversion efficiency can lead to other trade-offs, such as a deterioration of the spatial THz beam profile. This work provides an overview for optimizing such sources for various applications. | 8,624 | 2014-10-29T00:00:00.000 | [
"Physics"
] |
Early detection of fatigue based on heart rate in sedentary computer work in young and old adults
Abstract. The growing number of elderly workers calls for finding objective measures for monitoring mental state to avoid risk factors of fatigue. Heart rate is a physiological index which nowadays can accurately and reliably be measured with affordable and unobtrusive gadgets, e.g. wristbands. In this study, 36 participants (17 old and 19 young adults) were recruited to perform a prolonged 40-min mentally demanding task comprising 240 cycles while their heart rate was measured. Each cycle began by memorizing a random pattern of connected points displayed on a computer screen, followed by replicating the pattern while only the points constructing the pattern were shown. The replication was performed by clicking with a computer mouse on the points to redraw the connecting lines in a sequential manner. The task performance in each cycle was calculated based on the accuracy and speed of responding. After each 20 cycles, i.e. one segment, participants rated their perceived mental fatigue on the Karolinska Sleepiness Scale (KSS) while the task execution was paused for 5 s. The mean and range of heart rate in each cycle, HRM and HRR respectively, were calculated and, together with the task performance, averaged across each segment for each participant. Repeated-measures analysis of variance was employed to assess the effect of time-on-task (TOT), i.e. 12 segments, on the aforementioned cardiac (HRM and HRR), behavioral (task performance) and subjective (KSS) measures, with age as the between-participant factor. The statistical analysis revealed that the range of heart rate followed an increasing trend both in the young and elderly groups, with a significant main effect of TOT, p<0.001. The HRM exhibited a tendency to increase as a function of TOT in both the young and elderly groups, p=0.054. The performance was also significantly affected by TOT in both the young and elderly groups, p<0.001, with an increasing trend in the elderly group and a fluctuating trend in the young group. The KSS, as the gold-standard indicator of mental fatigue, increased with TOT in both groups, p<0.001. No interaction between TOT segments and age groups was found in any of the measures except in the performance.
Introduction
The aging workforce makes mental health an important consideration in the design of future workplaces, specifically for mentally demanding tasks such as computer work. Such a design would have the flexibility to adapt to the user's health condition, which requires acquiring relevant biological data in an unobtrusive manner. One accessible option is heart rate monitoring, which is easily feasible using wearable heart rate sensors [1], [2].
One important problem during sustained computer work is the development of mental fatigue. Some studies affirm the usability of heart rate in the detection of mental fatigue [3]- [5]. Since heart rate is regulated by the autonomic nervous system [6], this association is seemingly plausible. However further studies have been suggested to explore the association between heart rate and fatigue development [4], especially amongst individuals from different age groups.
In this study, we aimed to analyze the changes of heart rate characteristics during a prolonged computer work both in young and old adults to see whether we can use them as biomarkers for the detection of mental fatigue development. We have employed a fitness tracker to record heart rate while individuals performed the computer work with mental demands. We statistically analyzed the association between the heart rate features and the time-on-task (TOT).
Participants
Twenty participants (nine females, aged 23 (SD 3) years) formed the young group and 18 participants (11 females, aged 58 (SD 7) years) formed the elderly group; all participated voluntarily in this study. They were right-handed, had normal or corrected-to-normal vision, and reported no background of mental or psychological disorders and no history of chronic fatigue. The participants were asked to abstain from alcohol for 24 h, and from caffeine, smoking and drugs for 12 h, prior to the experimental days. Two participants (one from each group) were excluded because of missing data. The study was approved by The North Denmark Region Committee on Health Research Ethics, project number N-20160023, and conducted in accordance with the Declaration of Helsinki.
Experimental procedure
A task (WAME 1.0, [9]) was developed in a graphical user interface (MATLAB R2015b) based on standard models of computer work [10], [11]. The participants sat at a desk and performed the task on a computer screen using a computer mouse. The task has been described in our previous work [11]. Briefly, the task was displayed to the participants on a 19-inch LCD monitor (1280×1024 pixels). The participants performed the task for 40 min. Before that, they were instructed to perform 5-min practice episodes to become familiar with the task. The task consisted of 240 cycles. After every 20 cycles (one segment), participants indicated their mental fatigue level on the Karolinska Sleepiness Scale (KSS) [12], ranging from 1 ("Very alert") to 9 ("Very sleepy, fighting sleep"), within five seconds. Each cycle began with memorization of a pattern of connected points in different shapes. It was followed by fixating on a single point while the pattern was not displayed. Afterwards, the points without connecting lines appeared and the participant had a limited time to click on the points in a specific order to replicate the recently memorized pattern. An extra (distracting) point was also shown during the replication period, which was not to be clicked. A new cycle with a new pattern appeared after the offset of each cycle, with no pause in between. The patterns were generated randomly subject to some constraints: no loops or crossings were allowed for the lines connecting the points, and the lengths of the lines and the angles between two connected lines were limited to ensure that the connected pattern was located in the center of the computer screen. The experiments were performed at the same time of day (10-12 a.m. or 1-3 p.m.) to reduce the possible effects of variation in circadian rhythms. The first two sections of each cycle took 2.34 seconds, and the third section of a cycle took 5.06 seconds.
Measurements
The heart rate was recorded during each task episode using an A300 fitness tracker (Polar Electro Oy, Finland). This device provides instantaneous heart rate at one-second intervals. The stated accuracy of the heart rate measurement is about ±1% or 1 bpm in stable conditions, which likely also applies to our experimental setting of sedentary computer work. The validity and usability of this device have been confirmed in an independent study [13]. From the heart rate data, we extracted the mean and range of heart rate (indicated by HRM and HRR, respectively) during each cycle and averaged them across cycles for each segment.
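A minimal sketch of the per-cycle and per-segment feature extraction described here, assuming a long-format table with one heart-rate sample per second labelled by participant, segment and cycle (column names are our own):

```python
import pandas as pd

def hr_features(df: pd.DataFrame) -> pd.DataFrame:
    """Compute HRM (mean heart rate) and HRR (range of heart rate) per cycle,
    then average them over the 20 cycles of each segment for every participant.
    Expected columns: participant, segment, cycle, hr."""
    per_cycle = (df.groupby(["participant", "segment", "cycle"])["hr"]
                   .agg(HRM="mean", HRR=lambda s: s.max() - s.min())
                   .reset_index())
    per_segment = (per_cycle.groupby(["participant", "segment"])[["HRM", "HRR"]]
                             .mean()
                             .reset_index())
    return per_segment
```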
We also measured the performance of the task based on the clicking accuracy and speed. As described previously [9], the accuracy was computed based on how completely the patterns were replicated, considering all clicks. The speed referred to how fast the pattern replication was drawn by the participants. The performance measure was monotonically related to how well the participants performed the task.
In addition, we acquired the subjective ratings of perceived mental fatigue on a Likert scale from zero (no fatigue at all) to 10 (extremely fatigued) before and after the task.
Statistics
Repeated-measures analysis of variance was used to examine the effects of change in segments (1-12) on the cardiac features (HRM and HRR), performance, and KSS. The mental fatigue scores acquired before (baseline) and after the task were compared using a paired t-test. If the assumption of sphericity was not met, a Greenhouse-Geisser correction was applied. Bonferroni adjustment was used for pairwise comparisons across the mental load levels.
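One possible way to run the corresponding mixed repeated-measures analysis in Python is sketched below with the pingouin package; the long-format layout and the column names are assumptions, and the exact software used in the study is not specified here.

```python
import pingouin as pg

# long format: one row per participant x segment, with columns
# 'participant', 'segment' (within factor, 1-12), 'group' (between factor)
# and the dependent variable, e.g. 'HRR', 'HRM', 'performance' or 'KSS'.
def tot_by_age_anova(df, dv):
    # mixed ANOVA; pingouin reports sphericity-corrected (Greenhouse-Geisser)
    # p-values for the within factor when applicable
    return pg.mixed_anova(data=df, dv=dv, within="segment",
                          between="group", subject="participant")

def posthoc_segments(df, dv):
    # pairwise comparisons across segments with Bonferroni adjustment
    return pg.pairwise_tests(data=df, dv=dv, within="segment",
                             subject="participant", padjust="bonf")
```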
The performance fluctuated in the young group and increased in the elderly group with increasing TOT (Fig. 1(c)), F(11, 374) = 4.0, p < .001, η_p² = .1. There was no interaction between the TOT segments and groups for the performance. There was a significant difference in performance between the groups, F(1, 34) = 32.8, p < .001, η_p² = .5.
According to Fig. 1(d), KSS increased with increasing TOT segments in both groups, F(1.8, 59.9) = 12.8, p < .001, η_p² = .3. There was no interaction between TOT segments and groups in KSS, and no significant difference between groups was observed in KSS. In addition, mental fatigue ratings increased significantly from their baseline values in both the young and elderly groups, p < .001.
Conclusion
The results showed that HRR was sensitive to fatigue development. The performance and perceived mental fatigue changed in concordance with increasing TOT segments. This, in sum, lends support to the possibility of using the HRR as an index to detect fatigue development in computer work in young and elderly individuals. | 2,401 | 2018-08-05T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Weighted Boundedness of Certain Sublinear Operators in Generalized Morrey Spaces on Quasi-Metric Measure Spaces Under the Growth Condition
We prove the weighted boundedness of Calderón–Zygmund and maximal singular operators in generalized Morrey spaces on quasi-metric measure spaces, in general non-homogeneous, only under the growth condition on the measure, for a certain class of weights. The weights and the characteristic of the spaces are independent of each other. Weighted boundedness of the maximal operator is also proved in the case when the lower and upper Ahlfors exponents coincide with each other. Our approach is based on two important steps. The first is a certain transference theorem, where, without using homogeneity of the space, we provide a condition which ensures that every sublinear operator with the size condition, bounded in Lebesgue space, is also bounded in the generalized Morrey space. The second is a reduction theorem which reduces the weighted boundedness of the considered sublinear operators to that of weighted Hardy operators and the non-weighted boundedness of some special operators.
Introduction
We study the weighted boundedness of certain sublinear operators in generalized Morrey spaces L_{p,ϕ}(X) defined on quasi-metric measure spaces (X, d, μ). We do not suppose that (X, d, μ) is homogeneous; we only assume that it satisfies the growth condition μ(B(x, r)) ≤ c r^ν, 0 < r < diam X ≤ ∞, ν > 0. (1.1) The study includes Calderón-Zygmund singular operators with standard kernel, the corresponding maximal singular operator and the standard maximal operator. In the case of the singular and maximal singular operators we obtain results on weighted boundedness in the generalized Morrey spaces under the only assumption that the measure satisfies the growth condition (1.1).
Sublinear operators under consideration are supposed to satisfy the following two conditions: (1) they are bounded in L p (X ), (2) they satisfy a certain size condition, related to the exponent of the growth condition.
For the study of sublinear operators of singular type in the space L p (X ) under the growth condition (2.2) we refer to [25]. There are known results on the boundedness of such operators in L p (X ) under the growth condition more general than (2.2), where r ν is replaced by a given dominant λ(x, r ), see [14] and [15].
We consider the generalized Morrey spaces defined by the norm (1.2), where the localization set is an arbitrary subset of X. The introduction of this set helps to unite local and global Morrey spaces. For a sublinear operator T satisfying the conditions (1) and (2), we study the boundedness of the corresponding weighted operators w T (1/w). For classical and generalized Morrey spaces and their applications we refer, for instance, to the books [8,19,27,36,38,39] and the overview paper [28].
Singular-type operators under a more general growth condition (in the sense of [14] and [15]) were studied in [21] and [40]. In [21] the operators T were studied in the non-weighted case, while in [40] they were considered in a weighted space L_{p,k}(X, w) of a specific form which goes back to [20]; in that case the Morrey space L_{p,k}(X, w) is in fact a non-weighted classical Morrey space. In this paper we study sublinear operators satisfying the properties (1) and (2) in weighted generalized Morrey spaces on a quasi-metric measure space (X, d, μ), under the "classical" growth condition (2.2). We consider "radial" weights w(x) = v[d(x, x_0)], x_0 ∈ X, where the function v belongs to one of the classes V_+, V_−; see their definition in Sect. 2.3.
Our main results are as follows. First we show that the known way of transferring L_p-boundedness to Morrey-boundedness under the size condition may be established without using homogeneity of the space; see the Transference Theorem in Sect. 3.1. More precisely, we show that a certain condition involving ν from (1.1), imposed on the function ϕ(x, r) defining the Morrey space, guarantees that any sublinear operator with the size condition which is bounded in L_p(X) is also bounded in the Morrey space L_{p,ϕ}(X).
Moreover, under the growth condition alone, we are able to efficiently estimate the Morrey modular of T f via that of f (see Theorem 3.8), which leads to the boundedness result in Morrey spaces in Theorem 3.9.
Further, we provide a certain pointwise estimate for the weighted singular, maximal singular and maximal operators with the above-mentioned radial weights, via the corresponding non-weighted operators plus the following operators: weighted Hardy operators and certain non-weighted operators which may be considered as hybrids of Hardy and potential operators; see the Reduction Theorem 3.11.
Since the estimate in this theorem is pointwise, it reduces the weighted boundedness of the weighted singular, maximal singular and maximal operators in any Banach function space with the lattice property to the boundedness of non-weighted operators, of weighted Hardy operators with the same weight, and of some specific "hybrids". In this paper we use this estimate in the case of the generalized Morrey space L_{p,ϕ}(X).
As a separate result of interest we show that some of those hybrids are dominated by the modified maximal operator (modification concerns the use of the growth condition), see Theorem 3.5.
This reduction and the above mentioned Transference Theorem together with the L p results [25], allow us to obtain a result on the weighted boundedness of the weighted singular, maximal singular and maximal operators in the spaces L p,ϕ (X ) as given in Theorem 3.20. To this end, we obtain conditions for the weighted boundedness of Hardy operators in the spaces L p,ϕ (X ) under the only growth condition for (X , d, μ).
The paper is organized as follows. In Sect. 2 we provide necessary information on quasi-metric measure spaces (X , d, μ) together with definition of the space L p,ϕ (X ) and define the class of weights. Sect. 3 contains our main results. In Sect. 3.1 we prove the above mentioned Transference Theorem for an arbitrary sublinear operator with the size condition. In Sect. 3.2 we pass to weighted operators and prove the above mentioned Reduction Theorem containing the pointwise estimate of weighted operators. Section 3.3 starts with a result of weighted boundedness of Hardy operators in generalized Morrey spaces L p,ϕ (X ). This allows us to apply Transference and Reduction Theorems to obtain conditions on the weight and the function ϕ(x, r ), insuring the weighted boundedness of singular, maximal singular and maximal operators in the spaces L p,ϕ (X ). In Corollary 3.21, where we take ϕ(x, r ) = r λ for simplicity, we give sufficient conditions for the validity of those conditions in terms of Matuszewska-Orlicz indices of the weight. Finally, in Sect. 1 (Appendix), for reader's convenience, we provide necessary information for Matuszewska-Orlicz indices. The author thanks the anonymous referees for their careful reading of the paper, and useful comments.
Preliminaries on Quasi-Metric Measure Spaces
Basics on quasi-metric measure spaces may be found e.g. in [7] and [13]. Below we provide necessary definitions which we use in the paper.
Let (X, d, μ) be a quasi-metric measure space with measure μ and quasi-distance d. Everywhere in the sequel we suppose that the following properties of (X, d, μ) hold: (1) all balls are open sets; (2) the spheres S(x, r) := {y ∈ X : d(y, x) = r} have zero measure for all x and r. The set (X, d, μ) is said to satisfy the growth condition if there exist a constant A > 0 and an exponent ν > 0, which is fractional in general, such that μ(B(x, r)) ≤ A r^ν, (2.2) where x ∈ X and r ∈ (0, diam X). For a more general notion of the growth condition, i.e. with a given dominant of the measure of balls, we refer to [14] and [15]. In this paper we use the growth condition of the form (2.2). We say that (X, d, μ) is regular if the measure satisfies the lower and upper Ahlfors conditions with coinciding exponents, i.e. c_1 r^ν ≤ μ(B(x, r)) ≤ c_2 r^ν.
Estimates of the type provided by the lemma below are known but we give its short proof for completeness of presentation.
Lemma 2.2 below provides a certain replacement of the formula of passage to polar coordinates used in the case X = R^n. This lemma is a simplified version of more general estimates proved in [32].
Let an arbitrary set of points in X be fixed (the localization set). We use the uniform doubling condition (2.5). In the sequel we use the abbreviations: a.i. = almost increasing and a.d. = almost decreasing.
Lemma 2.2 ([32, Lemmas 2.5 and 2.8]) Let (X, d, μ) satisfy the growth condition (2.2), let L(ξ, t) be a non-negative function on the localization set × (0, diam X), 0 < diam X ≤ ∞, a.i. in t uniformly in ξ, and let the doubling condition (2.5) be satisfied. Then the corresponding estimates, including (2.6), hold, where ξ belongs to the localization set, a ∈ R and 0 < r < diam X ≤ ∞, whether or not the right-hand sides of these estimates exist.
Generalized Morrey Spaces L_{p,ϕ}(X)
The generalized Morrey spaces are defined by the norm (2.8) and its local counterpart. The spaces defined by the local norm are often called generalized local Morrey spaces; the spaces defined by the norm (2.8) are correspondingly called generalized global Morrey spaces. Both may be united in a single approach by localization with respect to an arbitrary subset of X, not just with respect to the case of an isolated point {x_0}. That is, one can estimate the Morrey-regularity of functions f on an arbitrary given subset of X, with the extremal cases of the whole space X and of a single point {x_0}, x_0 ∈ X, admitted. The corresponding space defined by this norm will be denoted by L_{p,ϕ}(X). The principal estimates on which the proofs of this paper are based are pointwise; see Sect. 3.2.
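For orientation only, a commonly used form of such a norm is recalled below; the symbol Π for the localization set and the exact normalization in the source may differ.

```latex
\| f \|_{L^{p,\varphi}(X)}
  = \sup_{x \in \Pi,\; 0 < r < \ell}
    \left( \frac{1}{\varphi(x,r)} \int_{B(x,r)} |f(y)|^{p}\, d\mu(y) \right)^{1/p},
\qquad \ell = \operatorname{diam} X,
```
where Π ⊆ X denotes the localization set; the choices Π = {x_0} and Π = X correspond to the local and global spaces, respectively.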
Everywhere in the sequel we suppose that ϕ(x, r) is a positive measurable function on the localization set × (0, diam X), 0 < diam X ≤ ∞, and that the à priori assumptions (2.11)-(2.12) on ϕ hold. In the sequel we also use the notation (2.13). For classical Morrey spaces L_{p,λ}(R^n), as is known, the model function is |x|, in the global case or in the local case centered at the origin, respectively. We shall deal with the corresponding "model" function in the general setting of quasi-metric measure spaces with the growth condition.
To this end we introduce the assumption that the uniform Zygmund conditions (2.14) hold, where 0 < r < diam X, x belongs to the localization set, and c does not depend on x and r.
Theorem 2.3
Let (X, d, μ) satisfy the growth condition and let ϕ(x, r) satisfy the Zygmund condition (2.14). Then the asserted estimate holds.
Proof On the right-hand side we can apply the inequality (2.6) with L(x, r) = ϕ(x, r). Note that the condition (2.5) of Lemma 2.2 is satisfied, being easily derived from (2.12). By (2.6) we obtain the required bound, due to (2.14). Let d(x, x_0) > 2kr. By the triangle inequality we obtain an estimate for d(y, x_0), after which the inequality (2.6) is applicable and we can proceed as in the previous case.
Let w(y) be an arbitrary weight on (X, d, μ), i.e. a μ-a.e. positive function in L^1_loc(X). We define the weighted generalized Morrey space L_{p,ϕ,w}(X) as the space of functions with finite norm (2.15).
Classes V_+ and V_− of Radial Weights
The following classes of weight functions were introduced in [30], see also [26].
Note that for power weights membership in these classes is determined directly by the exponent. The following lemma provides sufficient conditions for functions to belong to the classes V_+ and V_−.
If this holds for some c > 0, then v ∈ V_−. In particular, this applies when diam X < ∞. In the case diam X = ∞, the statement holds with the logarithmic factor log(A/t) replaced by log_e max{t, 1/t}.
Definition 3.1 Let 1 < p < ∞. A sublinear operator T will be called a p-admissible singular-type operator if: (1) T satisfies the size condition of the form |T f(x)| ≤ c ∫_X |f(y)| d(x, y)^{-ν} dμ(y) for f with compact support and x ∉ supp f, (3.1) where ν comes from the growth condition (2.2); (2) T is bounded in L_p(X, d, μ).
Remark 3.2 Usually the size condition is defined in a stronger form imposed on the kernel, which ensures (3.1). In the main theorem of this section, i.e. in the transference of L_p-boundedness to Morrey-boundedness, the form (3.1) of the size condition is sufficient for our goals.
First of all we keep in mind singular-type operators as p-admissible operators, in view of Theorem 3.3. To be precise, we define the singular operator T as T f(x) = ∫_X K(x, y) f(y) dμ(y), where the kernel K(x, y) satisfies the standard conditions with some σ > 0, and we assume, as usual (see, for instance, [25]), that the operator T is bounded in L^2(X).
Theorem 3.3 (known; see [14] and [15] in a more general setting) Let (X, d, μ) satisfy the growth condition (2.2) and 1 < p < ∞. The singular operator T and the corresponding maximal singular operator with a standard kernel, if bounded in L^2(X), are bounded in L_p(X).
As other examples we mention the Hardy-type operators (3.8) and the following "hybrids" (3.9) of Hardy and potential operators, where 0 < γ ≤ ν. Note that K_{γ,ν}|_{γ=ν} = H and 𝒦_{γ,ν}|_{γ=ν} = ℋ. The operators (3.9) arise in the sequel in the reduction of the weighted boundedness of weighted singular operators in Morrey spaces to the boundedness of non-weighted singular operators; see Sect. 3.2. The operators K_{γ,ν} and 𝒦_{γ,ν} are p-admissible operators, as follows from Lemma 3.4 and Theorem 3.5, taking Remark 3.6 into account.
Lemma 3.4 Let x ∈ X \ supp f and 0 < γ ≤ ν. Then the operators K_{γ,ν} and 𝒦_{γ,ν} satisfy the size condition of Definition 3.1.
In the theorem below we use the modified maximal operator M_N, where the modification concerns the use of the growth condition. The operator K_{γ,ν} is dominated by the operator M_N, as shown in the next theorem.
Theorem 3.5 Let (X, d, μ) satisfy the growth condition (2.2) and 0 < γ ≤ ν. Then K_{γ,ν} f is pointwise dominated by M_N f up to a constant factor involving A, where A is the constant from the growth condition (2.2).
The proof of Theorem 3.8 is based on the following crucial lemma.
Lemma 3.7 Let (X, d, μ) satisfy the growth condition (2.2) and 1 ≤ p ≤ ∞. Then the estimate (3.13) holds for every real value of the parameter involved, where C does not depend on f, x ∈ X and r ∈ (0, ℓ/2), ℓ = diam X.
Proof The inequality (3.13) is proved by a known trick. We choose β > max{0, ν/p}. It is easy to check that 1/r^β ≤ c β ∫_r^ℓ dt/t^{1+β}, with c = 2^β/(2^β − 1), when 0 < r < ℓ/2 and ℓ < ∞; in the case ℓ = ∞ this holds with c = 1 and ≤ replaced by =. Then, since (ν − β)p < ν, by Lemma 2.1 we obtain the required estimate.
Theorem 3.8 Let (X, d, μ) satisfy the growth condition (2.2) with ℓ = diam X ≤ ∞, let 1 < p < ∞ and let T be a p-admissible sublinear operator of singular type. Then the estimate (3.14) holds for every f ∈ L_p^loc(X), where C does not depend on x ∈ X, r ∈ (0, ℓ) and f.
Proof
We split the function f into the parts supported in a neighbourhood of the point x and outside it, in the usual way: f = f_1 + f_2, with f_1 = f·χ_{B(x,2kr)}, where r > 0, and by the sublinearity of the operator T we have ||T f||_{L_p(B(x,r))} ≤ ||T f_1||_{L_p(B(x,r))} + ||T f_2||_{L_p(B(x,r))}.
By assumption (2) in Definition 3.1, we obtain ||T f_1||_{L_p(B(x,r))} ≤ C ||f||_{L_p(B(x,2kr))}. (3.16) To estimate T f_2, we make use of assumption (1) from Definition 3.1. By the triangle inequality (2.1) it is easy to check that the conditions z ∈ B(x, r) and y ∈ X \ B(x, 2kr) imply that d(z, y) is comparable to d(x, y). Therefore the resulting bound does not depend on z; estimating d(x, y)^{-ν} in this way and then applying Lemma 3.7, we get (3.17). The simpler direct estimate (3.16) for ||T f_1||_{L_p(B(x,r))}, as can easily be seen, is dominated by an estimate of similar form, which yields (3.14).
Theorem 3.9 (Transference Theorem) Let (X, d, μ) satisfy the growth condition (2.2) and 1 < p < ∞. Let also the function ϕ(x, r) satisfy the corresponding conditions, including (3.19). Then any sublinear operator T satisfying the size condition (3.1) and bounded in L_p(X) is also bounded in the generalized Morrey space L_{p,ϕ}(X). Proof The proof of this theorem is prepared by the pointwise estimate of Theorem 3.8: it remains to pass to the supremum in (3.14).
Note that the boundedness of the Hardy operator H in local Morrey spaces L_{p,ϕ}^{x_0}(X) and local vanishing Morrey spaces was studied in [22] under other assumptions on the function ϕ and the triple (X, d, μ).
Remark 3.10
The condition (3.19) is not needed in the case where ϕ(x, r) does not depend on x or if the localization set contains a finite number of points.
Reduction of Boundedness of Weighted Singular Integral Operators with Size Condition and the Weighted Maximal Operator to the Weighted Boundedness of Hardy Operators
In this section we consider integral operators, in general of singular type, T f(x) = ∫_X K(x, y) f(y) dμ(y), (3.20) under the only assumption that the kernel K(x, y) satisfies the size condition |K(x, y)| ≤ c d(x, y)^{-ν}, (3.21) where ν comes from the growth condition. Note that in this section we in fact do not even need to know that ν comes from the growth condition, since in the proof of the pointwise estimate in the theorem below we only use properties of weights of the classes V_±, the fact that the operator T satisfies the size condition with some ν > 0, and no information at all about (X, d, μ).
Our goal is to study the boundedness of such operators in weighted Morrey spaces, equivalently of the operator w T (1/w) in the non-weighted space L_{p,ϕ}(X), with "radial" weights w(y) = v[d(y, x_0)]. The pointwise estimate of Theorem 3.11 shows that, for any Banach function space with the lattice property over an arbitrary quasi-metric measure space (X, d, μ), the boundedness of the weighted operator w T (1/w) with w(y) = v[d(y, x_0)], v ∈ V_+ or V_−, x_0 ∈ X, is reduced to the non-weighted boundedness of the operator T, the boundedness of the weighted Hardy operators w H (1/w) and w ℋ (1/w), and the non-weighted boundedness of the simple operators K_{γ,ν} and 𝒦_{γ,ν}.
We also provide a similar reduction for the weighted maximal operator. If the corresponding assumptions hold for some C > 0 and α > 0, then
(3.28) holds when v ∈ V^+ and (3.29) holds when v ∈ V^−, where ᾱ is the least integer greater than or equal to α and the sum Σ_{m=1}^{ᾱ−1} should be omitted in the case ᾱ = 1.
Proof
We assume that f(x) > 0, x ∈ X, without loss of generality. By the size condition we have (3.30)
A function v(t), positive on (0, ℓ), is called quasi-monotone near the origin if there exist numbers α, β ∈ R such that v(t)/t^α is a.i. and v(t)/t^β is a.d. in a neighborhood of the origin. In the case ℓ = ∞ it is called quasi-monotone at infinity if there exist a, b ∈ R such that v(t)/t^a is a.i. and v(t)/t^b is a.d. in a neighborhood of infinity. Functions quasi-monotone at the origin and infinity have finite Matuszewska–Orlicz indices at the origin and infinity, respectively. These indices are defined as follows: m_0(v) = sup{α : v(x)/x^α is a.i.} and M_0(v) = inf{β : v(x)/x^β is a.d.}.
If v is quasi-monotone at infinity, then m_∞(v) = sup{a : v(x)/x^a is a.i.} and M_∞(v) = inf{b : v(x)/x^b is a.d.}. | 4,845.4 | 2022-03-17T00:00:00.000 | [
"Mathematics"
] |
Assessment of the percentage of full recombinant adeno-associated virus particles in a gene therapy drug using CryoTEM
In spite of continuous development of gene therapy vectors with thousands of drug candidates in clinical drug trials there are only a small number approved on the market today stressing the need to have characterization methods to assist in the validation of the drug development process. The level of packaging of the vector capsids appears to play a critical role in immunogenicity, hence an objective quantitative method assessing the content of particles containing a genome is an essential quality measurement. As transmission electron microscopy (TEM) allows direct visualization of the particles present in a specimen, it naturally seems as the most intuitive method of choice for characterizing recombinant adeno-associated virus (rAAV) particle packaging. Negative stain TEM (nsTEM) is an established characterization method for analysing the packaging of viral vectors. It has however shown limitations in terms of reliability. To overcome this drawback, we propose an analytical method based on CryoTEM that unambiguously and robustly determines the percentage of filled particles in an rAAV sample. In addition, we show that at a fixed number of vector particles the portion of filled particles correlates well with the potency of the drug. The method has been validated according to the ICH Q2 (R1) guidelines and the components investigated during the validation are presented in this study. The reliability of nsTEM as a method for the assessment of filled particles is also investigated along with a discussion about the origin of the observed variability of this method.
Introduction
After decades of research and development, gene therapy is nowadays a mature field allowing versatility for the treatment and curation of a broad range of severe and life-threatening diseases [1]. Thanks to this versatility several gene therapy products have been approved by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA) [2,3] the last few years while thousands of drug candidates are still undergoing clinical trials [4,5]. However, continuous development relating to the efficiency and safety of the strategies is still required [6], implying that there is a demand for objective characterization methodologies. Recombinant adeno-associated viruses (rAAV) represent one of the most widely used classes of vectors used in gene therapy for the encapsulation and delivery of a genetic sequence of interest [7,8]. The objective is to; replace a mutated gene with its healthy version, inactivate or repress the expression of mutated genes, edit genes in order to repair defects, or introduce new genes to gain function in cells that help in fighting disease. rAAV vectors have become a prominent vehicle for this purpose as they provide the advantages of a low immune response and are not known to be pathogenic. Clinical trials are currently ongoing for a broad range of diseases using rAAV [5]. For example, Haemophilia B represents an ideal target for which rAAV gene therapy has proven to lead to promising results [9][10][11], with several drug candidates currently undergoing clinical trials. Small and large animal models can recapitulate this monogenic disease and clinical results have shown a good correlation between the level of blood clotting factor and severity of disease [12][13][14]. Indeed, rAAV mediated expression of coagulation factor IX (FIX) of one or more percent in plasma as compared to normal already shifts the patient's Haemophilia B status from severe to moderate. Taking into account that a severe haemophilia patient profile corresponds to <1% clotting factor level [15], a clotting factor level increase of 1 to 2 percent in treated patients can be considered therapeutic. Clinical grade rAAV vectors consist of mixtures of empty and filled virions present in variable ratios, varying along the drug manufacturing process. The role of empty particles is not yet fully grasped. Some studies suggest that they can facilitate the gene transfer [16][17][18] while protocols are developed to reduce their presence as they are commonly considered as contaminants [19], with a potential immunotoxic risk [20,21] or inhibiting transduction [22][23][24]. These dichotomies highlight the need for a robust method to quantify the occurrence of each of the particle populations in a gene therapy drug to objectively elucidate their respective roles [25].
For more than two decades now, negative stain transmission electron microscopy (nsTEM) has been used as one of the methods of choice for the assessment of rAAV packaging [26]. This approach allows for direct observation of the particles and a clear discrimination between different particle populations as they appear in the image. More recent studies however have been questioning the reliability and robustness of this method [27][28][29] and are pointing out inconsistencies of the obtained results [27] due, among others, to the presence of intermediate dense particles [28]. Other analytical methods, such as optical density measurements [30], ion exchange chromatography (IEX) [28], ddPCR [31], qPCR combined with ELISA [32,33] size exclusion chromatography with multi-angle light scattering detection (SEC-MALS) [34], Analytical Ultracentrifugation (AUC) [35,36] and more recently charge detection Mass Spectrometry [37,38] have shown to provide promising results for this assessment. Each of these techniques has its unique strengths and limitations. Some even allow discrimination of subpopulations such as partially filled capsids or capsids with dimeric DNA species. No methodology allowing the separation and identification of these sub-populations has been described using Cryogenic Transmission Electron Microscopy (CryoTEM) yet, and the method introduced in this study only focuses on a binary discrimination of empty and full particle populations. However, in contrast to several of the other aforementioned methods, CryoTEM is considered a platform method, meaning that it is almost not affected by changes in serotype, packaged transgene or composition of the sample matrix. This versatility allows applying the same method without any modifications for different Gene Therapy products which is helpful in the comparative assessment of vector quality and anticipating immunogenic potential by leveraging (pre-)clinical data from Gene Therapy products at more advanced stages of development. As the method is based on a direct visualization of the internal features of particles, CryoTEM in combination with image analysis allows an unambiguous discrimination between the particles containing a genome and the particles lacking it, as recently reported for the characterization of rAAV particle packaging [39]. Table 1 provides an overview of methods commonly used for detecting full and empty capsid particles. A recent comparison of these techniques demonstrated good correlation between the listed methods [40]. Within this set of orthogonal methods, AUC is a popular technique due to its capability of discriminating the population of partially filled capsids, a population that is gaining more and more interest in the industry. This species is known to be heterogenous and is subject to further characterization typically trough next generation sequencing or even long read sequencing techniques.
However, this study focuses on analytical methods for the release of clinical grade AAV vectors. Such methods need to fulfil the stringent cGMP requirements, which demands an analysis software that ensures full data integrity and 21CFR part 11 compliance. In this respect AUC is still suffering significant limitations. Since cryoTEM is often applied in research settings, corresponding analysis software typically lack cGMP compliance. In contrast, the cryoTEM method presented here overcomes these limitations and applies a fully validated and 21CFR part 11 compliant software.
In the comparison of orthogonal methods that are commonly used to quantitate the relative content of full and empty capsids in AAV preparations in Table 1, the assessment was based on the experience the authors have gained with these techniques. Some of the parameters evaluated herein might be subjected to evolution as the techniques undergo further developments. Depending on the exact assay set-up the assessment may differ slightly between different laboratories. Due to the lack of an international reference standard for AAV with assigned relative content of full and empty particles, a statement on method accuracy is challenging. In the absence of such a reference standard, an assessment on the alignment and correlation between orthogonal techniques adds valuable information on method reliability.
In the present study, we introduce a complete analytical method based on CryoTEM for quantification of the packaging ratio of a gene therapy drug composed of rAAV8 vectors. The scope of the study is restrained to the rAAV8 serotype, with a focus on proposing a broad set of experiments on a single type of specimen. The CryoTEM images are subjected to image analysis where the rAAV particles are segmented and classified based on their internal density. A CryoTEM correlation study between the derived packaging ratio and the potency of the drug is presented. The method has been validated according to the ICH Q2 (R1) guidelines [41] and can be used as a quality control in a Good Manufacturing Practice (GMP) workflow. It hence provides an analytical method compatible with established production methodologies and regulatory requirements [27]. Specificity, precision, intermediate precision, linearity, dilutional accuracy, and robustness as defined in ICH Q2 (R1) are assessed. It is the only successfully validated TEM method for the assessment of the percentage of filled particles in an rAAV gene carrier used for gene therapy, hence validating CryoTEM as the method of choice for vector packaging analysis.
Materials and methods
For this study two types of sample material were investigated. Sample set 1 refers to purified ultracentrifugation fractions with varying degree of full and empty capsids that were directly withdrawn from the production process. For sample set 2 an empty capsid preparation (reference empty) produced by purifying the ultracentrifugation fraction containing primarily empty capsids was mixed at predefined ratio with a preparation containing a high degree of full capsids (reference full) to generate samples with intermediate percentage of full capsids.
Sample set 1
AAV production. For this study, HEK293 cells adapted to growth in suspension and cultivated in chemically defined serum-free media (FreeStyle™ F17, ThermoFisher, NY, USA), were used to produce rAAV8 vectors. Batch cultures were cultivated in 10 L bioreactors at 37˚C in a humidified atmosphere containing 5% carbon dioxide and with constant stirring at 235 rpm. Transient transfection of HEK293 cells with three plasmids containing Adenovirus 5 Helper genes, Rep2Cap8 and human FIX Padua sequence, respectively, was carried out with Polyethylenimine (Merck KGaA, Darmstadt, Germany) following the supplier transfection protocol. Five days after transfection, the fermentation broth containing rAAV8 particles was separated from cell debris with a clarification step, followed by ultrafiltration and diafiltration step for product concentration and buffer exchange while also reducing impurities. AEX chromatography was used to continue reducing negatively charged impurities followed by a second ultrafiltration and diafiltration step to further concentrate the product and remove protein impurities. The concentrated and diafiltrated intermediate containing rAAV8 was subsequently processed with an ultracentrifugation step according to a Takeda proprietary buffer and protocol, for separation of full and empty particles.
Sample preparation. Applying Takeda's proprietary ultracentrifugation process (WO 2018/128688 A1), full rAAV8 particles were isolated from the empty particles and collected separately. The ultracentrifugation step was performed using a core filled with 50% product and 50% TBS/sucrose buffer for 6 hours at a target rotation speed of 35000 rpm. The AAV8 particles contained in the starting material move into the TBS/sucrose buffer gradient until they reach a point at which their density matches the density of the surrounding gradient. As full and empty capsids differ in their density, this feature is used for the separation of full and empty virus capsids.
For sample set 1, the gradient after centrifugation was subdivided in several fractions with decreasing density. Each fraction was further purified by Ion exchange chromatography and subsequently analysed by cryoTEM, qPCR, and ELISA. Table 2 allows for direct comparison of full and empty capsid analysis through cryoTEM and calculation of the ratio of vector genomes (vg) as determined by qPCR and capsid particles (cp) as quantified by ELISA. However, while both methods show the same trend there is not a one-to-one correlation between the methods. This meets the expectations as both methods rely on different technologies for determining the percentage of full and empty capsids.
Sample set 2
For this study, HEK293 cells adapted to growth in suspension and cultivated in chemically defined serum-free media (FreeStyle™ F17, ThermoFisher, NY, USA), were used to produce rAAV8 vectors. Batch cultures were cultivated in 10L bioreactors at 37˚C in a humidified atmosphere containing 5% carbon dioxide and with constant stirring at 235 rpm. Transient transfection of HEK293 cells with three plasmids containing Adenovirus 5 Helper genes, Rep2Cap8 and human FIX Padua sequence, respectively, was carried out with Polyethylenimine (Merck KGaA, Darmstadt, Germany) following the supplier transfection protocol. Five days after transfection, the fermentation broth containing rAAV8 particles was separated from cell debris with a clarification step, followed by ultrafiltration and diafiltration step for product concentration and buffer exchange while also reducing impurities. AEX chromatography was used to continue reducing negatively charged impurities followed by a second ultrafiltration and diafiltration step to further concentrate the product and remove protein impurities. The concentrated and diafiltrated intermediate containing rAAV8 particles was subsequently processed with an ultracentrifugation step according to a Takeda proprietary buffer and protocol, for separation of full and empty particles. While for samples S2.1 and S2.6 ultracentrifugation fractions with high degree of full particles were collected, for sample S2.5 fractions with mainly empty capsids were pooled for further processing.
Subsequently, product fractions (S2.1, S2.5, S2.6) were further purified by Ion exchange chromatography and IEX Eluates for samples S2.1 and S2.5 were nanofiltrated. The resulting starting materials for the spiking (S2.1 "reference full" and S2.5 "reference empty") were thoroughly analysed. The mixed samples with intermediate degrees of full particles (S2.2-S2.4) were prepared based on the AAV ELISA titres as well as on an initial CryoTEM result. The "reference empty" material was prediluted to adjust for differences in particle concentration followed by the mixing with the "reference full" sample to yield the theoretical degree of full capsids with a nominal value of 40%, 60% and 70% full particles. The list of specimens analysed in this study are summarised in Table 3.
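The concentration matching and blending described above amount to a simple mass balance. The sketch below, in Python, illustrates the calculation; the function names are ours, and the example figures (roughly 79% and 1% full for the two reference materials) are placeholders rather than the exact values or procedure used to prepare S2.2–S2.4.

```python
# Sketch: volumes needed to blend a "reference full" and a "reference empty" AAV
# preparation into a sample with a nominal percentage of full capsids.
# All numerical values below are hypothetical placeholders.

def mixing_fraction(full_pct_ref_full, full_pct_ref_empty, target_full_pct):
    """Volume fraction of the (concentration-matched) reference-full material
    required so that the blend reaches the nominal percentage of full capsids."""
    x = (target_full_pct - full_pct_ref_empty) / (full_pct_ref_full - full_pct_ref_empty)
    if not 0.0 <= x <= 1.0:
        raise ValueError("target not reachable from these reference materials")
    return x

def predilution_factor(titre_empty_cp_per_ml, titre_full_cp_per_ml):
    """Dilution factor applied to the empty reference so that both starting
    materials have matching capsid-particle concentrations (ELISA titres)."""
    return titre_empty_cp_per_ml / titre_full_cp_per_ml

# Hypothetical example: reference full at 79% full, reference empty at 1% full.
for nominal in (40.0, 60.0, 70.0):
    x = mixing_fraction(79.0, 1.0, nominal)
    print(f"nominal {nominal:.0f}% full -> "
          f"{100*x:.1f}% ref-full / {100*(1-x):.1f}% ref-empty by volume")
```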
AAV8 ELISA
The commercially available enzyme-linked immunosorbent assay (ELISA; Progen AAV8 titration ELISA kit, cat. No. PRAAV8) uses a monoclonal antibody (ADK8) specific for a conformational epitope on assembled AAV8 capsids. This plate-immobilized antibody captures AAV-8 particles from the specimen. Captured particles are then detected by the binding of biotinylated anti-AAV8 ADK8 since the epitope targeted is repeatedly expressed on the assembled AAV8 capsid. Streptavidin peroxidase and a peroxidase substrate is then used for measuring bound anti-AAV8 and thus the concentration of AAV8 capsid. The color reaction was measured photometrically at 450 nm. The kit contains an AAV2/8 particle preparation as calibration standard with a labelled AAV8 particle concentration. However, an internal reference standard was used for quantification of capsid particles. This standard was assigned a μg/mL value based on densitometric analysis of a Coomassie stained SDS-PAGE. The assigned μg/mL value was then correlated against the ATCC recombinant Adeno-associated virus 8 (VR-1816) standard, which led to a conversion factor of 8.5E+13 cp/mg.
FIX-qPCR
Samples were treated with DNAse (NEB) to remove extraneous ITR target sequences. After treatment with Proteinase K (NEB), the AAV genome is released from the capsid. A subsequent restriction enzyme digest with BssHII (NEB) was performed to resolve AAV ITR T-shape structures. For quantification, a Taqman-based method with FIX-specific primers and probe was used. Plasmid encoding the FIX transgene as well as regulatory elements was linearized by ScaI (NEB) restriction digest and separated on an agarose gel, and the band specific for the full-length vector genome was purified from the gel (QIAquick Gel Extraction Kit) to serve as the reference standard.
Biopotency assays
For the determination of in vitro biopotency, HepG2 cells were infected in duplicates with AAV8 vector carrying FIX transgene as described previously [42]. Subsequently, the chromogenic activity of FIX in the supernatant was measured using the Rox Factor IX kit (Rossix AB, Moelndal, Sweden). The results are given as biopotency unit (BPU) representing the relative FIX activity at a dose of 3.27 x 10 3 capsid particles (cp) per cell. In the course of the in vivo biopotency assay, 2.47 x 10 11 cp/kg of FIX encoding AAV8 particles were infused intravenously into seven hFIX-knockout mice (B6;129P2-F9 tm1Dws ) per group [43] that were bred by Charles River GmbH (Sulzfeld, Germany) and kept as described [44]. Human FIX activity in mouse citrate plasma drawn at day 14 was tested by a one-stage activated partial thromboplastin time (APTT) assay using human FIX-deficient plasma as substrate as already described [43]. Results refer to a human plasma standard which was calibrated against an international standard.
TEM specimen preparation and imaging
CryoTEM. 400 mesh copper grids coated with a carbon film, overlaid with a Formvar 1 film (TedPella, Inc., CA, USA) were hydrophilized using a glow discharger (Pelco EasiGlow™, TedPella Inc., CA, USA). A glow discharged grid was mounted on a vitrification robot (Vitro-bot™ Mark II, FEI, OR, USA). 3 μL of sample were placed onto the grid in the specimen chamber of the vitrification robot, under temperature and humidity-controlled conditions (16˚C, 99% relative humidity (rH)). After an adsorption time of ca. 10 sec, the grid was blotted-off and subsequently plunge-frozen in liquid ethane. The specimen was then stored in liquid nitrogen until insertion in the microscope.
The frozen specimen was mounted onto a Gatan 626 single tilt cryo holder (Gatan Inc., CA, USA) under cryogenic conditions and subsequently inserted in a CM200 electron microscope (Phillips N.V., Eindhoven, The Netherlands) equipped with a Field Emission Gun operating at 200 kV. The grid was visually assessed and several areas containing thin amorphous ice, suitable for imaging, were identified. Low-dose images were acquired in areas representative of the sample, i.e. areas containing AAV particles with minimal presence of ice contaminant and a homogenous spreading of particles in thin amorphous ice. The images were acquired at a resolution of 2048 x 2048 pixels using a TVIPS F224HD camera (Tietz Video and Image Processing Systems GmbH, Gauting, Germany).
Negative stain TEM. 400 mesh copper grids coated with a carbon film, overlayed with a Formvar 1 film (TedPella, Inc., CA, USA) were hydrophilized using a glow discharger (Pelco EasiGlow™, TedPella Inc., CA, USA). A glow discharged grid was mounted on tweezers and 3 μL of sample were placed onto the grid. After an adsorption time of ca. 10 sec, the excess of liquid present on grid was blotted-off using Grade 1 filter paper pre-wetted with 12 μL of water and immediately washed with 3 μL of distilled water. Excessive liquid was then blotted-off after ca. 10 sec and 3 μL of Uranyl acetate 2% (Electron Microscopy Sciences, PA, USA) was immediately added to the grid and blotted-off after ca. 10 sec. The grid was then inserted in a Tecnai G 2 Spirit BioTwin electron microscope (FEI, OR, USA) equipped with a tungsten filament operating at 100 kV. The grid was visually screened and areas suitable for imaging were identified. Representative images were subsequently acquired at a resolution of 2048 x 2048 pixels using a Veleta CCD camera (Olympus, Tokyo, Japan).
Image analysis
Particle detection. The acquired images were analysed using VAS (Vironova Analyzer Software, Vironova AB, Stockholm, Sweden), a 21CFR part 11 compliant validated software. The particle detection settings used are shown in S4 and S5 Tables. Briefly, potential rAAV particles within a size range of 18-28 nm and 25-35 nm for the CryoTEM and nsTEM images respectively, were detected by local ellipse detection in the gradient magnitude image. The difference in the size range used between the two analysis methods can be explained by hypotheses such as the presence of stain in nsTEM surrounding the particle, thus increasing the apparent size, by the changes in osmotic effects due to the stain, inducing a swelling of the particles or by a flattening of the AAV particles on the support in nsTEM upon embedding in the stain. A minimum of 3 images per sample were processed until at least 1500 particles were detected. Manual curation and verification of the images and detected particles were performed to remove falsely detected particles and add non-detected particles.
Particle classification. Principal component analysis (PCA) was performed on the particles' radial density profiles (RDPs), i.e. the mean intensity at each radial distance from a particle's centre. In a 2-D scatter plot of the two principal components, two separable clusters are observed, one corresponding to full particles, the other one corresponding to empty particles. A particle class ("Full" or "Empty") was attributed to each of the clusters by selecting manually the particles of one cluster on the displayed plot and assigning them the corresponding class. Particles for which the classification was not unambiguous were attributed the class "Uncertain". The corresponding data was extracted from the plot and used for the analyses. Intermediate results from the different steps of the image analysis workflow are shown in section 5 of the Supplementary information.
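To make the classification step concrete, the sketch below shows one way radial density profiles and a PCA projection can be computed for a set of particle crops. It is a minimal illustration under stated assumptions (square grayscale patches centred on each detected particle, automatic two-cluster assignment) and is not the validated VAS implementation, in which the cluster assignment is performed manually.

```python
# Sketch of RDP extraction and PCA-based full/empty clustering.
# Assumes `patches` is a sequence of 2-D grayscale crops, each centred on one
# detected particle; illustrative only, not the validated VAS workflow.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def radial_density_profile(patch, n_bins=20):
    """Mean pixel intensity as a function of distance from the patch centre."""
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - (h - 1) / 2, xx - (w - 1) / 2)
    bins = np.linspace(0, min(h, w) / 2, n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    vals = patch.ravel()
    return np.array([vals[idx == i].mean() if np.any(idx == i) else 0.0
                     for i in range(n_bins)])

def classify_particles(patches):
    profiles = np.vstack([radial_density_profile(p) for p in patches])
    scores = PCA(n_components=2).fit_transform(profiles)       # 2-D scatter
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
    # In cryoTEM the filled particles have a darker (lower-intensity) interior,
    # so the cluster with the lower mean central intensity is called "full".
    centre_intensity = np.array([profiles[labels == k][:, :3].mean() for k in (0, 1)])
    full_label = int(np.argmin(centre_intensity))
    return labels == full_label        # True = full, False = empty

# percentage_full = 100 * classify_particles(patches).mean()
```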
Characterization of the particle populations present in the specimen
Recombinant AAV vectors of serotype 8 (rAAV8) containing a gene coding for FIX were analysed by CryoTEM. Visual analysis of the rAAV preparation S2.1 images reveal that two main populations of particles can be observed in the specimen. A first population exhibits a distinct outer layer with a minute internal density, appearing as dark circles in the images, while a second population of particles displays a uniform inner density with no clear distinction between the inner core and the edges of the particles, appearing in the images as homogenous dark disks. This second population supposedly represents full particles, i.e. particles filled with material, hence the higher internal density, while the particles appearing as circles are empty particles (Fig 1a). Note that in most of the samples analysed, a small proportion of the observed particles appear as incomplete or improperly assembled (Fig 1b). Particles sharing features of both classes, which could represent intermediately filled particles, were seldom observed in this study, as shown with the high confidence interval displayed in the image analysis. As the characterization of such particles is not unambiguous, they are however classified as "uncertain" in the image analyses.
Representative sets of images were acquired for each sample and subjected to image analysis to evaluate the percentage of full particles, according to the above definitions, present in the specimen. A population of at least 1500 particles was included in each dataset, this value having been evaluated to provide robust statistical relevance (see supplementary section 7). For the experiments in which replicates were used, the mean value of the analyses is reported for each population. Note that the particles identified as broken or incomplete, representing a negligible fraction of the observed particles, were not included in the image analysis. Indeed, adding such particles to the analysis would induce a bias towards the empty population, as some putatively full particles can lose their cargo upon breakage while empty particles remain empty upon breakage. Moreover, the discrimination between broken particles and other specimen debris might lead to erroneous interpretation. Including such particles in the analysis might be relevant for the evaluation of the stability of AAV particles or of the resistance to stress of the different particle types, but this is not within the scope of this study, hence the exclusion of such particles. For the same reasons, particles appearing as broken AAV doublets or triplets were not included in the analysis. The measured biopotency was found to correlate with the percentage of full particles observed by CryoTEM. This observation underlines the reliability of the results generated by cryoTEM and emphasizes the importance of controlling the ratio of filled particles for reaching consistent biopotency at a consistent level of empty particles.
Correlation of CryoTEM results to biopotency assays and qPCR/ELISA quantification
Furthermore, good correlation of CryoTEM data with the orthogonal determination of full and empty capsids by calculation of the ratio of vector genomes and total particles was demonstrated and is shown in Fig 3. However, it should be noted that the latter approach bears the risk of inaccurate data (e.g. significant over- or underestimation of the full-to-empty ratio) due to the nature of the underlying methods (e.g. the dependency of PCR-based methods on the location of the amplification target or the dependency on reference standards). This potential inaccuracy of qPCR- and ELISA-based methods is a possible explanation for the fact that the slope of the linear regression depicted in Fig 3 is > 1.
This indicates that the vg/cp ratio finds a higher percentage of full capsids compared to cryoTEM and that this is more pronounced for preparations with a higher degree of full capsids. Another reason for this observation may be the presence of a population of AAV vectors that are detected in PCR-based techniques but not in cryoTEM. Such a population may come from AAVs packaged with incomplete transgenes of relatively small size that are therefore detected as empty by cryoTEM but still carry the PCR target sequence.
Fig 3. Correlation between CryoTEM and qPCR/ELISA. Sample Set 2 with increasing degrees of full capsids was purified from ultracentrifugation fractions. Vector genomes per capsid particle titres were calculated from qPCR and AAV8 capsid particle ELISA results and correlated to the CryoTEM data. https://doi.org/10.1371/journal.pone.0269139.g003
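As an aside, the orthogonal estimate discussed above is simply the qPCR vector-genome titre divided by the ELISA capsid-particle titre; the short sketch below fits the linear relation of this vg/cp-derived percentage against CryoTEM percentages. All numbers are illustrative placeholders, not the Sample Set 2 results.

```python
# Sketch: percent-full inferred from vg/cp titres and its linear relation to
# the CryoTEM result. All values are illustrative placeholders.
import numpy as np

vg_titre = np.array([1.5e11, 9.5e11, 1.50e12, 1.75e12, 2.00e12])  # vg/mL (qPCR)
cp_titre = np.array([2.2e12, 2.2e12, 2.20e12, 2.20e12, 2.20e12])  # cp/mL (ELISA)
cryo_full_pct = np.array([6.0, 38.0, 58.0, 68.0, 79.0])           # CryoTEM %full

vgcp_full_pct = 100 * vg_titre / cp_titre        # percent full inferred from vg/cp
slope, intercept = np.polyfit(cryo_full_pct, vgcp_full_pct, 1)
print(f"vg/cp vs CryoTEM: slope = {slope:.2f}, intercept = {intercept:.1f}")
# A slope > 1 would indicate that the vg/cp ratio reports a higher percentage of
# full capsids than CryoTEM, as discussed in the text.
```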
Repeatability and linearity
The reliability of the CryoTEM method was evaluated by performing a repeatability study. Six grids of the same sample were consecutively prepared and imaged, aiming to assess the analytical method variance under the same operating conditions over a short interval of time. The results show a relative standard deviation of 0.6% (see S2 Table), suggesting a robust repeatability of the method with a low variance. These results are comparable to the values for AUC, where a standard deviation of 0.7% corresponding to a calculated relative standard deviation of 1.8% is reported [35].
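For clarity, the relative standard deviation quoted here is the standard deviation divided by the mean, expressed in percent; the snippet below shows the calculation on hypothetical replicate values, not the actual six-grid data.

```python
# Sketch: %RSD across replicate grid preparations (hypothetical values).
import numpy as np

replicate_full_pct = np.array([79.1, 79.6, 78.9, 79.4, 79.8, 79.0])  # six grids, %full
mean = replicate_full_pct.mean()
sd = replicate_full_pct.std(ddof=1)     # sample standard deviation
rsd = 100 * sd / mean                   # relative standard deviation in percent
print(f"mean = {mean:.1f}%  SD = {sd:.2f}%  RSD = {rsd:.1f}%")
```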
The linearity of the method within a range was then evaluated to assess the ability to obtain test results which are directly proportional to the percentage of filled particles in the sample by CryoTEM. Two AAV preparations, used as reference were mixed to obtain a set of specimens with intermediate contents of full particles. AAV preparation S2.1 was used as the reference full specimen, packed with the best achievable ratio of full particles, containing 79.3 ± 0.8% of full particles and referred to as "reference full" while preparation S2.5, containing 1.17 ± 0.37% of full particles was used as the reference empty specimen, containing the lowest achievable ratio of full particles and referred to as "reference empty" (Fig 1, S1 Appendix). Since Cryo-TEM is the first validated method as per ICH Q2 (R1) guidelines, no orthogonal method was available for accuracy assessment, hence the linearity experiments are based on dilution series and the accuracy limited by dilutional accuracy.
A total of 6 grids were prepared for CryoTEM and imaged for each of the 5 samples (S2.1-S2.5) investigated. Fig 4 shows the correlation between the data obtained experimentally and the estimated theoretical values (see section 4 in Supplementary information), with the error bars corresponding to the standard deviation of each population. The method appears to display a robust linearity, with the two particle populations being detected to the same extent regardless of their respective concentration in solution, and is therefore deemed reliable within the range evaluated, i.e., at least up to a percentage of 79% of full particles present in the specimen. As the image analysis implies a binary discrimination between two well-identified populations, with no overlap of the 99% confidence intervals from the principal component analysis of the radial density profiles of the two classes, one can postulate that the linearity of this method could be extrapolated to be valid up to specimens containing exclusively full particles. This assumption, however, remains to be confirmed experimentally.
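The linearity assessment can be mirrored numerically as in the sketch below, which assumes concentration-matched reference materials and uses placeholder blend fractions and measured values rather than the exact S2.2–S2.4 scheme.

```python
# Sketch: theoretical %full of blended samples vs measured CryoTEM values, with
# a least-squares line to judge linearity. Measured values are placeholders.
import numpy as np

ref_full_pct, ref_empty_pct = 79.3, 1.17               # reference materials (from the text)
mix_fraction_full = np.array([0.0, 0.5, 0.75, 0.88, 1.0])   # hypothetical blend fractions
theoretical = mix_fraction_full * ref_full_pct + (1 - mix_fraction_full) * ref_empty_pct
measured = np.array([1.2, 40.5, 59.8, 69.5, 79.3])           # placeholder CryoTEM results

slope, intercept = np.polyfit(theoretical, measured, 1)
r = np.corrcoef(theoretical, measured)[0, 1]
print(f"slope = {slope:.3f}, intercept = {intercept:.2f}, r^2 = {r**2:.4f}")
# A slope close to 1 and r^2 close to 1 indicate that both particle populations
# are detected to the same extent over the evaluated range.
```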
Comparison between CryoTEM and nsTEM
Negative stain transmission electron microscopy (nsTEM) has been routinely used for the assessment of the ratio of filled AAV particles for more than two decades [26]. Unlike in CryoTEM, where the specimen is quickly plunge-frozen into a cryogen, allowing its conservation and observation in a near-native hydrated state, the preparation in nsTEM involves a staining step in which a salt containing electron-rich elements is added, acting both as a contrasting agent and as an embedding agent. The preparation involves drastic local osmotic and pH changes which might affect the particles' integrity, and the conservation of the prepared specimen is then done at room temperature, which might result in dehydration in areas of the support where the embedding agent was not efficiently spread. As this method is routinely used for the quantification of the percentage of full particles, it was evaluated in the early stages of the drug development of the rAAV8 particles described in this study. When assessing results obtained by nsTEM and CryoTEM, discrepancies were observed. The results presented in Fig 5 suggest that the population of particles appearing as bright with a smooth surface, typically referred to as "full" [22], is over-represented in the nsTEM images, constituting the main population in all samples analysed, with values of 98% for compound S2.6 and 100% for compound S2.5, whereas CryoTEM results show that specimen S2.1 contains 83% "full" particles while compound S2.5 contains mainly empty particles, with only 6% of the particles identified as full.
Discussions
The CryoTEM results presented herein relate to the characterization of particle populations and to the linearity and repeatability of the determination of full particles in an rAAV preparation. The proposed CryoTEM method has been validated according to the principles of Validation of Analytical Procedures, ICH Q2 (R1), as used in a GMP workflow [41], for the assessment of the percentage of full particles in an rAAV8 preparation. As no other validated methods were available at the time of the study, the assessment of the method accuracy was challenging. However, this study can now facilitate the validation of any new methodology, as it will be possible to use CryoTEM analysis as an orthogonal method.
Observations of discrepancies between nsTEM and other orthogonal methods, e.g. AUC, have been reported [29], without further insights into the origin of these disparities. One hypothesis to explain the differences observed between nsTEM and CryoTEM in this study resides in the way the particles are identified and defined. In nsTEM, the particles that display an inner density either contain stain or could be collapsed particles with stain deposited on top of them, indicating that they do not contain a gene and are therefore indeed empty, as per the commonly reported definitions. On the other hand, particles with an even density do not contain any stain and are not collapsed, indicating that they are intact, and are commonly referred to as full. However, their apparent intactness does not provide information about their content, for a particle could be intact and packed with a gene (i.e. full), or be intact but empty; with the current definitions, both would be classified as full. In contrast, in CryoTEM, no stain is added, and the specimen is preserved in a hydrated native state [45], enabling a direct correlation of the inner density of the particles with their packaging level. An AAV particle containing a gene will be electron-denser than an empty one, hence appearing darker in the image. The proposed hypothesis is therefore that, in the case of the sample analysed here by nsTEM, the evaluation of the content of full particles might be biased by an incorrect classification of the particles, resulting in an over-representation of the population of full rAAV particles and leading to significant divergences between CryoTEM and nsTEM when the particles are well embedded in stain, as observed in Fig 5. For these reasons, nsTEM was not retained as the method of choice in the case of the specimens studied herein.
The results presented in this study demonstrate the relevance of CryoTEM as a robust and reliable method for the assessment of the percentage of filled particles in a recombinant rAAV8 gene delivery drug product. The repeatability, linearity and robustness of the method demonstrates the suitability of this method as a method of choice that can be used during the different phases of the drug development process as well as validation and quality control in the production phase. Although the scope of the study was restrained to the rAAV8 serotype, available data from CryoTEM analyses of other serotypes (e.g. AAV9) have indicated that findings within the scope of this study can likely be leveraged to other serotypes as well (See S2 Fig). In the presented comparison, nsTEM proved to be a poor technique for the assessment of the content of filled particles for this sample type. Potency data are in line with the results obtained by CryoTEM, reinforcing the reliability of the method. Further studies to broaden the scope of the method to other rAAV serotypes and other types of carriers could prove valuable. More developments could also be performed using CryoTEM, exploring new approaches to allow the discrimination of several levels of particle packaging, including partially filled rAAV particles, both from a preparation and an image analysis perspective. Such advances would benefit the methods in place for the characterization of rAAV drug products. | 7,617.4 | 2022-06-03T00:00:00.000 | [
"Biology"
] |
Sub-ionospheric very low frequency perturbations associated with the 12 May 2008 M = 7.9 Wenchuan earthquake
The present study reports the VLF (very low frequency) sub-ionospheric perturbations observed on transmitter JJI (22.1 kHz), Japan, received at the Indian low-latitude station, Allahabad (geographic lat. 25.41° N, long. 81.93° E), due to the Wenchuan earthquake (EQ) that occurred on 12 May 2008 with magnitude 7.9 at a depth of 19 km in the Sichuan province of Southwest China, located at 31.0° N, 103.4° E. The nighttime amplitude fluctuation analysis gives a significant increase in fluctuation and dispersion two days before the EQ, when it crosses the 2σ criterion. However, there was no significant change observed in the amplitude trend. The diurnal amplitude variation shows a significant increase in the amplitude of the JJI signal on 11 and 12 May 2008. The gravity wave channel and changes in the electric field associated with this EQ seem to be the potential factors behind the observed nighttime amplitude fluctuation, dispersion, and significant increase in the signal strength.
Introduction
The very low frequency (VLF; 3-30 kHz) signals propagate through a waveguide formed below by the Earth's surface (ocean or ground) and above by the D region ionosphere, called the Earth-ionosphere waveguide (EIWG). The measurement of VLF signals generated by navigational transmitters has emerged as one of the reliable tools for remote detection of D region perturbations associated with earthquakes (e.g. Gokhberg et al., 1989; Hayakawa et al., 1996, 2010; Rozhnoi et al., 2007), which seems to be promising for short-term earthquake prediction. The earthquake anomalies usually happen in the D, E, and F regions and may be observed 1 to 10 days prior to the earthquake and continue up to a few days after it (Akhoondzadeh, 2012). The E and F region effects of pre-seismic activity can be investigated using ionospheric electron density variations (Akhoondzadeh, 2012; Yao et al., 2012).
The Wenchuan earthquake (EQ) occurred on 12 May 2008 at 06:28:01 UT in Wenchuan County in the Sichuan province of Southwest China, located at 31.0° N, 103.4° E, with a magnitude of 7.9 and a shallow depth of about 19 km. There were 149 to 284 major aftershocks. Tremors were felt in almost all major Asian countries. As per the tectonic observatory (http://earthquake.usgs.gov/), it occurred due to thrust faulting on the northwestern margin of the Wenchuan basin. A huge amount of energy was released by the earthquake, such that tremors were felt several thousand kilometers away from the epicenter.
There have been several studies on the effect of Wenchuan EQ on the upper ionosphere (F region) using groundand satellite-based techniques.Zhao et al. (2008) analyzed ionosonde data at Wuhan (30.5 • N, 114.4 • E) and Xiamen (24.4 • N, 123.9 • E) stations close to the earthquake's epicenter and found large enhancement in the maximum ionospheric electron density at F2 peak (N m F 2 ).They related the abnormal enhancement in N m F 2 to seismo-ionospheric signature.Liu et al. (2009) found that GPS TEC (total electron content) above the epicenter anomalously decreased in the afternoon period on days 6 to 4 before EQ and in the late evening period on day 3 before the earthquake, but enhanced in the afternoon on day 3 before the EQ.Pulinets et al. (2010) analyzed GPS TEC data from IGS network over Chinese region and found a decrease in TEC before EQ and a sharp increase just after the EQ.Xu et al. (2011a) analyzed ionosonde data from Chongqing station (29.55 • N, 106.54 • E) and found significant disturbance in f o F 2 on 9 May 2008 at 17:00 UT.The enhancement was about 67 % over the normal day values and lasted about 3 h.The French microsatellite DEMETER (The Detection of Electromagnetic Emission Transmitted from Earthquake Regions) data also had been utilized extensively to study the F region perturbations associated with Wenchuan EQ (Zhang et al., 2010;Sarkar and Gwal, 2010).Zhang et al. (2010) analyzed electron density data recorded by probes onboard DEMETER.They found reduction in electron density 3 days prior to the EQ above the northeast of the epicenter.Sarkar and Gwal (2010) estimated ion density variation and increase in vertical and horizontal component of electric field 4-8 days before the EQ.
In this paper we present the effects of the 12 May 2008 Wenchuan EQ (M = 7.9) on sub-ionospheric VLF propagation, with a transmitter-receiver great circle path (TRGCP) length of about 4800 km between the JJI (22.2 kHz) VLF transmitter and the Allahabad (geographic lat. 25.41° N, long. 81.93° E) receiving station. The nighttime amplitude fluctuation analysis method and the diurnal variation of signal amplitude show evidence that the observed sub-ionospheric VLF amplitude variations were most likely due to EQ-associated lower ionospheric changes. Our results, along with earlier results published on this EQ, indicate that the entire ionosphere was perturbed by the great Wenchuan EQ.
VLF data and analysis
The JJI (32.04° N, 130.81° E), Japan, VLF transmitter amplitude and phase were recorded with the Stanford University-developed AWESOME VLF receiver (Singh et al., 2010) installed at a quiet location near Allahabad (geographic lat. 25.41° N, long. 81.93° E; geomagnetic lat. 16.05° N, long. 153.70° E), India. For the present study only the signal amplitude has been used, as JJI is a phase-unstable transmitter. Figure 1 illustrates the location of the JJI transmitter, the receiving station Allahabad, the TRGCP, the earthquake epicenter and the wave-sensitive area (fifth Fresnel zone). The epicenter is about 87 km off the TRGCP. A total of 22 days of data (20 April 2008 to 16 May 2008) have been used. Due to a technical problem, the data after 5 days from the EQ and from 24-28 April could not be recorded.
We have used the nighttime fluctuation (NF) and terminator time (TT) methods to investigate the seismo-ionospheric effects of this EQ, as used in various previous works (e.g. Shvets et al., 2004; Hayakawa et al., 1996). In the TT method attention is paid to the time of the terminator (around sunrise and sunset in local time) and time shifts near the terminator time are analyzed before and after the EQ (Hayakawa et al., 1996). The TT method is more effective for VLF signals propagating in the east-west meridian plane and for short propagation paths (≤1000 km) (Hayakawa, 2007). In our JJI narrowband VLF data, we did not observe any significant change in the terminator times (as evident from the daily amplitude variation plots in Fig. 2), probably due to the long propagation path of ∼4800 km and the time difference of 3.5 h between India and Japan local times. The NF method has been well explained by many researchers (e.g. Shvets et al., 2004; Hayakawa et al., 2010) and is suitable for studying EQ effects on long propagation paths (>1000 km). In this method particular attention is given to the data during the local nighttime portion of 19:00-03:00 LT at the receiving station, during which the entire path (transmitter-receiver great circle path) is in darkness, and the mean nighttime amplitude, dispersion and nighttime fluctuation are estimated. We have used the VLF amplitude for a local night (about 8 h from 19:00 LT to 03:00 LT, to avoid the terminator effect in the VLF data) and estimated the difference dA(t) for a particular day as dA(t) = A(t) − <A(t)>, where A(t) is the VLF amplitude at time t on that particular day and <A(t)> is the average value at the same time for the 22 days from 20 April-16 May 2008. We have estimated three parameters, as defined by Hayakawa et al. (2010), using the difference dA(t): (1) trend (T), the average of the nighttime amplitude difference dA(t) for each day; (2) dispersion (D), the standard deviation (SD) of the nighttime amplitude difference dA(t) for each day; and (3) nighttime fluctuation (NF), the sum of (dA(t))² over the relevant night hours, which gives one datum for each day. We have also used an additional statistical analysis, as suggested by Hayakawa et al. (2010), where normalized values of trend (T), dispersion (D) and fluctuation (F), namely normalized trend (NT*), normalized dispersion (ND*) and normalized fluctuation (NF*), are calculated to avoid variability between different propagation paths. The normalization is defined as follows: for an EQ on a particular date, we estimate the trend on this day and then calculate the average <trend> over ±15 days around this date. The normalized trend is NT* = (trend − <trend>)/σ_T, where σ_T is the standard deviation for the 21 selected days. In the same way, the normalized dispersion (ND*) and normalized fluctuation (NF*) are calculated.
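A compact numerical form of the nighttime fluctuation analysis described above is sketched below; it assumes amp is a (days × nighttime samples) array of VLF amplitudes restricted to 19:00-03:00 LT, and the variable and function names are ours, not those of any analysis code used in the original study.

```python
# Sketch of the nighttime fluctuation (NF) analysis: trend, dispersion and
# fluctuation per night, plus normalized counterparts and a 2-sigma criterion.
# `amp` is assumed to be a (n_days, n_samples) array of nighttime VLF amplitudes.
import numpy as np

def nf_analysis(amp):
    mean_profile = amp.mean(axis=0)       # <A(t)>: average over all days at each time t
    dA = amp - mean_profile               # dA(t) = A(t) - <A(t)> for each day
    trend = dA.mean(axis=1)               # T: nightly average of dA(t)
    dispersion = dA.std(axis=1)           # D: nightly standard deviation of dA(t)
    fluctuation = (dA ** 2).sum(axis=1)   # NF: sum of (dA(t))^2 over the night
    return trend, dispersion, fluctuation

def normalize(series):
    """Normalized parameter over the analysis window, e.g. NT* = (T - <T>)/sigma_T."""
    return (series - series.mean()) / series.std(ddof=1)

# trend, dispersion, fluctuation = nf_analysis(amp)
# anomalous_days = np.flatnonzero(normalize(fluctuation) > 2)   # exceeds 2-sigma criterion
```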
Amplitude perturbations
Sequential plot of daily amplitude variations for JJI signal received at Allahabad for 24 h (in LT) from 20 April to 16 May 2008 is shown in Fig. 2. It can be seen from Fig. 2 that there was no clear terminator time variation (no significant shift in morning and evening terminator time), but the daily amplitude variations show a clear enhancement in the amplitude on 11 and 12 May 2008.Enhancement in the amplitude on 11 and 12 May was ∼4 and 5.5 dB, respectively, above the 20-day average (excluding 11 and 12 May) and also above the standard deviation as shown in Fig. 3. On 11 and 12 May, enhancements started same time around 14:00 LT and came to normal level at around 17:00 LT on 11 May, but it was full day above the standard deviation on 12 May.The enhancement in the VLF amplitude was most likely due to additional ionization (basically increase in electron concentration) produced by this earthquake in the lower ionosphere.We have looked for four possible mechanisms for electron density enhancement in the D region and hence the enhancement in JJI VLF amplitude: solar flares (Zigman et al., 2007), geomagnetic storm (Peter et al., 2006), lower ionospheric heating due to lightning discharges causing early/fast VLF events (Inan et al., 1996), and seismic origin due to 12 May 2008 EQ.We have examined details of all possible events; solar flare events are of few minute duration whose effect can be easily identified in the VLF amplitude data.We checked and found that there were no flare events during period of data considerations (http://www.spaceweather.com/).The geomagnetic storms can have considerable effect on the lower ionosphere, which can last up to 1-2 days after onset of geomagnetic storm.We have examined geomagnetic conditions from 20 April-20 May 2008 by looking at Dst and Kp index values (http://wdc.kugi.kyoto-u.ac.jp/).The geomagnetic conditions were quiet (Dst < −20 nT) except during 23-25 April when Dst reached −40 nT.Kp values during above period were low except on 23 April and 2 May when it crossed level 4. Thus there was no significant geomagnetic activity to affect the lower ionosphere during period of the VLF data presented, so geomagnetic activity effect is ruled out.Early/fast VLF events are of small durations (from few tens to hundred seconds) (Inan et al., 1996), so their effect can also be distinguished easily and cannot be the reason for observed long time-scale amplitude enhancement (Helliwell et al., 1973).The most likely reason for amplitude enhancement is seismo-ionospheric origin caused by 12 May 2008, Wenchuan EQ.The most reasonable mechanism for observed enhancement in the amplitude is the quasi-static electric field, which can modify lower ionospheric properties.Electric field generated due to different processes during EQ preparation can penetrate into the ionosphere and can modify the ionospheric properties (Kim et al., 2002;Pulinets and Boyarchuk, 2004).But it is still an open question of how the anomalous electric field on the tectonic faults penetrates into the ionosphere.Recently, Pulinets (2009) explained coupling process in terms of global electric circuit, which provides a reasonable explanation of the existence of an up/downward vertical atmospheric electric field between the ground and ionosphere.This quasielectrostatic electric field hypothesis is supported by observations of electric field perturbations before Wenchuan EQ (Sarkar and Gwal 2010;Pulinets et al., 2010;Xu et al., 2011b).Pulinets et al. 
(2010) have observed anomalies in GPS TEC, changes in the position and shape of the equatorial anomaly crest and variations in vertical electron density profiles before and after the Wenchuan EQ. They concluded, on the basis of a proper physical model, that the observed anomalies were caused by an additional electric field (zonal and meridional) generated during the Wenchuan EQ preparation and following the EQ occurrence. Xu et al. (2011b) have estimated an enhancement of about ∼2 mV m−1 in the electric field in the F2 region of the ionosphere on 9 May 2008, 3 days prior to the EQ, from five low-latitude, ground-based ionosondes around the epicenter. They also observed an anomalous electric field of ∼1000 V m−1 on the ground over tectonic faults, which was 10 times higher than the fair-weather ground electric field. Fuks et al. (1997) suggested that anomalies in the VLF phase and amplitude of signals passing over a seismo-active region are due to an increase in lower ionospheric conductivity, which is caused by an electric field increase before an EQ. The proposed mechanism to explain the amplitude enhancement points towards the chemical channel (quasi-static electric field effect) of the lithosphere-atmosphere-ionosphere coupling process (Hayakawa et al., 2010), which suggests that the atmospheric electric field generated on or near the ground surface during earthquake preparation can cause significant ionospheric anomalies. Figure 4a, b and c show trend, fluctuation and dispersion, and Fig. 4d, e and f show normalized trend, normalized fluctuation and normalized dispersion; the horizontal line in each panel shows the two standard deviation (2σ) criterion used to define an anomalous day. The nighttime fluctuation analysis shows that the trend and normalized trend (Fig. 4a and d) approached the 2σ_T criterion line on the EQ day (12 May) but did not exceed 2σ_T. The fluctuation and normalized fluctuation (Fig. 4b and e) exhibit a significant increase exceeding the 2σ_NF criterion 2 days before the EQ (10 May). Figure 4c and f show a significant increase in dispersion and normalized dispersion on 23 April and 10 May, i.e. 19 and 2 days before the EQ (exceeding the 2σ_D criterion). Generally, the VLF/LF anomalies take place 5 to 7 days (approximately 1 week) before the earthquake (Hayakawa et al., 2010), and in our case the anomaly was observed 19 and 2 days before the EQ. As each EQ is different and seismo-ionospheric study is in its developing stage, it is worth comparing our results with previous case studies of individual EQs and with statistical studies on many years of earthquake data. As a case study we can mention the work by Horie et al. (2007), who studied the effect of the great Sumatra EQ of 26 December 2004 with magnitude M = 9.0 and at a depth of 30 km. They applied the nighttime amplitude fluctuation analysis method to the NWC-Japan VLF propagation path and observed significant enhancements in fluctuations 4 days before the EQ. Our results also show a similar enhancement in amplitude fluctuations, but 2 days before the EQ. Recently, a statistical study using a long period of data (7 yr) was done by Hayakawa et al. (2010). They applied the nighttime fluctuation analysis method and concluded that for shallow EQs (depth < 40 km) the normalized trend showed a significant decrease before the EQ, whereas the normalized fluctuation and dispersion showed a significant increase before the EQ. Our analysis for the Wenchuan EQ, which occurred at a shallow depth of 19 km, also shows similar results for amplitude fluctuation and dispersion (significant increase) but no significant change in trend. The observed VLF propagation anomalies in trend, fluctuation and dispersion have been explained in terms of the acoustic and gravity wave channel, also called the atmospheric gravity wave (AGW) channel, by many previous researchers (Molchanov et al., 2001; Shvets et al., 2004; Hayakawa et al., 2010); these waves are generated near the seismo-active region and travel up to the ionosphere, modifying the ionospheric properties.
Nighttime fluctuation method
The explanation of the diurnal amplitude perturbation and the nighttime fluctuation analysis results indicates that a quasi-static electric field and AGWs seem to be associated with this earthquake, and we suggest that, in order to understand the lithosphere-atmosphere-ionosphere coupling process, one should consider both the chemical and the AGW channels.
Summary
The Wenchuan EQ that occurred on 12 May 2008, with magnitude 7.9 and depth 19 km, was one of the most devastating EQ events in recent years. Because of its high magnitude and shallow depth, it had an effect on the upper ionosphere, as reported by recent studies using ionosonde, GPS and satellite data (Zhao et al., 2008; Liu et al., 2009; Sarkar and Gwal, 2010). Our results using the VLF sub-ionospheric technique show evidence of lower ionospheric (D region) perturbations associated with this earthquake during its preparation time. The mechanism of seismo-ionospheric coupling is still under development. We have tried to explain the observed results for this earthquake in light of two plausible coupling mechanisms: the chemical channel (quasi-static electric field) and the acoustic and gravity wave (atmospheric gravity wave) channel. Our results, along with earlier published results on the seismo-ionospheric effects of this earthquake, support both hypotheses for understanding the lithosphere-atmosphere-ionosphere coupling process. Thus sub-ionospheric VLF/LF signal monitoring, combined with ground-based and satellite-based observations of ionospheric disturbances associated with seismo-electric signals, has the potential to become a powerful future tool for earthquake monitoring and forecasting.
Fig. 1 .
Fig. 1. The great circle path (GCP) between the JJI VLF transmitter, Japan, and the Allahabad AWESOME receiving station, India. The concentric circles show the effects of the Wenchuan EQ on the surrounding region, with decreasing EQ intensity as one moves away from the epicenter. The wave-sensitive area defined by the 5th Fresnel zone is also plotted.
Fig. 2 .
Fig. 2. Sequential plot of 24 h amplitude data of the JJI VLF transmitter, Japan, observed at Allahabad, India. Vertical arrows indicate morning and evening terminator times. The circle on 11 and 12 May 2008 indicates the unusual increase in the VLF amplitude.
Fig. 3 .
Fig. 3. Comparison of the 11 and 12 May 2008 VLF amplitude enhancement with the average of 20 days. The arrow indicates the EQ timing in UT and LT on 12 May 2008.
Figure
Figure 4a, b and c show trend, fluctuation and dispersion, and Fig. 4d, e and f show normalized trend, normalized fluctuation and normalized dispersion. The horizontal line in each panel shows the two standard deviation (2σ) criterion used to define an anomalous day. | 4,356.8 | 2013-09-24T00:00:00.000 | [
"Environmental Science",
"Geology"
] |
Unsupervised Learning for Stereo Matching Using Single-View Videos
This paper proposes an unsupervised approach to constructing a deep-learning-based stereo matching method using single-view videos (SMV). From videos, a set of corresponding points is computed between images, and image patches centered at the computed points are extracted. Negative and positive samples constitute a dataset used to train a similarity network that is then used as a matching cost function. In addition, we propose a local-global matching cost network that exploits the feature maps of the first layer (local features) together with the feature maps of the last layer (global features) as the output features of the network. The concatenated features are passed to fully connected layers, and the network outputs a similarity measure of an image patch pair as a matching cost. Computed matching costs are aggregated using semi-global matching and cross-based cost aggregation, followed by sub-pixel interpolation, a left-right consistency check, and median and bilateral filtering. We evaluate the proposed stereo matching method using popular stereo matching datasets, including KITTI 2012, KITTI 2015, and Middlebury, and submit the disparity maps to their benchmark servers to evaluate the performance of SMV. We also compare the generalization of SMV and baseline methods using the training sets of the three datasets. The benchmark results show that SMV is the most accurate among unsupervised approaches, and it even outperforms several supervised deep-learning-based stereo matching methods. The generalization results show that SMV is comparable with the baseline method, MC-CNN, which is trained with supervision.
I. INTRODUCTION
Stereo matching aims to reconstruct 3D information from stereo images. Given the left and right images, a stereo matching method estimates a disparity map, in which pixel intensities indicate the depth from the cameras to the objects containing the considered pixels. Figure 1 illustrates stereo matching.
Stereo matching has been intensively researched for several decades because of its important applications in self-driving cars, 3-D reconstruction, view interpolation, and robot navigation [1], [2]. Scharstein and Szeliski [3] conducted an excellent survey of stereo matching methods and divided them into local and global methods. Local stereo matching methods normally include matching cost computation, cost aggregation, and disparity computation steps, whereas global correspondence methods typically consist of matching cost computation and disparity optimization steps. Disparity refinement, such as sub-pixel interpolation via parabolic fitting, a left-right consistency check [4], and image filtering, can be used to improve the quality of the disparity map.
Zbontar and LeCun [5] proposed the first deep learning-based stereo matching cost, which exploits a convolutional neural network. The matching costs are processed by cross-based cost aggregation (CBCA) [6] and semi-global matching (SGM) [7], followed by post-processing techniques including sub-pixel interpolation, a left-right consistency check, and median and bilateral filtering.
Since the dawn of deep neural networks, many deep learning-based methods have been proposed for matching cost computation [8]-[11], cost aggregation [12], and post-processing [13]. Other works [14]-[19] proposed stereo matching methods that unify deep learning-based components and are trained in an end-to-end fashion. Recently, disparity confidence methods [20]-[23] have been introduced to improve the performance of stereo matching methods.
However, current deep learning methods require domain data for training. Supervised methods require left and right stereo images together with ground truth, whereas unsupervised training methods require just left and right stereo images. This paper proposes training a matching cost network without requiring domain data. Corresponding image patches are extracted from single-view videos and subsequently employed as training data. Collecting stereo matching datasets for different situations is not an easy task; therefore, our approach makes it easier to construct a stereo matching method.
In this paper, we propose an approach to learn a matching cost network from videos. From single-view videos, feature matching points between frames are computed, and image patches around the matching points are extracted to build a dataset of corresponding patches. This dataset is then used as training data. In addition, we propose a local-global matching cost network that takes advantage of local features from the first layer.
The contributions of this paper are as follows: • This paper proposes an approach to train a matching cost network using single-view videos. This approach requires neither stereo images nor ground truth.
• A local-global matching cost network is proposed to exploit the benefit of using the first layer, which can extract features similar to those of local binary patterns.
II. RELATED WORK
Traditional matching cost functions include the sampling-insensitive measure (SI) [24], the absolute difference (AD), and the squared difference (SD). These traditional functions assume that corresponding pixels between stereo images have the same intensity values. Therefore, they perform poorly when the stereo images are radiometrically distorted. In many cases, intensity changes between stereo images are monotonically nonlinear, so the order of the intensity values is preserved. Matching cost functions that exploit ordinal values rather than raw intensities can tolerate this kind of intensity transformation. These matching cost functions include the rank and census transforms [25], the support local binary pattern (SLBP) [26], the fuzzy encoding pattern [27], and the soft rank transform [28].
Han et al. proposed a gradient-based matching cost function [29]. Scharstein et al. [30] introduced a gradient-based measure that can operate under differences in camera gain and bias. Wei et al. [31] proposed an intensity- and gradient-based matching method using hierarchical Gaussian basis functions. Zhou and Boulanger [32] introduced a Gaussian-weighted sum of absolute differences based on relative gradients. Pinggera et al. [33] proposed dense gradient features for cross-modal stereo.
Mutual information can tolerate any global intensity changes and has been exploited as a matching cost function in stereo matching. Kim et al. [34] proposed a pixel-wise matching cost for stereo matching based on mutual information. Hirschmuller [7] introduced a stereo matching method based on semi-global matching and mutual information. Heo et al. [35] introduced a stereo matching method whose matching cost function combines mutual information with the SIFT descriptor [36] in log-chromaticity color space.
Heo et al. [37] proposed adaptive normalized cross-correlation (ANCC), which is an improved version of normalized cross-correlation (NCC) and is invariant to radiometric distortion. RANCC [38] is an improvement of ANCC that accounts for the effect of texture and noise on image regions. Dinh et al. [39] proposed a matching cost measure to address the non-linear intensity transformation of pixels between image patches.
A recent approach to computing the matching cost is to use a convolutional neural network to predict the matching value for a patch pair. Reference [5] introduced a convolutional neural network that is trained to measure the similarity of a patch pair. Reference [8] proposed a deep embedding model to predict the matching cost, which explicitly maps intensity values into an embedding feature space to estimate pixel dissimilarities. References [5] and [8] need stereo images and ground truth for training.
Reference [9] proposed a fast matching cost network that uses a product layer in a siamese architecture. Reference [10] proposed an unsupervised approach to estimate the matching cost by exploiting a left-right consistency check to guide the training process. Reference [11] proposed a weakly supervised technique for training patch similarity that uses properties of the optical sensor and rough scene knowledge. Li and Yuan [62] introduced an unsupervised, occlusion-aware stereo matching method. Joung et al. [63] proposed a stereo matching method that is trained in an unsupervised manner using confidential correspondence consistency. Tonioni et al. [61], [64] introduced stereo matching methods for domain adaptation using stereo images without ground truth.
The output of the matching cost computation step is a matching cost space C, in which C_d(p) is the matching cost value of a pixel p in the reference image (e.g., the left image of a stereo pair) at a disparity hypothesis d. From C, a disparity value for p can be obtained by using a winner-takes-all strategy, as follows: D_E(p) = argmin_d C_d(p), where D_E is the estimated disparity map. Applying a winner-takes-all strategy is the simplest way to obtain a dense disparity map.
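As an illustration of the winner-takes-all selection described above, the following is a minimal NumPy sketch; the (D, H, W) layout of the cost volume is an assumption made for this example, not a detail taken from the paper.

```python
import numpy as np

def winner_takes_all(cost_volume: np.ndarray) -> np.ndarray:
    """Pick, for every pixel p, the disparity hypothesis d with the lowest
    cost C_d(p). cost_volume is assumed to have shape (D, H, W)."""
    return np.argmin(cost_volume, axis=0)

# Toy usage: 64 disparity hypotheses over a 4 x 5 reference image.
costs = np.random.rand(64, 4, 5)
disparity_map = winner_takes_all(costs)   # shape (4, 5), values in [0, 63]
```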
III. SMV
A. DATASET CONSTRUCTION FROM VIDEOS
In this subsection, we present an approach to construct a dataset from videos, which is then used to train a matching cost network. Given a video, we extract two frames. To reduce the scene correlation between frames, the two selected frames should not be consecutive in the video. We use SIFT to compute corresponding points between the frames, as shown in Fig. 2(a). For each pair of corresponding points, we extract image patches whose center pixels are the corresponding points, as shown in Fig. 2(b). According to [45], challenges in stereo matching include textureless regions, occlusion, illumination variations, snow, sun, rain, etc. Therefore, the extracted patches are processed to simulate these challenges. Each patch undergoes a pipeline of common image transformations, such as rotation, translation, elastic distortion, noise addition, and brightness and contrast changes, as shown in Fig. 3.
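The correspondence and patch-extraction step can be sketched with OpenCV as below; the grayscale input, the ratio-test threshold, and the 11 × 11 patch size used here are illustrative assumptions rather than the exact settings of the paper.

```python
import cv2

def matched_patches(frame_a, frame_b, patch_size=11, ratio=0.7):
    """Compute SIFT correspondences between two grayscale frames and extract
    the image patches centred at the matched keypoints."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(frame_a, None)
    kp_b, des_b = sift.detectAndCompute(frame_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    half, pairs = patch_size // 2, []
    for m, n in matcher.knnMatch(des_a, des_b, k=2):
        if m.distance < ratio * n.distance:            # Lowe's ratio test
            xa, ya = map(int, kp_a[m.queryIdx].pt)
            xb, yb = map(int, kp_b[m.trainIdx].pt)
            pa = frame_a[ya - half:ya + half + 1, xa - half:xa + half + 1]
            pb = frame_b[yb - half:yb + half + 1, xb - half:xb + half + 1]
            if pa.shape == pb.shape == (patch_size, patch_size):
                pairs.append((pa, pb))
    return pairs
```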
Brightness and contrast adjustment changes the brightness and contrast by setting the image patch P to P ← P · contrast + brightness, where addition and multiplication are element-wise operations. Rotation rotates the patch by rotation degrees, whereas translation translates the patch in the vertical direction by translation. Scaling scales the patch by scaling, and shearing shears the patch in the horizontal direction by shearing.
Elastic distortion [40] is commonly used to generate images that are plausible and label-preserving in classification. Elastic distortion distorts an image patch according to the intensity of transformation ED_alpha and the smoothness of transformation ED_sigma. Noise block addition adds a block of random values to an image patch; the position of the block is selected randomly. Foreshortening is inspired by the different viewpoints of stereo cameras. In foreshortening, we first crop the left or right side of an image patch according to cropping and p_lr, and then the cropped patch is resized to the same size as the original patch. Fig. 4 shows an illustration of elastic distortion and left and right foreshortening for an input patch.
To prepare training data of positive and negative examples, each image patch extracted from an image is passed through the transformation pipeline twice with different random parameter settings. The two transformed patches form a synthesized pair of corresponding image patches (a positive example). The negative example is created by extracting a new image patch that is a distance data_distance away from the considered image patch.
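A compressed sketch of how one training example could be assembled is given below; only brightness/contrast adjustment and noise-block addition are shown, and the parameter ranges, patch size, and offset direction of the negative patch are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(patch):
    """Random brightness/contrast change plus an additive noise block
    (a stand-in for the full pipeline of rotation, translation, elastic
    distortion, foreshortening, etc.)."""
    contrast, brightness = rng.uniform(0.8, 1.2), rng.uniform(-10, 10)
    out = patch.astype(np.float32) * contrast + brightness   # element-wise
    h, w = out.shape
    bh, bw = max(h // 4, 1), max(w // 4, 1)
    y, x = rng.integers(0, h - bh + 1), rng.integers(0, w - bw + 1)
    out[y:y + bh, x:x + bw] += rng.normal(0.0, 5.0, size=(bh, bw))
    return np.clip(out, 0, 255)

def make_example(image, center, patch_size=11, data_distance=20):
    """One positive pair (two transformed crops of the same patch) and one
    negative patch taken data_distance pixels to the right of the centre."""
    half, (y, x) = patch_size // 2, center
    base = image[y - half:y + half + 1, x - half:x + half + 1]
    positive = (random_transform(base), random_transform(base))
    negative = image[y - half:y + half + 1,
                     x - half + data_distance:x + half + 1 + data_distance]
    return positive, negative
```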
B. LOCAL-GLOBAL MATCHING COST NETWORK
We propose a local-global matching cost network that exploits the first convolution layer, as shown in Fig. 4. The first convolution layer extracts low-level, edge-like features of an image patch, and each convolution kernel in this layer typically extracts a different feature. The feature maps of the last layers are considered global features that capture high-level characteristics of the image patch.
In stereo matching, hand-crafted feature extractors, such as census, rank, and SLBP, have been successfully applied to stereo images under different conditions. Each of these feature extractors is designed to obtain different, highly discriminative features.
The feature maps of the first convolution layer are somewhat similar to the output of these hand-crafted feature extractors, and can even extract a larger number of features, because the number of feature maps is set (e.g., to 32 or 64) and the features are computed automatically.
As a result, our idea is to combine the local features (feature maps of the first convolutional layer) and the global features (output of the last layer) to increase the discriminative power. Fig. 4 shows the architecture of the proposed multi-patch matching cost network. The sub-networks consist of a number of convolution layers, each followed by a rectified linear unit (ReLU) layer. The resulting four vectors are concatenated and forward-propagated through a series of fully connected layers, each followed by a ReLU. The final output of the network is fed to a sigmoid activation function to produce a similarity score between the input patches. The binary cross-entropy loss is used for training. Let x denote the output of the network for one training example and y denote the class of that training example; y = 1 if the example belongs to the positive class and y = 0 if it belongs to the negative class. The binary cross-entropy loss L for that example is defined as L = -[y log(x) + (1 - y) log(1 - x)]. The hyperparameters of the proposed network are the number of fully-connected layers (num_fc_layers), the number of units in each fully-connected layer (num_fc_units), the number of feature maps in each layer (num_fmaps), the number of convolutional layers (num_clayers), the size of the convolution kernels (ckernel_size), and the size of the input patch (input_patch_size).
The hyperparameters of the aggregation and post-processing methods include cbca_distance, cbca_num_iters_1, and cbca_num_iters_2, which denote the similarity threshold for pixel intensities, the number of iterations of cross-based cost aggregation before SGM, and the number of iterations of cross-based cost aggregation after SGM, respectively. sgm_P1, sgm_P2, sgm_Q1, and sgm_Q2 stand for the first smoothness parameter of SGM, the second smoothness parameter of SGM, a factor 1 used for changing sgm_P1/sgm_P2, and a factor 2 used for changing sgm_P1/sgm_P2, respectively. sgm_V and sgm_D denote the reduction of sgm_P1 by a factor of sgm_D when considering the vertical direction, and the pixel intensity threshold for changing sgm_P1/sgm_P2. Finally, blur_sigma and blur_threshold stand for the standard deviation and the threshold of a post-processing filter.
In this paper, we set 11 × 11 image patches as input to the network. The first convolutional layer is used to extract feature maps from the input patches, which are then considered as local image features. The five convolutional layers use 3 × 3 kernels and 112 feature maps. A 224-length vector is formed by concatenating the two 112-length feature vectors. After that, the 224-length vector is passed through three fully-connected layers with 384 units each. The final fully-connected layer projects the output to a single number, the similarity score. The matching cost is simply the negative of the similarity score.
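A minimal PyTorch sketch of one plausible reading of this architecture is shown below. The paper is not fully explicit about how the first-layer (local) maps are reduced to a 112-length vector or how the four vectors of the two branches are combined, so the average pooling and the 4 × 112 concatenation used here are assumptions.

```python
import torch
import torch.nn as nn

class LocalGlobalBranch(nn.Module):
    """One siamese branch: five 3x3 conv layers with 112 feature maps.
    For an 11x11 input, the last layer output is 1x1, giving a 112-length
    global vector; the first-layer maps are average-pooled to a 112-length
    local vector (the pooling choice is an assumption)."""
    def __init__(self, fmaps=112):
        super().__init__()
        self.first = nn.Sequential(nn.Conv2d(1, fmaps, 3), nn.ReLU())
        self.rest = nn.Sequential(
            *[m for _ in range(4) for m in (nn.Conv2d(fmaps, fmaps, 3), nn.ReLU())])
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        local_maps = self.first(x)                      # (N, 112, 9, 9)
        global_vec = self.rest(local_maps).flatten(1)   # (N, 112)
        local_vec = self.pool(local_maps).flatten(1)    # (N, 112)
        return local_vec, global_vec

class LocalGlobalMatcher(nn.Module):
    def __init__(self, fc_units=384, num_fc=3):
        super().__init__()
        self.branch = LocalGlobalBranch()
        layers, in_dim = [], 4 * 112     # local + global for each input patch
        for _ in range(num_fc):
            layers += [nn.Linear(in_dim, fc_units), nn.ReLU()]
            in_dim = fc_units
        layers.append(nn.Linear(in_dim, 1))
        self.head = nn.Sequential(*layers)

    def forward(self, left_patch, right_patch):
        feats = [*self.branch(left_patch), *self.branch(right_patch)]
        return torch.sigmoid(self.head(torch.cat(feats, dim=1)))  # similarity

# The matching cost for a patch pair is the negative of this similarity score;
# training uses nn.BCELoss() on the sigmoid output.
```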
C. COST AGGREGATION AND POST-PROCESSING METHODS
The outcome of the local-global matching cost network is a matching cost space that is then aggregated and post-processed to produce the final disparity map. We follow the pipeline introduced in [41] (used later by MC-CNN [46]), as shown in Fig. 5. The pipeline uses CBCA and SGM to aggregate the matching costs, followed by sub-pixel interpolation, a left-right consistency check to detect invalid pixels, and median and bilateral filtering. Similar to MC-CNN, we use CBCA before and after SGM.
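Two of the post-processing steps named above, the left-right consistency check and median filtering, can be sketched as follows; the invalid marker value and the one-pixel disagreement threshold are assumptions, and CBCA/SGM themselves are omitted.

```python
import numpy as np
from scipy.ndimage import median_filter

def left_right_consistency(disp_left, disp_right, max_diff=1):
    """Mark pixels whose left and right disparities disagree as invalid (-1).
    For a left-image pixel (y, x), the matching right-image pixel is assumed
    to be (y, x - disp_left[y, x])."""
    h, w = disp_left.shape
    out = disp_left.astype(np.float32).copy()
    ys, xs = np.mgrid[0:h, 0:w]
    xr = xs - disp_left.astype(int)
    valid = xr >= 0
    agree = np.zeros((h, w), dtype=bool)
    agree[valid] = np.abs(disp_left[valid] -
                          disp_right[ys[valid], xr[valid]]) <= max_diff
    out[~agree] = -1          # invalid pixels, to be interpolated later
    return out

def median_smooth(disparity, size=3):
    """Simple median filtering of a disparity map."""
    return median_filter(disparity, size=size)
```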
IV. EXPERIMENTAL RESULTS
We evaluated the proposed stereo matching method using KITTI 2012, 2015, and Middlebury datasets. We uploaded the results for the three datasets to their online benchmark servers.
To evaluate the generalization performance of the tested stereo matching methods, we used different datasets for the training and testing steps and compared SMV with MC-CNN, AD, and Census. All the tested methods use the same pipeline of cost aggregation and post-processing methods. We followed the parameter settings in [46] for MC-CNN, AD, and Census.
For the proposed matching cost network, we used a grid search to select the parameter setting, using a mixed dataset constructed from the KITTI and Middlebury training datasets. For each parameter, we first estimated a feasible range and a step size for the grid search. After that, we chose the parameter setting that had the best performance on the mixed dataset. Table 1 shows the parameter setting for the proposed stereo matching method; these parameters were fixed for all of our experiments.
We used the Cityscapes video datasets [42] for training the proposed matching cost network. Specifically, we used three single-view sequences (stuttgart_00, stuttgart_01, stuttgart_02), which together include about 2900 images at 2048 × 1024 resolution. Let i be the frame index of a video. We use the image pair I_i and I_{i+2} to compute corresponding points using SIFT. In total, about 12.5 million point pairs are detected, and hence about 25 million sample patches (including positive and negative samples) are extracted.
TABLE 2. KITTI 2012 benchmark results in error rate (%) for SMV. Out-Noc is the percentage of erroneous pixels in non-occluded regions, and Out-All is the percentage of erroneous pixels in total. Avg-Noc is the average disparity / end-point error in non-occluded regions, and Avg-All is the average disparity / end-point error over all regions.
We used stochastic gradient descent to optimize the cross-entropy loss when training the proposed network. The network was trained for 22 epochs with the learning rate initially set to 0.003 and decreased by a factor of 10 at the 18th epoch. The training dataset was shuffled before each epoch, and the batch size was set to 128.
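Under the stated schedule, the training loop could look roughly like the sketch below; the (left patch, right patch, label) sample layout and the momentum-free SGD are assumptions.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs=22, base_lr=0.003, decay_epoch=18):
    """SGD on the binary cross-entropy loss: lr = 0.003, divided by 10 from
    the 18th epoch on, batch size 128, data reshuffled every epoch."""
    loader = DataLoader(dataset, batch_size=128, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr)
    loss_fn = torch.nn.BCELoss()
    for epoch in range(1, epochs + 1):
        lr = base_lr / 10 if epoch >= decay_epoch else base_lr
        for group in optimizer.param_groups:
            group["lr"] = lr
        for left, right, label in loader:   # assumed sample layout
            optimizer.zero_grad()
            pred = model(left, right).squeeze(1)
            loss = loss_fn(pred, label.float())
            loss.backward()
            optimizer.step()
```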
Disparity maps were evaluated using the average proportion of erroneous pixels over all regions except occlusions. We used the KITTI error threshold (th = 3 pixels) and the Middlebury error threshold (th = 1 pixel). The error rate (%) was calculated as Err = (100 / |I_nocc|) Σ_{p ∈ I_nocc} [|D_E(p) − D_G(p)| > th], where I_nocc is the set of all non-occluded pixels, |I_nocc| is the number of pixels in I_nocc, [·] is 1 when its condition holds and 0 otherwise, and D_G(p) and D_E(p) are the ground truth and estimated disparity at p, respectively.
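The error-rate computation is straightforward to express in NumPy; a boolean mask of the non-occluded pixels is assumed as input.

```python
import numpy as np

def error_rate(d_est, d_gt, nocc_mask, th=3):
    """Percentage of non-occluded pixels whose absolute disparity error
    exceeds th (th = 3 for KITTI, th = 1 for Middlebury)."""
    errors = np.abs(d_est[nocc_mask] - d_gt[nocc_mask]) > th
    return 100.0 * errors.mean()
```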
A. QUANTITATIVE RESULTS USING STEREO MATCHING BENCHMARKS
The KITTI 2012 and 2015 datasets [43], [44] include outdoor stereo images with sparse ground truth (approximately 50% of the pixels). The KITTI 2012 dataset has 194 stereo pairs for training and 195 stereo pairs for testing, and the KITTI 2015 dataset provides 200 stereo images for training. Middlebury provides indoor stereo images with dense ground truth.
Since the KITTI and Middlebury servers limit the number of submissions, we used the servers to evaluate the results only for the complete version of the proposed stereo matching method. Tables 2, 3, and 4 show the results of SMV on the KITTI 2012, KITTI 2015, and Middlebury benchmarks, respectively. The proposed stereo matching method significantly outperformed the SGM and ELAS methods, which are considered baseline methods for the traditional stereo matching approach. In addition, for all three benchmarks, the proposed stereo matching method performed better than several deep learning-based stereo matching methods, even though it was constructed without using a single stereo pair. Figs. 6 and 7 show some disparity maps of SMV downloaded from the KITTI server for the KITTI 2012 and 2015 datasets, respectively.
B. GENERALIZATION
In this subsection, we compare the performance of the SMV, MC-CNN, AD, and Census methods in terms of data generalization. In other words, MC-CNN, AD, and Census use training data to train and/or tune the parameters of a method, and are then evaluated using different data. For AD and Census, the parameters of the post-processing techniques were set the same as in the MC-CNN paper. Let MC-CNN_K12, MC-CNN_K15, and MC-CNN_MB denote MC-CNN with the accurate architecture trained using the KITTI 2012, KITTI 2015, and Middlebury training sets, respectively. In addition, to evaluate the effectiveness of the multi-patch matching cost network in SMV, we designed a version of SMV in which the number of input patches is set to 1, denoted SMV(-). Except for the cropped size of 9 × 9, the SMV(-) parameters were set the same as those of SMV. Figs. 8 and 9 show the quantitative results of the tested stereo matching methods for the first 100 stereo pairs of the KITTI 2012 and 2015 training sets, respectively. SMV performed significantly better than AD and Census, and had comparable performance to the MC-CNN variants that require training data. Because it uses the multi-patch network, SMV performed much more robustly than SMV(-). In addition, we computed the average performance of the tested stereo matching methods over the KITTI 2012 and 2015 training data, respectively. Fig. 10 shows the average error rates of the tested stereo matching methods. AD and Census had the largest error rates, whereas SMV and the MC-CNN variants had similar performance. SMV(-), which does not use the multi-patch network, performed poorly, with error rates approximately double those of SMV. In all cases, even though SMV did not use training data, its error rate is nearly as good as that of the MC-CNN variants, with only slightly larger error rates.
C. USING LOCAL BINARY PATTERNS
In this subsection, we evaluate the performance of combining handcrafted features with the feature maps of convolutional networks. Specifically, instead of combining feature maps from the first and last convolutional layers, we computed census, rank, and SLBP transforms for the input images and then concatenated them with the last convolutional feature maps. We denote this method SMV_LBP. Figure 10 shows an illustration of the census and rank transforms for an image.
We used a window size of 3 × 3 for both the census and rank transforms. We normalized the transformed images before concatenating them with the feature maps computed from the last convolutional layer. Figure 11 shows the quantitative results of SMV_LBP on the KITTI 2012 and 2015 training datasets. SMV_LBP performed marginally better than SMV(-) and worse than SMV. The reason is that the census and rank transforms are just two fixed instances of a 3 × 3 convolution matrix, with fixed weight values. In contrast, SMV extracted 112 feature maps using 112 convolution matrices whose weights were selected optimally for a training dataset.
D. SMV EVALUATION
We evaluated the stereo matching methods using their raw matching costs on the KITTI 2012 and 2015 datasets. In addition, we trained SMV in a supervised manner using the KITTI 2012, KITTI 2015, and Middlebury training datasets, denoted SMV_K12, SMV_K15, and SMV_MB, respectively. For a fair comparison, we used the same data augmentation as in MC-CNN. Table 6 shows the error rates for the KITTI 2012 (K12) and KITTI 2015 (K15) training datasets. AD and Census had the worst performance, and AD even outperformed Census. The performance of AD and Census in our experiments is similar to that reported in [46]. The supervised versions of SMV outperformed MC-CNN for all corresponding datasets. This validates the effectiveness of the use of local and global CNN features in SMV.
E. SENSITIVITY ANALYSIS
In this subsection, we present the way in which parameter values were selected and analyze the effect of different parameter configurations on SMV. As shown in Table 1, the SMV network has six parameters: input_patch_size, num_clayers, num_fmaps, ckernel_size, num_fc_layers, and num_fc_units.
For the kernel size ckernel_size, two stacked 3 × 3 kernels have the same receptive field as a single 5 × 5 kernel; therefore, a 3 × 3 kernel size is now commonly used in CNNs. SMV and MC-CNN share three common parameters: num_fmaps, num_fc_layers, and num_fc_units. In our work, we selected the values for these three parameters as recommended by the MC-CNN work, for two reasons. First, the three parameters were carefully selected by grid search in the MC-CNN work. Second, using the same values allows the effectiveness of exploiting the local-global features in SMV to be shown more clearly.
V. CONCLUSIONS
This paper proposed a stereo matching method that uses single-view videos in an unsupervised manner. In addition, we proposed a matching cost network that explicitly exploits local and global features. The proposed stereo matching method was evaluated using commonly used stereo matching datasets, including KITTI 2012, KITTI 2015, and Middlebury. Experimental results on the benchmarks showed that the proposed method had the best performance among unsupervised methods and outperformed several supervised methods. It also performed well across different datasets.
In future work, we plan to investigate image similarity functions in depth, both traditional and learning-based, as well as their applications in computer vision and ways to construct them when datasets from different domains are available. | 5,576.2 | 2020-01-01T00:00:00.000 | [
"Computer Science"
] |
Evaluation of a smartphone human activity recognition application with able-bodied and stroke participants
Mobile health monitoring using wearable sensors is a growing area of interest. As the world’s population ages and locomotor capabilities decrease, the ability to report on a person’s mobility activities outside a hospital setting becomes a valuable tool for clinical decision-making and evaluating healthcare interventions. Smartphones are omnipresent in society and offer convenient and suitable sensors for mobility monitoring applications. To enhance our understanding of human activity recognition (HAR) system performance for able-bodied and populations with gait deviations, this research evaluated a custom smartphone-based HAR classifier on fifteen able-bodied participants and fifteen participants who suffered a stroke. Participants performed a consecutive series of mobility tasks and daily living activities while wearing a BlackBerry Z10 smartphone on their waist to collect accelerometer and gyroscope data. Five features were derived from the sensor data and used to classify participant activities (decision tree). Sensitivity, specificity and F-scores were calculated to evaluate HAR classifier performance. The classifier performed well for both populations when differentiating mobile from immobile states (F-score > 94 %). As activity recognition complexity increased, HAR system sensitivity and specificity decreased for the stroke population, particularly when using information derived from participant posture to make classification decisions. Human activity recognition using a smartphone based system can be accomplished for both able-bodied and stroke populations; however, an increase in activity classification complexity leads to a decrease in HAR performance with a stroke population. The study results can be used to guide smartphone HAR system development for populations with differing movement characteristics.
Background
Mobile health monitoring using wearable sensors is a growing area of interest. As the world's population ages and locomotor capabilities decrease, the ability to monitor a person's mobility activities outside a hospital setting becomes valuable for clinical decisionmaking. Human Activity Recognition (HAR) systems combine wearable sensor and computing technologies to monitor human movement in the person's chosen environment.
HAR systems typically use accelerometer and gyroscope sensors since these are small, affordable, and generally unobtrusive [1]. Other HAR systems combine sensor types, such as accelerometer and ECG [2], or use multiple sensor locations, such as sternum and thigh [3], or thigh and chest [4]. However, multiple sensors can be cumbersome and inconvenient for reliable implementation in everyday life. Smartphones are ubiquitous, carried by most individuals on a daily basis, and many devices contain integrated accelerometer and gyroscope sensors, which are commonly used to measure posture and movement [5].
HAR systems typically follow a machine learning structure [6]. Raw sensor signals are collected, preprocessed, and segmented into time windows. Feature extraction is then performed to retrieve relevant information from the sensor signals over each window. Features are abstractions of raw data, such as statistical calculations (mean, variance, etc.) or frequency domain features that describe the signal's periodic structure. Since many features could be used in a model, a selection process is typically used to reduce the data's dimensionality. Feature selection methods may be filter-based, which evaluate feature characteristics without a classifier, or wrapper-based, which use classifier accuracy to evaluate features [7]. Finally, a classifier is constructed using training data and evaluated on testing data. The literature has previously focused on offline human activity recognition, although recent work is moving towards algorithms that can be implemented in real time using the onboard sensors and computational power of a smartphone [8].
Many HAR systems have been developed for able-bodied participants; however, few systems have been tested on the elderly or people with disabilities [9]. A recent study showed that an activity classification model trained on an older cohort and tested on a younger sample performed better than a model trained on the younger cohort and tested on the older sample. This suggests that a model trained on elderly participants may be more generalizable and result in a more robust classifier [10], since younger people may perform activities of daily living with more intensity than older or disabled people. Stroke is a leading cause of disability among adults and can lead to limited activities of daily living, balance and walking problems, and a need for constant care [11]. For a clinician, reliable data about a patient's activity is important, particularly information about the type, duration, and frequency of daily activities (i.e., standing, sitting, lying, walking, climbing stairs). This information can help therapists design rehabilitation programmes and monitor the progress of patients outside the hospital. An objective record of a patient's daily activities can avoid mistaken or intentionally misleading self-reporting. Mobility monitoring could also provide large datasets with information about the mobility habits of people who have suffered a stroke, guiding future research in healthcare and intervention.
The current research compared the performance of a smartphone-based wearable mobility monitoring system (WMMS) between able-bodied participants and people who had suffered a stroke. By studying differences in classifier performance between populations, we addressed the hypothesis that a WMMS developed using sensor data from able-bodied participants would perform worse on a population of stroke participants due to differences in walking biomechanics. This research also identified where the classifier performed poorly, thereby providing guidance for future research on HAR for populations with mobility problems.
Methods
Population
A convenience sample of 15 able-bodied participants (age 26 ± 8.9 years, height 173.9 ± 11.4 cm, weight 68.9 ± 11.1 kg) and 15 stroke participants (age 55 ± 10.8 years, height 171.6 ± 5.79 cm, weight 80.7 ± 9.65 kg) participated in this study. Stroke participants were recruited at the University Rehabilitation Institute in Ljubljana, Slovenia, and able-bodied participants were recruited at the Ottawa Hospital Rehabilitation Centre in Ottawa, Canada. Stroke participants were identified by a physical and rehabilitation medicine specialist as capable of safely completing the mobility tasks and able to commit to the time required to complete the evaluation session (approximately 30 min). Six stroke patients had left hemiparesis and nine had right hemiparesis. Thirteen stroke patients had ischemic stroke, one had a subarachnoid hemorrhage, and one had impairment because of a benign cerebral tumor. Six stroke patients used one crutch, two had one arm in a sling, and one used an ankle-foot orthosis. The stroke event averaged 9.6 months before the study and the average FIM score was 107 points. The study was approved by the Ottawa Health Science Network Research Ethics Board and the Ethics Board of University Rehabilitation Institute (Ljubljana, Slovenia). All participants provided informed consent.
Equipment
Accelerometer, magnetometer, and gyroscope data were collected with a BlackBerry Z10 smartphone using the TOHRC Data Logger [12] at both the Ottawa and Ljubljana locations. Smartphone sampling rates can vary [13]; therefore, the Z10 sensors were sampled at approximately 50 Hz, with a mean standard deviation of 15.37 Hz across all trials. The WMMS used the BlackBerry's gravity and linear acceleration output to calculate features. Linear acceleration is the Z10 acceleration minus the acceleration due to gravity. On the BlackBerry Z10, the inertial measurement unit fuses the accelerometer, gyroscope, and magnetometer sensors and splits the acceleration components into applied linear acceleration and acceleration due to gravity (the gravity signal); however, the device manufacturer does not report how this is accomplished.
Since the phone's orientation on the pelvis can differ between individuals due to a larger mid-section or different clothing, a rotation matrix method was used to correct for phone orientation [14]. Ten seconds of accelerometer data were collected while the participant was standing still and a 1-s data segment with the smallest standard deviation was used to calculate the rotation matrix constants. The orientation correction matrix was applied to all sensor data.
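The calibration step can be sketched as follows; the exact rotation construction of [14] is not reproduced here, so aligning the mean gravity vector of the calmest 1-s segment with the device Y axis is an assumption used only for illustration.

```python
import numpy as np

def calmest_window(acc, fs=50, calib_seconds=10):
    """Return the 1-s segment with the smallest summed standard deviation
    within the calibration recording. acc has shape (N, 3)."""
    n = int(fs)
    best_start, best_sd = 0, np.inf
    for start in range(0, int(calib_seconds * fs) - n + 1, n):
        sd = acc[start:start + n].std(axis=0).sum()
        if sd < best_sd:
            best_start, best_sd = start, sd
    return acc[best_start:best_start + n]

def rotation_to_axis(gravity_mean, target=np.array([0.0, 1.0, 0.0])):
    """Rotation matrix aligning the measured mean gravity direction with a
    chosen device axis (the Y axis here is an assumption)."""
    a = gravity_mean / np.linalg.norm(gravity_mean)
    v = np.cross(a, target)
    c = float(np.dot(a, target))
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)   # valid while a is not opposite target

# Every subsequent accelerometer/gyroscope sample s is corrected as R @ s.
```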
While the WMMS application can run entirely on the smartphone, for the purposes of this research, the raw sensor output was exported as a text file and run in a custom Matlab program to observe WMMS algorithm performance in detail and calculate outcome measures.
WMMS algorithm
Raw sensor data from the smartphone were converted into features over 1 s data windows. Data interpolation was not used; since the results remained acceptable, the method was not sensitive to within-window sampling rate variability (standard deviation of 15.37 Hz). The features were used to classify movement activities. The features derived from acceleration due to gravity, linear acceleration, and gyroscope signals are displayed in Table 1. Features were selected based on the literature and by observing feature behaviour in pilot data for the target activities.
A custom decision tree used these features to classify six activity states: mobile (walk, stairs) and immobile (sit, stand, lie, and small movements). The decision tree structure is shown in Fig. 1.
The WMMS has three activity stages. The first stage used a combination of three features (L-SMA, SOR, SoSD: Table 1) to identify if the person was mobile (walking, climbing stairs) or immobile (sitting, standing, lying down, or small movements). All thresholds were determined using a separate experimental set of able-bodied participant data, collected for this purpose. Figure 2 shows plots of L-SMA, SOR, and SoSD that demonstrate how these features change during immobile and mobile activities.
In stage 2, if the person was in an immobile state, trunk orientation was examined using the "difference to Y" signal feature (Table 1). Based on thresholds, the classifier determined if the person was upright (standing), leaning back (sitting), or horizontal (lying down). If the person was standing, a weighting factor was calculated based on how many of the stage 1 features passed thresholds. If the weighting factor exceeded 1 for two consecutive data windows and the person was standing for more than 3 s, the person was considered to be performing a small movement (i.e., standing and washing dishes at a sink, etc.). Figure 3 shows how the DifftoY feature changes when a person walks to a bed, lies down, and stands up again to continue walking.
In stage three, the default classification was walking. If the participant walked for more than 5 s and the slope of G-SMAvar feature passed a threshold, then the activity was classified as climbing stairs. Figure 4 shows how G-SMAvar changes when a person is walking and when they are climbing stairs. The set of stairs used in this example had a landing in the middle, corresponding to the downward slope in G-SMAvar.
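To make the three-stage logic concrete, here is a simplified sketch of how one 1-s window could be classified; all threshold values and the simplified weighting and duration rules are assumptions, and requiring all three stage-1 features to pass before declaring a mobile state is one plausible reading of the algorithm, not a confirmed detail.

```python
def classify_window(f, state):
    """f: feature values for the current 1-s window; state: counters carried
    between windows, e.g. {"stand_s": 0, "walk_s": 0}. Thresholds (TH_*) are
    placeholders, not the WMMS values."""
    TH_LSMA, TH_SOR, TH_SOSD = 0.5, 1.0, 0.3       # stage 1 thresholds (assumed)
    TH_SIT, TH_LIE, TH_STAIRS = 0.4, 0.8, 0.05     # stage 2/3 thresholds (assumed)

    passed = [f["L_SMA"] > TH_LSMA, f["SOR"] > TH_SOR, f["SoSD"] > TH_SOSD]

    if not all(passed):                            # Stage 2: immobile postures
        state["walk_s"] = 0
        if f["DifftoY"] > TH_LIE:
            state["stand_s"] = 0
            return "lie"
        if f["DifftoY"] > TH_SIT:
            state["stand_s"] = 0
            return "sit"
        state["stand_s"] += 1
        # Weighting factor: how many stage-1 features still passed while standing.
        if sum(passed) >= 2 and state["stand_s"] > 3:
            return "small movement"
        return "stand"

    # Stage 3: default walking; stairs when walking > 5 s and the G-SMAvar
    # slope passes its threshold.
    state["stand_s"] = 0
    state["walk_s"] += 1
    if state["walk_s"] > 5 and f["G_SMAvar_slope"] > TH_STAIRS:
        return "stairs"
    return "walk"
```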
Protocol
Data collection took place under realistic but controlled conditions. Participants followed a predefined path in The Ottawa Hospital Rehabilitation Centre or the University Rehabilitation Institute, including living spaces within the rehabilitation centres, and performed a consecutive series of mobility tasks: standing, walking, sitting, riding an elevator, brushing teeth, combing hair, washing hands, drying hands, setting dishes, filling the kettle with water, toasting bread, a simulated meal at a dining table, washing dishes, walking on stairs, lying on a bed, and walking outdoors [15] (Appendix).
Before the trial, participant characteristics were recorded (i.e., age, gender, height, weight). Participants wore the smartphone in a holster attached to their right-front belt or pant waist, with the camera pointed forward. Trials were video recorded using a separate smartphone for activity timing comparison and contextual information. Video time was synchronized with the smartphone sensor output by shaking the phone at the beginning and end of the trial, providing a recognizable accelerometer signal and video event.
Table 1 Features derived from smartphone sensor signals. Acceleration due to gravity = (Xgrav, Ygrav, Zgrav), linear acceleration = (Xlin, Ylin, Zlin), SD = standard deviation.
- Simple moving average of sum of range of linear acceleration (4 windows): L-SMA
- Sum of range of linear acceleration, range(Xlin_i) + range(Ylin_i) + range(Zlin_i): SOR
- Sum of standard deviation of linear acceleration, SD(Xlin_i) + SD(Ylin_i) + SD(Zlin_i): SoSD
- Maximum slope of simple moving average of sum of variances of gravity: G-SMAvar
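A NumPy sketch of the linear-acceleration features listed in Table 1 is given below; non-overlapping 1-s windows and a 50 Hz rate are assumed, and the G-SMAvar slope formula, which is garbled in the source, is not reproduced.

```python
import numpy as np

def linear_acc_features(lin_acc, fs=50):
    """Per-window SOR and SoSD from linear acceleration (shape (N, 3)),
    plus L-SMA as the 4-window simple moving average of SOR."""
    n = int(fs)                                # samples per 1-s window
    sor, sosd = [], []
    for start in range(0, len(lin_acc) - n + 1, n):
        w = lin_acc[start:start + n]
        sor.append(np.ptp(w, axis=0).sum())    # sum of per-axis ranges
        sosd.append(w.std(axis=0).sum())       # sum of per-axis standard deviations
    sor, sosd = np.array(sor), np.array(sosd)
    l_sma = np.convolve(sor, np.ones(4) / 4, mode="valid")   # 4-window average
    return sor, sosd, l_sma
```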
Gold-standard activity event times were manually identified from the video recordings. Each 1 s window was considered an occurrence. For example, sitting for 5 s was considered 5 occurrences. When segmenting the data, a 1 s window on either side of a change of state was considered part of the transition, to reduce error from inter-rater variability in identifying the start of an activity. Transitions were not considered when calculating outcomes. The number of 1 s instances (class distribution) of each activity is shown in Table 2. Since this is a realistic data sample representing activities of daily living, class imbalances occur. For example, there were more instances of walking or sitting than climbing stairs or lying down.
Data analysis involved calculating the number of true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN) in Matlab. Sensitivity, specificity, and F-scores were calculated for each individual, and the average and standard deviation across all participants were calculated for each activity. The F-score was calculated as F = 2TP/(2TP + FP + FN). Results for each data window were compared to the gold-standard results from the video recording using descriptive statistics. Descriptive statistics and t-tests (p < 0.05) were used to compare sensitivity, specificity, and F-scores between the able-bodied and stroke groups.
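The per-activity outcome measures reduce to simple counts; a small helper, written here in Python rather than the Matlab actually used, makes the definitions explicit.

```python
def classification_scores(tp, tn, fp, fn):
    """Sensitivity, specificity and F-score from per-activity counts,
    using F = 2TP / (2TP + FP + FN) as in the paper."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    f_score = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    return sensitivity, specificity, f_score
```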
Results
The WMMS performed similarly with able-bodied and stroke populations when detecting immobile and mobile states (stage 1), with all sensitivity and specificity results greater than 0.92 and F-scores greater than 0.94 (Table 3). No significant differences were found between groups for stage 1, although sensitivity and F-score for the stroke population were lower for immobile states and specificity was higher for mobile states.
In stage 2, specificity and F-scores for stroke participants were significantly lower for stand detection, but specificity was greater than 0.94 for both groups (Table 4). Specificity for lie detection was significantly greater for stroke participants, but results for both groups were greater than 0.97. Sitting sensitivity and F-Score were lower than the other activities, with results for both groups less than 0.68.
In stage 3, stand F-scores for stroke participants were significantly lower than for the able-bodied group (Table 5). Lie specificity was significantly greater for stroke participants, but outcomes for both groups were greater than 0.98. For the stroke group, walk sensitivity and F-score were lower. Specificity was significantly lower for stair recognition, and sensitivity for stair and small movement recognition was poor for both groups.
Discussion
This research demonstrated that a smartphone-based HAR approach can provide relevant information on human movement activities for both able-bodied and stroke populations, at a broad level of detail; however, sensitivity and specificity decrease as the classification tasks become more complex. Thus, our hypothesis that the WMMS would perform worse for stroke participants was valid at higher detail levels, but invalid at a broad classification level.
For stage 1, mobile and immobile activity states were well classified for both able-bodied and stroke populations. In the accelerometer-based HAR literature, activity classification accuracy ranged from 71 to 97 % [6,16], with studies in the past two years typically reporting results from 92 to 96 % for able-bodied people [17,18] and 82-95 % for older people [19]. Since this stage has only 2 classes, and the feature differences are large, thresholds can be set such that variability between people and populations has less of an effect on classification accuracy. Classification errors at stage 1 may not be purely due to WMMS issues. For example, annotating gold-standard video can be difficult for small movements, such as washing dishes, since the person may move their body enough to be classified in a mobile state while human interpretation of the video could indicate an immobile state. The WMMS may provide a more consistent method of assessing an appropriate movement threshold for daily activity assessment since human raters could
In stage 2, classification algorithm performance decreased when identifying whether an immobile person was standing, sitting, or lying down. Specificity and F-score were significantly lower for stand detection, and the algorithm performed poorly for sit identification for both populations. Classification was based on static thresholds from a single feature (DifftoY). Since stroke can cause posture asymmetry during standing [20] and the stroke population was much older than the able-bodied sample, with posture changing with age [21], the DifftoY feature and threshold may not be sufficient to identify standing across populations, and could benefit from a combination of multiple features. In addition, inaccurate results could occur if the phone shifted or changed orientation during the trial. The therapist manually repositioned the phone during the trial for two stroke participants, one stroke participant unintentionally moved the smartphone with her paretic hand, and another participant intentionally re-adjusted his phone. The changed position may have affected application performance for activities that require a consistent phone orientation (i.e., standing, sitting, lying).
Inclination angle is typically used to classify posture when using a single accelerometer location [22]. In this case, sit identification relies on the pelvis tilting slightly back while sitting, which was not always the case in this study. For example, when a person sits at the dinner table they often lean forward to reach for objects or when eating. If the person did not sit back enough to pass the threshold before leaning forward, sitting was not identified. In many cases, stroke participants were detected as standing during the dinner table sequence. This reduced sitting sensitivity and standing specificity. Improvements in sit detection from one pelvis-worn sensor location could be achieved by using additional features or expanding the duration of sit analysis beyond the 1-s data window to compensate for forward-back transitions when sitting and performing daily activities (eating, office work, etc.). The DifftoY threshold setting was also attributed to classification problems for three of the able-bodied participants, for whom some sit periods were classified as lie. This outcome also demonstrated the importance of assessing HAR systems across a range of daily activities since the results would have been much better if only "pure" sit, stand, and lie tasks were included.
In stage 3, lower walk detection sensitivity and F-score were observed for the stroke group. The smartphone was worn on the right side of the pelvis and nine of the participants had right hemiparesis, thereby reducing pelvis movement on the right side and affecting sensor and feature output. In most cases, the people with right hemiparesis had slightly lower outcomes than those with left hemiparesis (<0.18 % difference in sensitivity and specificity), however the differences were not significant (p < 0.05). Many stroke participants wore the phone with cotton pants that had an elastic waist strap, which may have provided an inferior anchor point for the phone's holster (i.e., as compared with a leather belt or fitted pants). This may have increased sensor signal variability for stroke participants. All able-bodied participants had a belt or more rigid pant waist. When used in practice, a viable HAR system must deal with mounting inconsistencies.
Stair specificity for the stroke group was significantly lower than the able-bodied group, and the algorithm performed poorly for stair recognition for all participants. F-score was low for both populations due to the high number of false positives detected, lowering the precision of classification. For five able-bodied people, the WMMS briefly detected "stairs" when lying down, then correctly re-identified the state as lie. This occurred because the feature used to detect stair climbing (covariance) increased during the stand-to-lie movement. Interestingly, this did not occur for stroke participants, perhaps due to a difference in bed height or a difference in mobility techniques when transitioning into a supine position. As with sitting, error correction over a longer duration would eliminate incorrect stair classification during the stand-to-lie transition. Stroke participants tended to rely more on the railing while climbing stairs. Multiple threshold settings for differing the stair ascent methods, or user-specific thresholds for stair identification, could be explored as a means of improving classification results. For example, one stroke participant ascended and descended the stairs in a step-by-step fashion that placed both feet on a single stair, thereby changing the sensor signals and hence affecting stair recognition. This is a common stair climbing strategy for the stroke population and persons with other mobility and walking limitations.
Small movements were not well classified for either population, resulting in a sensitivity of 0.09 for able-bodied participants and 0.15 for stroke participants. The small movements included in the trial (making toast, washing dishes, eating a meal, etc.) did not always cause pelvis accelerations. Thus, accelerometer and gyroscope sensors located on the hip were not appropriate for detecting all activities. Other small movements, such as washing dishes or brushing teeth, caused the person to move their hips enough for the WMMS to classify a mobile state. The poor performance related to the difficulty in categorizing daily living human movements and the difficulty of setting small movement onsets when labeling the gold-standard video. In future work, better methods are needed for gold file annotation, taking into account individual differences in how small movements are performed. These results show that, while mobile and immobile classifications can be achieved with a relatively similar degree of accuracy for able-bodied and stroke participants, the WMMS had more difficulty with classification as the activity detail level increased, especially for the mobility-affected stroke population. More research with pathological movement populations is required to understand how HAR algorithms need to be modified to accommodate group and individual differences when performing activities of daily living.
Table 3 Average, standard deviation (in brackets), and differences between able-bodied and stroke groups for sensitivity, specificity, and F-score at stage 1.
Table 4 Average, standard deviation (in brackets), and differences between able-bodied and stroke groups for sensitivity, specificity, and F-score at stage 2.
Limitations of the current work include the moderate sample size from each population (15 people). The stroke group was not age-matched to the able-bodied group; therefore, age-related differences may have accounted for some differences in WMMS performance. However, the average ages of both groups were less than 60 years, which is not considered a senior population, thereby minimizing potential age effects. Stroke participants were in the sub-chronic phase and were capable of completing 30 min of walking. In the community, post-stroke populations may have lower mobility levels that could introduce greater movement variability, thereby decreasing WMMS performance. Since this study used only one smartphone model for testing, future work could evaluate algorithm performance with other smartphone-based systems.
Conclusions
In this paper, it was demonstrated that human activity recognition using a smartphone based system can be accomplished for both able-bodied and stroke populations. However, an increase in activity classification complexity leads to a decrease in WMMS performance with a stroke population. This validates the hypothesis that a HAR system developed using only able-bodied sensor data would perform worse when used to classify activities in a stroke population.
Sensor data and features produced by the different populations affected WMMS performance. The algorithm performed reasonably well for both stroke and able-bodied participants when differentiating between sit, stand, lie, and walk and between mobile and immobile states. When stair climbing and small movements were added to the classification, algorithm performance decreased. Additional features are recommended to more accurately identify sitting, standing, and lying, as well as for stair identification, since stair signals are similar to level walking for many individuals. These features should be selected using data from people with
Table 5 Average, standard deviation (in brackets), and differences between able-bodied and stroke groups for sensitivity, specificity, and F-score at stage 3 | 5,412.4 | 2016-01-20T00:00:00.000 | [
"Computer Science"
] |
Micro- and nanometric characterization of the celestite skeleton of acantharian species (Radiolaria, Rhizaria)
We clarified the specific micrometric arrangement and nanometric structure of the radiolarian crystalline spines, which are not simple single crystals. The body of the celestite (SrSO4) skeleton of the acantharian Acanthometra cf. multispina (Acanthometridae), composed of 20 radial spines having four blades, was characterized using microfocus X-ray computed tomography. The regular arrangement of three types of spines was clarified, together with the connection of the blades around the root of each spine. The surface of the spines is covered with a chitin-based organic membrane that prevents dissolution in seawater. On the nanometric scale, a mesocrystalline structure consisting of nanoscale grains having a distorted single-crystal nature was revealed using scanning and transmission electron microscopy, electron diffraction, and Raman spectroscopy. The acantharian skeletons have a crystallographically controlled architecture that is covered with a protective organic membrane. These facts are important for understanding the nature of biogenic minerals.
In nature, organisms produce various inorganic materials with precisely controlled morphologies from a limited selection of ubiquitous elements, such as calcium, silicon, carbon, and oxygen, under ambient conditions. Generally, morphological design is a critically important aspect of biological mineralization processes with regard to the emergence of specific functions. Celestite (SrSO 4 ) is remarkably observed as a skeleton of Acantharia, a marine unicellular holoplanktonic protist. Since the utilization of celestite as a skeleton is exclusively known in acantharians in the living world, the structure and property of biological celestite are attracting attention in the fields of biology and material science. In the present study, we characterized the micrometric morphology and nanometric structures of the celestite skeleton of the acantharian species to provide an essential information for clarification of the specific biological crystal.
Acantharia species are identified based on their skeletal architecture, cytological structure and characters of algal symbionts 1,2 . The examined specimens are Acanthometra cf. multispina Müller (Acanthometridae, Clade F3) and Phyllostaurus siculus 3 (Acanthostauridae, Clade F3) 1 . In Clade F, the celestite skeletons consist of 20 radial spines that geometrically extend from a central point. This central point is constructed by tightly connected fletching roots of the 20 radial spines 1 . These fletching roots are combined with cytologic fibers named "myoneme" 4 . As these radial spines are embedded in the cytoplasmic membrane, they are endoskeletons. We can observe a particularly ordered spatial arrangement of spines for the acantharian skeleton.
The geometric arrangement of radial spines is called Müller's law 5 and is composed of two quartets of polar radial spines alternating with two quartets of tropical radial spines and one quartet of equatorial radial spines 1,3,5 . The biology of Müller's law is poorly understood, although several models have been proposed 6,7 . The crystal structure of celestite was investigated using X-ray diffraction 8 , electron diffraction, and transmission electron microscopy (TEM) techniques 9 . These papers concluded that each spine is a single crystal of celestite with a specific crystallographic orientation, but uncertainty still remains regarding the spine arrangement of the celestite skeleton. In the present work, the celestite skeletons of the selected acantharian specimens were characterized using various techniques, including microfocus X-ray computed tomography (CT), TEM, scanning electron microscopy (SEM), and Raman scattering spectrometry. Micrometric and nanometric studies of acantharian skeletons are needed to understand the specific nature of these biogenic products. Biominerals have been revealed to have hierarchical architectures that are built up of nanoscale grains incorporated with organic polymers, regardless of their polymorph [10][11][12][13][14][15][16] . A crystal structure consisting of nanometric building units aligned in the same crystallographic orientation is called a mesocrystal. These specific mesoscopic textures provide the excellent mechanical properties of biominerals. Thus, studies of the hierarchical architectures of various biominerals would contribute to the development of emergent materials [17][18][19][20][21][22][23] .
The present article focuses on the micrometric and nanometric morphologies of the celestite skeleton of A. cf. multispina. Our research achieved an accurate description of the specular arrangement and the mesocrystal nature of biological celestite. Here, we study the essence of biological crystals as a skeleton of marine protists.
Results and discussion
Arrangement of spines. Figure 1 shows an optical microscope image and a microfocus X-ray CT image of a whole skeleton of Acanthometra cf. multispina. There are 20 radial spines ("spines" hereafter) radiating from the skeletal center, as illustrated in Fig. 1c. The arrangement of four equatorial spines (e), four diametric pairs (= 8) of tropical spines (t), and four diametric pairs (= 8) of polar spines (p) agrees with Müller's law 5 (Fig. 1b). Given a unit sphere (Fig. 1d) whose origin is defined at the skeletal center and whose x- and y-axes are aligned with the equatorial spines, the deviation angles of the tropical and polar spines from the equatorial plane are 30° and 60°, respectively. The equatorial plane is the plane spanned by the x- and y-axes.
A more detailed view of the spine arrangement is provided by the enlarged microfocus X-ray CT images (Fig. 2). In the polar-view images (Fig. 2b), the four equatorial spines and the four diametric pairs (= 8) of polar spines project onto the same positions. From the top views of the spines, we characterized the connecting modes of the spines around the skeletal center. A polar spine is connected to two other polar and two tropical spines (Fig. 2c). A tropical spine is linked to two polar and two equatorial spines (Fig. 2d). An equatorial spine is directly attached to four tropical spines (Fig. 2e). The connecting angles around the equatorial and polar spines are almost the same, with a twofold rotational symmetry (Fig. 2c, e). The arrangement of the four blades agrees with that reported in a previous study 7 . The blades of a tropical spine, on the other hand, are arranged with mirror symmetry (Fig. 2d).
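The spine geometry described above can be made concrete with a short sketch. The following Python snippet (our own illustration, not part of the original analysis) generates approximate unit direction vectors for the 20 spines: the equatorial spines are placed along the x- and y-axes and the polar spines share their azimuths, as suggested by the polar-view CT images, while the 45° azimuthal offset assumed for the tropical spines is purely illustrative.

```python
import numpy as np

def unit_vector(azimuth_deg, elevation_deg):
    """Unit vector from an azimuth in the xy-plane and an elevation above it."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def spine_directions():
    """Approximate directions of the 20 spines under Mueller's law.

    Equatorial spines (4): elevation 0, azimuths 0, 90, 180, 270 deg.
    Polar spines (8): elevation +/-60 deg, same azimuths as the equatorial spines
    (consistent with the polar-view CT images described above).
    Tropical spines (8): elevation +/-30 deg, azimuths offset by 45 deg
    (an illustrative assumption about their azimuthal placement).
    """
    spines = {"equatorial": [], "tropical": [], "polar": []}
    for az in (0, 90, 180, 270):
        spines["equatorial"].append(unit_vector(az, 0))
        for sign in (+1, -1):
            spines["polar"].append(unit_vector(az, sign * 60))
            spines["tropical"].append(unit_vector(az + 45, sign * 30))
    return spines

if __name__ == "__main__":
    counts = {name: len(vectors) for name, vectors in spine_directions().items()}
    print(counts)  # {'equatorial': 4, 'tropical': 8, 'polar': 8} -> 20 spines in total
```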
Micrometric morphology and structure of spines. We revealed the morphology of a spine from several cross-sectional images produced by microfocus X-ray CT. Figure 3 shows cross sections of the polar and tropical spines of small and large specimens. The cross section of a spine is basically rectangular around the root but elliptical in the remaining parts, including the distal end. As mentioned above, four blades are attached around the root. A comparison of the fletching roots of a small and a large specimen shows that the bladed parts are roughly equal in size regardless of the different total lengths of the spines. This suggests that the spines elongate from the distal end during ontogenetic growth. The equatorial spines are 1.1-1.3 times as long as the tropical and polar spines (Table S1 in the Supporting Information (SI)). Figure 4 shows SEM images of the partly broken central part, revealing the connection among the fletching roots of the spines. The spines with their blades are found to be separated at the skeletal center. This indicates that the spines are connected indirectly through the blades. The junction planes of the blades are teardrop-shaped.
We immersed the skeletons in an aqueous solution containing 250 mmol/dm 3 ethylenediaminetetraacetic acid (EDTA) to dissolve the inorganic crystals. Figure 5 shows the specimens before and after the EDTA treatment. The skeletons collapsed upon immersion in the EDTA solution. The removal of the solid skeleton by this treatment was confirmed by elemental analysis using an energy-dispersive X-ray spectrometer (EDS) (Fig. S2 in the SI), which indicates the dissolution of the celestite in the skeleton. After dissolution of the solid skeleton, we observed an organic membrane covering the spines. From an SEM image of a cross section of a broken spine (Fig. 5e), the thickness of the membrane is estimated to be approximately 100 nm. We characterized the organic membranes by depositing silver particles to record surface-plasmon-enhanced Raman spectra (Fig. 5f) and by applying Calcofluor White Stain (Sigma-Aldrich), which binds to cellulose- and chitin-containing cell walls. According to the Raman spectra, the membranes are deduced to be mainly composed of chitin ((C 8 H 13 O 5 N) n ). Moreover, we confirmed that the spines fluoresced after the addition of one drop of Calcofluor White Stain (Fig. S3 in the SI). Thus, the spines are enveloped by a chitin-based organic membrane. As shown in Fig. 5, the chitin-based envelopes collapsed after dissolution of the celestite cores. This suggests that the morphology of the spines is not controlled by the organic membranes acting as a template.
Nanometric and crystallographic structures of spines. Figure 6 shows SEM and TEM images of spines with a typical SAED pattern. From the diffraction spots, the spines are assigned to celestite that is elongated in the a-axis direction and has a single-crystal nature. The blades are suggested to expose the {110} planes. These facts about the crystallographic structure are in agreement with the assignment in a previous work 26 . The spines were also assigned to celestite using Raman spectroscopy (Fig. 7a). Interestingly, however, we observed a slight shift of the signal due to the S-O asymmetric stretching vibration toward lower wavenumber. This suggests the presence of lattice strain in the celestite crystals of the skeleton. The strain was relieved after calcination at 600 °C for 20 h in air. Figure 7b, c shows SEM images of spines after removal of the organic matter by calcination at 600 °C for 4 h in air and subsequent etching with pure water for 2 h. Although the associated organic matter was removed by this mild calcination for 4 h, the strain remained in the crystalline lattice. We observed fibrous units ∼ 100 nm wide on the spine surface. The tilted faces are assignable to the (210) plane by comparison with the shape of artificially produced celestite crystals 27 . These results suggest that the spines are not a homogeneous single crystal but a bundle of fibrous units elongated in the a-axis direction. We therefore conclude that the acantharian skeleton is composed of a celestite mesocrystal consisting of nanoscale units that are arranged in the same crystallographic orientation. As shown in Fig. S3 in the SI, we succeeded in producing artificial celestite mesocrystals consisting of fibrous units. The bundled structure was formed through precipitation in a supersaturated solution containing poly(acrylic acid). The fibrous units, elongated in the a-axis direction, are arranged in the same orientation. The lattice strain of the artificial celestite mesocrystal is similar to that of the acantharian spines. Since the organic content of the products was estimated to be ca. 3 wt%, a similar amount of organic molecules is deduced to be included in the biological celestite.
Conclusion
The macroscopic arrangement, micrometric morphology, and nanometric structure of the celestite (SrSO4) skeleton of the acantharian Acanthometra cf. multispina (Acanthometridae) were characterized in detail using various techniques. We clarified the specific micrometric arrangement and nanometric structure of the crystalline spines, which are not simple single crystals. Three types of spines, covered by a chitin-based organic membrane, are regularly arranged and connected through their blades around the center of the skeleton. The celestite spines have a mesocrystalline structure that consists of nanoscale grains with a distorted single-crystal nature. The acantharian skeleton is thus found to have a crystallographically controlled architecture that is covered with a protective organic membrane. This specific architecture would provide the characteristic properties of the biogenic products.
Experimental
Plankton samplings were conducted at 35° 09.45′ N, 139° 10.00′ E in the western part of Sagami Bay in southern Japan on R/V Tachibana of the Manazuru Marine Center for Environmental Research and Education, Yokohama National University. Individuals of acantharians were collected with plankton nets (diameter: 80 cm, side length: 3 m, mesh size: 100 μm; or diameter: 45 cm, side length: 1.8 m, mesh size: 180 μm). The living specimens were immersed in deionized water, and freeze-drying equipment (FD-6500; Kyowa Corporation) was used to obtain freeze-dried samples. Cross sections of the dried samples exposed by crushing were observed by microfocus X-ray CT, SEM and optical microscopy. Skeletal morphometry of the acantharians was performed by microfocus X-ray CT (ScanXmateD160TSS105, Comscantechno Co., Ltd.) installed at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). A high-resolution setting (X-ray focus diameter: 0.8 µm; X-ray tube voltage: 90 kV; X-ray tube current: 37 µA; detector array size of 1024 × 1024 pixels; 2000 projections over a 360° rotation) was applied. The geometric resolution of the isotropic voxel size was from 0.28 to 0.46 µm/voxel. We used the ConeCTexpress software (White Rabbit Corp.) for correction and reconstruction of the tomography data, and the general principle of Feldkamp cone-beam reconstruction was followed to reconstruct cross-sectional images based on filtered back projection. The surfaces and cross sections of the samples were coated with osmium for detailed observation using scanning electron microscopes (SEM, FEI Helios G4 UX and JEOL JSM-7100) operated at 2.0-15.0 kV. The compositions were identified using Raman scattering spectroscopy and energy-dispersive X-ray analysis (JEOL JED-2300). Micro-Raman spectroscopy was performed using a laser confocal microscope (inVia, Renishaw). The 532 nm excitation laser was focused on the sample surface with a 100× objective of the microscope. The laser spot was approximately 1 μm in diameter. A chitin standard ((C 8 H 13 O 5 N) n ) was purchased from Kanto Chemical. Crystalline parts of the spines were characterized by transmission electron microscopy (TEM, FEI Tecnai G2). The samples were placed on a copper grid with a drop of water and crushed with a needle to release the crystalline parts from the main body. The suspension containing the crystals was quickly dried for a few minutes on the copper grid for TEM observation. The crystalline parts were dissolved by immersing specimens in deionized water for several hours in order to observe the organic frameworks. After this series of treatments, the examined specimens were taxonomically identified under the modern classification concept and named following the International Code of Zoological Nomenclature.
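The cross sections were reconstructed by filtered back projection under the Feldkamp cone-beam scheme. As a rough illustration of the underlying principle only, the following sketch performs a parallel-beam filtered back projection with scikit-image on a standard test phantom; this is our own illustrative example, and the phantom, the number of angles, and the `filter_name` argument are assumptions of the sketch rather than details of the actual ConeCTexpress workflow.

```python
# Simplified parallel-beam filtered back projection (illustrative only; the study
# itself used Feldkamp cone-beam reconstruction within ConeCTexpress).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                         # stand-in for one CT slice
angles = np.linspace(0.0, 180.0, 400, endpoint=False)   # the real scan used 2000 projections over 360 deg
sinogram = radon(phantom, theta=angles)                 # simulated projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")  # filtered back projection

rms_error = np.sqrt(np.mean((reconstruction - phantom) ** 2))
print(f"RMS difference between phantom and reconstruction: {rms_error:.4f}")
```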
Tropical Geometry and Five Dimensional Higgs Branches at Infinite Coupling
Superconformal five dimensional theories have a rich structure of phases, and brane webs play a crucial role in studying their properties. This paper is devoted to the study of a three parameter family of SQCD theories, given by the number of colors $N_c$ of an $SU(N_c)$ gauge theory, the number of fundamental flavors $N_f$, and the Chern Simons level $k$. The study of their infinite coupling Higgs branch is a long standing problem and reveals a rich pattern of moduli spaces that depends critically on these three values. For a generic choice of the parameters we find, surprisingly, 3 different components, with intersections that are closures of height 2 nilpotent orbits of the flavor symmetry. This is in contrast to previous studies in which, except for one case ($N_c=2, N_f=2$), the parameters were restricted to cases where the Higgs branch has only one component. The new feature is obtained thanks to a concept in tropical geometry called stable intersection, which allows for a computation of the Higgs branch in almost all the cases that were previously unknown for this three parameter family, apart from a small number of exceptional theories with low rank gauge group. A crucial feature in the construction of the Higgs branch is the notion of dressed monopole operators.
Introduction and Summary
Brane webs of five branes in string theory are well known to capture the dynamics of five dimensional gauge theories and their UV fixed points [1,2]. Many of the studies which followed were focused on the Coulomb branch of the 5d theories, and this paper is devoted to studying H ∞ [3,4], the Higgs branch (or branches) which arise as the gauge coupling is tuned to infinity. As is usual in brane systems of the type of [5], the most useful way of studying the Higgs branch is to introduce (p, q) seven branes as in [6], with the possibility of ending multiple 5-branes on a single 7-brane as in [7]. With the seven branes attached at the ends of the five branes, the Higgs branch H ∞ is realized as the gauge coupling is tuned to infinity and the five brane webs are maximally divided into sub webs that move freely between seven branes, thus providing the Higgs branch moduli, one (quaternionic) modulus per such sub web. While such a picture, already identified in [2], correctly captures the dimension of the Higgs branch, it has not until now been sufficient to determine the Higgs branch in full detail. This paper is devoted to fixing this problem and to correctly identifying the Higgs branch H ∞ from the brane realization.
Higgs branches, being hyperKähler, are typically constructed by providing combinatorial data, as encoded in a quiver gauge theory [8]. There is a corresponding Lagrangian, and for a hyperKähler quotient one uses the F and D term equations to construct the Higgs branch as the space of gauge invariant solutions of the vacuum equations. If in addition one sets the FI terms to zero, the Higgs branch becomes a hyperKähler cone, or a symplectic singularity [9]. The hyperKähler quotient is the traditional way of computing Higgs branches at finite coupling in any dimension, and in particular in five dimensions. In three dimensions, there is an alternative way of constructing a symplectic singularity by taking the same combinatorial data in the form of a quiver (more precisely a graph) and constructing the Coulomb branch as the space of dressed monopole operators, as in [10] (this notion has been further developed throughout the mathematical literature [11][12][13]). Hence we are faced with two ways of constructing symplectic singularities, and one wonders which way is more suitable for the purpose of solving the problem of finding H ∞ .
Incidentally, there are other ways of constructing symplectic singularities like the space of dressed instanton operators as described in [3,4]. This is a very interesting direction of research, but will not be pursued in the present paper.
It should be stressed that the construction of the moduli space as a space of dressed monopole operators is not restricted to 3d. All the features which are required for such a construction exist in the higher dimensions 4, 5, and 6 as well. The monopole operators are still localized in 3 dimensions, and contribute to the chiral ring just as they do in 3 dimensions. The topological symmetry is still conserved by the Bianchi identity, but due to boundary conditions it remains associated with a 1-form current. This is consistent with [14], which finds that there are no higher form currents in 5 dimensional SCFTs. The combinatorial data is hence not an exclusive feature of 3d, but rather of any dimension greater than or equal to 3: it is a feature of a co-dimension 3 object. This is further supported by the brane picture of type [5], in which a Higgs branch modulus is characterized by a Dp brane in between two D(p + 2) branes, where the boundary conditions remove a vector multiplet but leave a compact scalar which, together with one of the 3 real scalars, exponentiates to form monopole operators in the construction of the moduli space.
As a result, it makes more sense to call the construction of such a moduli space a space of dressed monopole operators rather than a 3d Coulomb branch, since the former name is more general and does not commit to a particular dimension -it can be in 3, 4, 5, or 6 dimensions. Furthermore, the latter name commits to 3d and may lead to confusion as coming from some sort of compactification and 3d mirror symmetry [15]. With this comment in mind, below we use the names 3d Coulomb branch or space of dressed monopole operators interchangeably.
To proceed, we recall that the Higgs branch is given by sub webs of five branes which are ending on seven branes, and notice that five branes ending on seven branes carry magnetic charges and are naturally associated with magnetic monopole solutions. This property favors the construction of the Higgs branch at infinite coupling as the space of dressed monopole operators and we proceed by taking this direction. One first needs the combinatorial data in the form of a quiver. Once this is given, one can use a host of techniques which are developed in papers that follow [10] and are still under intensive study.
To be more specific, we now introduce the class of theories which are discussed in this paper. The class is parametrized by three integer numbers $N_c$, $N_f$, $k$ and goes under the name of 5d SQCD. The gauge group is $SU(N_c)$ with CS level $k$ and $N_f$ flavors of fundamental matter. Here $N_c > 1$ for a non Abelian gauge theory, $N_f \geq 0$, and the CS level $k$ is any integer or half-integer satisfying $k - \frac{N_f}{2} \in \mathbb{Z}$; it turns out that the resulting Higgs branches depend only on its absolute value. For generic $N_c$, the brane webs put an additional restriction that $k \leq N_c - \frac{N_f}{2} + 2$ [16]. For certain small values of $N_c$, other values of $k$ may lead to 5d fixed points [17,18]. Among such cases, we also study the case $k = N_c - \frac{N_f}{2} + 3$ with $N_c = 3$ in this paper. As for the remaining exceptional cases, the corresponding 5-brane web diagrams are known to include an orientifold 5-plane in some cases [19], which makes the analysis of the Higgs branch doable but more involved, while the corresponding 5-brane webs are not even known for other cases. We do not study such cases in this paper. The theory has one gauge coupling, which is set to infinity at the UV fixed point, and adjoint valued real masses that transform under the flavor symmetry. For simplicity all these masses are set to 0. The classical flavor symmetry is $SU(N_f) \times U(1)_B \times U(1)_I$, where $U(1)_B$ is the baryonic symmetry which acts on the Higgs branch for $N_f \geq N_c$ and $U(1)_I$ is the instanton symmetry. The flavor symmetry at infinite coupling for $N_c \geq 3$ is summarized in table 1 [20][21][22][23].
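For illustration, the quantization condition and the generic brane-web bound just quoted can be combined to list the allowed non-negative CS levels for given $N_c$ and $N_f$; the following is a sketch of our own and ignores the additional small-$N_c$ exceptions such as $k = N_c - \frac{N_f}{2} + 3$ mentioned above.

```python
from fractions import Fraction

def allowed_cs_levels(n_c, n_f):
    """Non-negative Chern-Simons levels k for SU(n_c) SQCD with n_f flavors.

    k - n_f/2 must be an integer, and the generic brane-web construction
    requires k <= n_c - n_f/2 + 2 (both conditions quoted in the text);
    only |k| matters for the Higgs branch, so negative k are omitted.
    """
    k_max = Fraction(n_c) - Fraction(n_f, 2) + 2
    k = Fraction(n_f % 2, 2)      # 0 for even n_f, 1/2 for odd n_f
    levels = []
    while k <= k_max:
        levels.append(k)
        k += 1
    return levels

# Example: the SU(3), n_f = 1 theories discussed later in the introduction.
print([str(k) for k in allowed_cs_levels(3, 1)])   # ['1/2', '3/2', '5/2', '7/2', '9/2']
```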
We are led to look for combinatorial data in the form of a graph with nodes and edges. Such an approach is taken in [4] and successfully describes H ∞ for low enough values of k and high enough values of N f . The answer is given in the form of exceptional sequences and is derived by applying two methods, one of which looks at maximal sub algebras of exceptional algebras and generalizes to higher values of N c by using the symmetry properties of the associated quivers. The other method, used for some of the cases, relies on compactification to three dimensions and mirror symmetry, but is restricted to cases where the process is understood. All the results of [4] are included below, with a somewhat different presentation, since they constitute a subset of the full set of results of the present paper.
A striking feature of the solutions given in [4] is that they are all given by a single graph, and the Higgs branch at infinite coupling admits a single cone structure. There is, however, a phenomenon which already shows up at finite coupling [24], where for some gauge theories the Higgs branch is a union of two cones. Henceforth, a cone of the Higgs branch will also be called a component of the Higgs branch. It should be stressed that for lower supersymmetry this phenomenon is rather frequent, but for 8 supercharges it is a rather unstudied feature of the Higgs branch, and relatively rare.
The techniques in [4] form a beautiful set of new results, but have shortcomings for two reasons. The first is that they cover special regions of the three parameter family $N_c$, $N_f$, $k$ for which the Higgs branch H ∞ is a single cone, and do not cover cases in which the Higgs branch is a union of two or more cones. The second is that they do not cover cases with high values of k and low values of N f . This is due to a new feature of quivers with edge multiplicities that is not taken into account in [4]; in fact, all edges in the quivers of [4] have a multiplicity of either 1 or 0. Both of these issues are resolved in the present paper, and we give a complete description of H ∞ for all values of $k \leq N_c - \frac{N_f}{2} + 2$. As mentioned before, the results of this paper are fully consistent with those of [4] and provide a complete generalization of them, as well as another (third) consistency check for the validity of the whole approach of representing H ∞ as the space of dressed monopole operators. [Table 1 lists, for each parameter region, the global symmetry at the UV fixed point and the global symmetries on the components.]
The simplest case where H ∞ is a union of 2 cones is $N_c = 2$, $N_f = 2$ [25,26], which has a brane realization that is discussed in subsection 2.1. The corresponding five brane web has two inequivalent ways of dividing into sub webs. This physically translates into a Higgs branch with two components, each of which is a cone, and both intersect at the origin. Even more surprising, and certainly new in the physics of 5d gauge theories and their UV fixed points, is the behavior of the generic case in this 3 parameter family. There are 3 different components of the Higgs branch, with non trivial intersections that are discussed in detail below. This again corresponds to having 3 inequivalent ways of sub dividing the brane web.
The appearance of sub divisions of brane webs leads to a new question in five dimensional physics, and in brane physics. Given a web that characterizes a five dimensional fixed point, we ask in how many inequivalent ways it can be sub divided. The answer to this question gives the number of components that the Higgs branch has at the fixed point. There is a corresponding toric diagram, and we ask in what ways such a toric diagram can be obtained as Minkowski sums of smaller toric diagrams. This is an interesting question which is to be addressed in future studies. The notion of Minkowski sums, taken from [27,28], is applied in [29] to the study of supersymmetry breaking in branes at singularities, with particular emphasis on toric diagrams for theories with N f = 0, the so called Y p,q in that paper. It should be stressed that the methods of [27,28] are restricted to cases where 7-branes at the ends of 5-branes are not needed for generating more Higgs branch moduli. Hence a generalization to include the effects of 7-branes is in order.
It is crucial to point out that parallel external legs in the brane web do not present any particular complication in the approach of this paper. This is due to having a 7-brane at the end of each external 5-brane. Any previous claims that parallel legs lead to some 6d states which may or may not couple to the system are avoided in this work by having finite 5-brane webs between 7-branes; none of them extend to infinity in the plane of the brane web, hence they lead to 5d fixed points.
Multiple edges in a quiver are not common in gauge theories with this amount of supersymmetry. In addition, they do not show up in perturbative open strings, again with this amount of supersymmetry. Nevertheless, such quivers do show up in various non perturbative cases. In [30] Philip Boalch discusses the notion of complete graphs in the context of studies of Painlevé equations. This is applied to Higgs branches of Argyres Douglas theories in [31] and used in [32] for an evaluation of the Higgs branch as an algebraic variety, using the construction of the space of dressed monopole operators [10]. It should be pointed out that while complete graphs have all edges with the same multiplicity, in the present paper only one or two edges have multiplicity greater than 1, while the other edges are the usual edges with multiplicity 1.
So how does one get an edge with multiplicity in the quiver? This follows from the association of a quiver node to a brane web, where the rank of the node is equal to the number of copies of such a brane web. Given two different such brane webs, they have an intersection number, which is called the stable intersection in the tropical geometry literature [33]. As a reminder, the physical object which we call a brane web is the mathematical object which is called a tropical curve. To explain this point further, we need to recall the usual way an edge shows up in perturbative open strings. Given two D branes, represented by nodes in the quiver, one can stretch an F1 between them, and this gives rise to a bi fundamental hyper multiplet that is represented by an edge in the quiver. By using standard S and T dualities, we find that a D3 brane stretched between two five brane webs, each represented by a node in the quiver, leads to such an edge connecting the two nodes. There is, however, a crucial difference. Two five brane webs can have multiple intersection points, and hence we can have one D3 brane stretched per such point, leading to one edge of the quiver per intersection point. The resulting edge of the quiver has a multiplicity which is equal to the intersection number between the two five brane webs. With this rule in hand, we are now able to compute all of the missing cases that were not possible with the techniques in [4]. Furthermore, we are able to check that all the cases computed in [4] are consistent with having multiplicity 1 for all edges in the quiver, and consist of one component on the Higgs branch. Thus we have a third way of deriving the results of [4]. The computation is demonstrated below with a collection of explicit examples.
Classical Higgs branches are known to be trivial for $N_f < 2$, but as one tunes the gauge coupling to infinity, new flat directions show up and the Higgs branch emerges from these new moduli. A simple example of such a phenomenon is 5d SYM with gauge group G and CS level k = 0, as discussed in [3]. There are no matter fields, hence no classical moduli space, but there are new flat directions due to the appearance of massless instantons at infinite coupling. The resulting moduli space is $\mathbb{C}^2/\mathbb{Z}_h$, where h is the dual Coxeter number of the gauge group G. Restricting G to be $SU(N_c)$ with level k = 0, we find that there is a new Higgs branch at infinite coupling, contrary to the classical intuition which associates Higgs branches to matter fields (alternatively, one should start thinking about instanton operators as generating new matter degrees of freedom, which differ from free hypermultiplets). The results of this paper show that there is another $N_f = 0$ theory with a non trivial H ∞ : if we set $k = N_c$ we find a new moduli space of the form $\mathbb{C}^2/\mathbb{Z}_2$ for any $N_c$. Similar observations arise for $N_f = 1$. For example, there is an interesting pattern of Higgs branches for SU(3) with 1 flavor: for CS levels $k = \frac{1}{2}, \frac{3}{2}, \frac{5}{2}, \frac{7}{2}, \frac{9}{2}$ we find that H ∞ is trivial for $k = \frac{3}{2}$, is $\mathbb{C}^2/\mathbb{Z}_3$ for $k = \frac{1}{2}, \frac{9}{2}$, and is $\mathbb{C}^2/\mathbb{Z}_2$ for $k = \frac{5}{2}, \frac{7}{2}$ -- a very rich pattern of H ∞ as the CS level is varied.
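As a small bookkeeping aid (a sketch of our own, using the standard dual Coxeter numbers of the simple Lie algebras), the orbifold $\mathbb{C}^2/\mathbb{Z}_h$ quoted above can be tabulated as follows.

```python
def dual_coxeter_number(series, rank):
    """Dual Coxeter number of a simple Lie algebra (standard textbook values)."""
    table = {
        "A": lambda n: n + 1,      # SU(n+1)
        "B": lambda n: 2 * n - 1,  # SO(2n+1)
        "C": lambda n: n + 1,      # Sp(n)
        "D": lambda n: 2 * n - 2,  # SO(2n)
        "E": {6: 12, 7: 18, 8: 30}.get,
        "F": {4: 9}.get,
        "G": {2: 4}.get,
    }
    return table[series](rank)

def sym_higgs_branch(n_c):
    """H_infinity of 5d pure SU(n_c) SYM at CS level k = 0, as quoted above."""
    h = dual_coxeter_number("A", n_c - 1)   # h equals n_c for SU(n_c)
    return f"C^2 / Z_{h}"

print(sym_higgs_branch(5))   # C^2 / Z_5
```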
The paper is organized as follows. In section 2 we start our analysis by re-deriving some known results from the brane webs, thus establishing tools which allow the computation of the combinatorial (quiver) data for cases that were not known so far. In section 3 we establish the general conjecture that explains how to obtain the quiver data from the 5-brane webs. In section 4 we explore new notions derived from the application of the conjecture to 5d brane webs whose corresponding H ∞ was not previously known: the union of three cones and the intersection of several cones. In section 5 we provide a full classification of the Higgs branch at infinite coupling H ∞ of the three parameter family of 5d N = 1 SQCD theories with gauge group SU (N c ), N f fundamental flavors, and CS level k. These results were not known before and have been obtained by applying the conjecture defined in section 3. Section 6 contains some concluding statements. Appendix A includes an example that illustrates a detailed computation.
$E_3$ - Union of Two Cones
Let us discuss a well known example: $N_c = 2$, $N_f = 2$. The level is not crucial here, as the CS density is identically 0 for an SU(2) gauge theory. The Higgs branch at infinite coupling is the union [25,26]
$$\mathcal{H}_\infty = \min A_2 \cup \min A_1 . \qquad (2.1)$$
[Figure 2 caption (cf. the toric diagram of figure 1): Five brane webs corresponding to the 5d SQCD theory with SU(2) gauge group and $N_f = 2$ before and after taking the gauge coupling g to infinity and all the masses $m_i$ as well as the VEV a of the adjoint scalar field to zero. The horizontal lines represent D5-branes, the vertical lines represent NS5-branes and the diagonal lines represent (1, −1) five branes. Each circle represents a seven brane of the same type as the five brane that ends on it.]
Here $\min A_k$ denotes the closure of the minimal nilpotent orbit of $\mathfrak{sl}(k+1,\mathbb{C})$ [Footnote 1: The closure of the minimal nilpotent orbit of $\mathfrak{sl}(k+1,\mathbb{C})$ is normally denoted as $\bar{\mathcal{O}}_{(2,1^{k-1})}$ in the mathematical literature [34], or also as $a_k$ in [35]. We started using the notation $\min A_k$ in [36,37], since it can be extended to exceptional groups which do not have partition data like the partition $(2,1^{k-1})$ in $\mathcal{O}_{(2,1^{k-1})}$, and it can also be extended to non minimal orbits of small dimensions, i.e. the closure of the next to minimal orbit is denoted as $\mathrm{n.min}\,A_k$.], and both cones $\min A_2$ and $\min A_1$ intersect at the origin. Physically, this moduli space is the moduli space of one $E_3$ instanton. The single instanton can either be an SU(3) instanton or an SU(2) instanton, but not both. Hence this leads to the union structure 2.1. For each of these cones there is a different 3d N = 4 quiver, for which the cone is the Coulomb branch. Note that the cone $\min A_2$ (resp. $\min A_1$) is isomorphic to the reduced moduli space of one $A_2$ (resp. $A_1$) instanton on $\mathbb{C}^2$. Hence, the 3d quivers are just the corresponding affine Dynkin diagrams [15]:
$$\min A_2 = \mathcal{C}^{3d}\big(\text{affine } A_2 \text{ quiver: three } U(1) \text{ nodes forming a complete graph with edge multiplicity } 1\big), \qquad (2.2)$$
$$\min A_1 = \mathcal{C}^{3d}\big(\text{affine } A_1 \text{ quiver: two } U(1) \text{ nodes joined by an edge of multiplicity } 2\big), \qquad (2.3)$$
where $\mathcal{C}^{3d}(\;)$ denotes the 3d Coulomb branch. One can see that both cones are recognizable as different phases of the brane diagram. In order to obtain the brane system, recall that the toric diagram [2] can be obtained, and it is represented in figure 1. Figure 2 represents the five brane web corresponding to this theory (dual to the toric diagram). On the left of figure 2 the gauge coupling is finite and the masses and the VEV of the adjoint scalar field are different from zero. On the right of the same figure the gauge coupling is taken to infinity and all the masses as well as the VEV of the adjoint scalar field are set to zero (i.e. at the origin of the Coulomb branch). Before taking this limit, there is a single web that can move along the 7-branes, and hence the Higgs branch is trivial, remembering to factor out overall position moduli. After taking this limit, there are new possibilities of breaking the web into sub webs. In particular, there are two possibilities, represented in figure 3, where different colors correspond to different sub webs. In phase (a) there are three different segments that move along the directions perpendicular to the paper, spanned by the seven branes. In phase (b) there are two different sub webs. The transition from (a) to (b) can only take place when all sub webs realign and combine into a single web. This web corresponds to the origin of the cones, indicating that the intersection between the two cones is a single point, the origin.
The new perspective that was missing until the present work is that the quivers in equations 2.2 and 2.3 can be read directly from the brane webs. The goal of this paper is to establish the tools that allow such a reading and to put them to use in the analysis of the three parameter family of 5d N = 1 theories with gauge group SU (N c ), number of fundamental flavors N f and Chern Simons level k.
For the current example one can deduce the following: 1. Each separate brane sub web corresponds to a different gauge node with group U (1) in the quiver.
2. The links between the nodes in the quiver (corresponding to hypermultiplets of the 3d N = 4 theory) are given by the intersection numbers between the branes. Let us discuss the second point in more detail. For phase (a), the intersection number I between two 5-branes $(p_1, q_1)$ and $(p_2, q_2)$ has already been defined as the absolute value of the determinant
$$I = \left| \det \begin{pmatrix} p_1 & q_1 \\ p_2 & q_2 \end{pmatrix} \right| = |p_1 q_2 - p_2 q_1| . \qquad (2.4)$$
Hence, the intersection between any pair of 5-branes from the set (1,0), (0,1) and (1, −1) is always of value 1. Therefore, the 3d quiver corresponding to phase (a) is a complete graph [30] with three nodes, and edge multiplicity 1, as depicted in equation 2.2. Phase (b) is more complicated because one needs to compute the intersection between the two sub webs. This can be done by introducing an idea from tropical geometry [33]. In tropical geometry each of the brane sub webs can be seen as a tropical curve. Then, their stable intersection can be defined as in [33]. Let us review the idea of stable intersection by computing it in the case of the sub webs of phase (b). This is represented in figure 4. The left diagram in figure 4 depicts the two curves. The diagram in the center represents the same curves after they have been moved a small distance apart from each other. Now, the points at which the curves intersect can be treated as intersections of two different 5-branes of the form $(p_1, q_1)$ and $(p_2, q_2)$, and can be computed using the determinant in equation 2.4. The stable intersection is defined as the sum over all such intersection numbers. Note that a different deformation of the initial brane system, where the sub webs are moved apart in a different direction, always results in the same value for the stable intersection. Alternatively, the intersection numbers can be computed from the dual toric diagram. The dual toric diagram of the displaced sub webs is displayed at the right of figure 4. This toric diagram has four different polygons: two squares and two triangles. The stable intersection is given by the total area of all the polygons that have edges of both colors. In this case, the triangles do not contribute, since their edges are of a single color. The two squares contribute, and the sum of their areas is 2.
Hence, the stable intersection between the two sub webs in phase (b) has the value 2. This value corresponds to the number of edges (or hypermultiplets) between the corresponding gauge groups of the quiver in equation 2.3. Equivalently, one can think of one D3 brane stretched between the sub webs per intersection point. [Figure 6 caption: Five brane webs corresponding to the 5d SQCD theory with SU(2) gauge group and $N_f = 3$ before and after taking the gauge coupling g to infinity and all the masses $m_i$ as well as the VEV a of the adjoint scalar field to zero. Note that coincident 5-branes are depicted slightly apart in the right-hand diagram. This is done to make the diagram easier to read, but the branes at the center of the diagram that look parallel to each other should be considered as fully coincident.]
In this particular example, we have found a way to read off the 3d quivers that describe (via equations 2.1, 2.2 and 2.3) the H ∞ of the 5d theory. In the next sections, let us explore some more examples where H ∞ is already known. After that, we provide the final answer on reading off the 3d quiver for the whole 3 parameter family of 5d SQCD theories, an answer which is actually more general and applies to any five brane web.
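The determinant rule of equation 2.4 is straightforward to automate. The following minimal helper (our own illustration, not code from the paper) reproduces the pairwise intersection numbers used in phase (a); its repeated application to slightly displaced sub webs is what the stable intersection of phase (b) amounts to.

```python
def intersection_number(brane_1, brane_2):
    """|p1*q2 - p2*q1| for 5-branes of charge (p1, q1) and (p2, q2), cf. equation 2.4."""
    (p1, q1), (p2, q2) = brane_1, brane_2
    return abs(p1 * q2 - p2 * q1)

# Phase (a) of the E3 example: the three 5-branes (1,0), (0,1) and (1,-1)
# pairwise intersect with multiplicity 1, giving the complete graph on three
# U(1) nodes of equation 2.2.
branes = [(1, 0), (0, 1), (1, -1)]
for i in range(len(branes)):
    for j in range(i + 1, len(branes)):
        print(branes[i], branes[j], "->", intersection_number(branes[i], branes[j]))

# For phase (b) one displaces the two sub webs slightly and sums the intersection
# numbers of the crossing 5-branes; the two crossings of figure 4 each contribute 1,
# reproducing the stable intersection of 2 quoted in the text.
```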
$E_4$ - Necklace Quiver
Let us now turn to $N_c = 2$, $N_f = 3$. The corresponding toric diagram is represented in figure 5. The brane system at finite and infinite coupling is depicted in figure 6. This case is different from the previous example, in the sense that there is a unique way of maximally dividing the system into sub webs at the infinite coupling limit. Correspondingly, the Higgs branch has one component. The subdivision is depicted in figure 7. It is known [25] that the Higgs branch of the theory at infinite coupling is the closure of the minimal nilpotent orbit of $\mathfrak{sl}(5,\mathbb{C})$,
$$\mathcal{H}_\infty = \min A_4 . \qquad (2.5)$$
This space is isomorphic to the reduced moduli space of one $A_4$ instanton on $\mathbb{C}^2$ and can also be written as the Coulomb branch of a 3d N = 4 quiver gauge theory, where the quiver is the corresponding affine Dynkin diagram: a necklace of five U(1) nodes, each connected to its two neighbors by a single edge (equation 2.6). In the sub division of branes of figure 7 there are 5 different sub webs that can move between seven branes, generating the Higgs branch at infinite coupling (represented with different colors: red, blue, green, orange and purple). Once again, each of them corresponds to a different gauge node in equation 2.6, each with multiplicity 1 as there is one copy of each sub web. Equation 2.6 indicates that each of the sub webs needs to be connected to two and only two other sub webs. Let us see how this works. The orange segment can only connect with the red web and the blue segment, and the purple segment can only connect with the red web and the green segment. This leaves only one possibility: the green segment and the blue segment need to be connected by a single link, and the red web needs to be disconnected from the blue segment and the green segment. The stable intersection between any pair (red, blue), (red, green) and (blue, green) is of value 1; see the corresponding dual toric diagrams in table 2. The new feature introduced in this example is the possibility of different sub webs ending on the same 7-brane. We see that if two sub webs end on the same 7-brane from opposite sides, we consider the corresponding gauge nodes connected by a link (i.e. the connection between orange and blue, or between orange and red). However, if two sub webs end on the same 7-brane on the same side, this contributes −1 to the number of links between the corresponding gauge nodes. This effect removes the link between red and blue (similarly, red and green) that would arise due to their stable intersection, leaving their corresponding gauge group nodes disconnected. One can summarize these observations in the following:
Summary 1 (Quiver edge multiplicity - Stable Intersection) The number of edges between two gauge nodes corresponding to two different brane sub webs is equal to the stable intersection between the sub webs plus the contribution from the 7-branes. The contribution from a 7-brane is positive if the sub webs end on it from opposite sides, and negative otherwise.
With this rule, the 3d quiver read from the diagram in figure 7 is precisely this necklace of five U(1) nodes (equation 2.7). Hence, one recovers the quiver that describes H ∞ via equations 2.5 and 2.6. It is interesting to see that even though the brane system does not resemble a necklace, the intersection numbers between brane webs do form a necklace quiver. This quiver has the feature that all edges have multiplicity 1, and the brane system has only one way of being maximally divided into sub webs; hence it is part of the set of quivers which were computed in [4], in full agreement.
$E_5$ - Node Multiplicity
Let us study the SU($N_c$) 5d SQCD theory with parameter $N_c = 2$ and number of flavors $N_f = 4$. The toric diagram is depicted in figure 8 and the brane system at finite and infinite coupling is depicted in figure 9. [Figure 9 caption (arrow label: $1/g^2, m_i, a \to 0$): Five brane webs corresponding to the 5d SQCD theory with SU(2) gauge group and $N_f = 4$ before and after taking the gauge coupling g to infinity and all the masses $m_i$ as well as the VEV a of the adjoint scalar field to zero.] The Higgs branch at infinite coupling is the reduced moduli space of one $D_5$ instanton on $\mathbb{C}^2$. This space is a hyperKähler cone, isomorphic to the closure of the minimal nilpotent orbit of $\mathfrak{so}(10,\mathbb{C})$ (equation 2.8). This space can be found as the Coulomb branch of a 3d N = 4 quiver which is the affine Dynkin diagram of $D_5$ [15] (equation 2.9). The new feature of this example, which does not occur in the previous two examples, is the appearance of nodes of rank higher than 1 in the 3d quiver of 2.9. This is easy to relate to the brane web: a number of n identical sub webs corresponds to a single node of rank n. Figure 10 depicts the subdivision of the brane system into the maximal number of sub webs. There are two segments in blue that are identical and correspond to one of the nodes of rank 2, and two segments in green that are identical and correspond to the other rank 2 node. In order to establish the links in the 3d quiver, consider a single copy of each sub web. For example, a single blue segment has stable intersection 1 with a single green segment, therefore there is a single link with multiplicity one between them (there are no extra contributions to the number of links since they do not share any common 7-branes). Similarly, a single blue segment ends on the same 7-brane as the orange segment, and they do so on opposite sides, hence there is a single link between them (just as between the blue and orange segments in figure 7). Therefore, the quiver read from the brane system in figure 10 (equation 2.10) indeed describes the correct Higgs branch, according to equations 2.8 and 2.9.
$E_6$ - 7-Brane Contributions without Stable Intersection
The next example considers the case of $N_c = 2$, $N_f = 5$. In this case there is no need to compute stable intersections between sub webs, since all the contributions to the number of links between two gauge nodes in the 3d quiver are given by the different sub webs ending on shared 7-branes. The toric diagram is depicted in figure 11 and the brane system is represented in figure 12. The Higgs branch at infinite coupling is isomorphic to the reduced moduli space of one $E_6$ instanton on $\mathbb{C}^2$ (alternatively, it can also be described as the closure of the minimal nilpotent orbit of $\mathfrak{e}_6$, equation 2.11). Hence H ∞ has a single component, as can be seen from the fact that there is a unique maximal subdivision of the brane system, figure 13. This space can be found as the Coulomb branch of a 3d N = 4 quiver which is the affine Dynkin diagram of $E_6$ [15] (equation 2.12). According to the rules developed in the previous examples, the quiver can be read directly from the brane configuration in figure 13 (equation 2.13), and this is precisely the expected result. Note that once more, n identical copies of the same sub web correspond to a single gauge node of rank n. In this case there is no need to compute stable intersections, since none of the different sub webs intersect. The number of links between the different gauge nodes of equation 2.13 is determined solely by different sub webs ending on the same 7-brane from opposite sides.
Super Yang-Mills - Edge Multiplicity
Another well known result [3] is the Higgs branch at infinite coupling of Super Yang-Mills with gauge group $SU(N_c)$, no flavors, $N_f = 0$, and CS level k = 0. The brane system is depicted in figure 14. The Higgs branch at infinite coupling is [3]
$$\mathcal{H}_\infty = \mathbb{C}^2/\mathbb{Z}_{N_c} . \qquad (2.14)$$
This can be written as the Coulomb branch of a 3d quiver with two gauge nodes of rank 1 and $N_c$ hypermultiplets between them; let us write it as the complete graph with two U(1) nodes and edge multiplicity $N_c$ (equation 2.15). The same quiver is read from the brane picture in figure 15, since there are only two segments, corresponding to the two different nodes, and the stable intersection between them is just their intersection number
$$I = \left| \det \begin{pmatrix} 0 & 1 \\ -N_c & 1 \end{pmatrix} \right| = N_c . \qquad (2.16)$$
Hence, the 3d quiver read directly from the branes is the same complete graph (equation 2.17). Equipped with the examples above, we are now ready to generalize to the main result of this paper for the 3 parameter family of SQCD theories.
Conjecture
In this section we present the main conjecture, which contains all the information on how to read the combinatorial data for the Higgs branch of the 5d theory from the brane web. Note that this technique is not restricted to infinite coupling, or to 5d theories where the gauge group is a single factor. Furthermore, it can also be used to obtain subspaces of the Higgs branch, if the subdivision of the brane web is not maximal. In section 5 we use this conjecture to obtain the Higgs branches at infinite coupling for the 3 parameter family of 5d N = 1 SQCD theories with gauge group SU (N c ), number of flavors N f and CS level k.
Before stating the conjecture, let us define three quantities that can be computed for any pair of brane sub webs in a given five brane web. The first is the stable intersection (denoted SI in this section), computed as explained in section 2 and in [33] by slightly displacing the sub webs with respect to each other and adding the areas of the polygons with two colors in the dual toric diagram (see the example in figure 4). The next quantities are the contributions to the number of hypermultiplets from 5-branes ending on the same 7-brane. Let the 7-branes shared by two different brane sub webs be denoted as A i (i = 1, 2, 3, ...). For each A i , one can compute the following two quantities: • X i = the number of combinations of two 5-branes from the different brane sub webs which are attached to A i on opposite sides.
• Y i = the number of combinations of two 5-branes from the different brane sub webs which are attached to A i on the same side.
Please see appendix A for an explicit computation of the quantities SI, X i and Y i in a given example.
Conjecture 1 Given a five brane web divided into sub webs that can move along the directions spanned by the 7-branes placed at the end of each (p, q) brane, the moduli space generated by this motion is given as the moduli space of dressed monopole operators of a quiver. The quiver can be obtained in the following way. Each set of m identical sub webs corresponds to a different gauge node with group U(m) in the quiver. Given a pair of gauge nodes in the quiver, the number of edges E between them is determined by selecting two sub webs, one corresponding to each node, and computing for them their stable intersection SI, as well as the quantities X i and Y i (defined above) for all the 7-branes A i shared by the different sub webs. The number of edges E is given by
$$E = SI + \sum_i X_i - \sum_i Y_i . \qquad (3.1)$$
In order to connect this conjecture to the previous examples, note that the contribution X i gives rise to the edges between the nodes in the quiver 2.13. On the other hand, the contribution Y i makes sure that the red node in quiver 2.7 remains disconnected from the blue node and the green node.
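A minimal sketch of the bookkeeping in Conjecture 1, assuming the reconstruction of equation 3.1 written above (our own illustrative code, with the $E_4$ example of the previous section used as a check; the SI values quoted in the comments follow our reading of figure 7):

```python
def edge_multiplicity(stable_intersection, x_contributions=(), y_contributions=()):
    """Number of quiver edges between two sub webs (equation 3.1):
    E = SI + sum_i X_i - sum_i Y_i, where X_i (Y_i) counts pairs of 5-branes
    from the two sub webs ending on the shared 7-brane A_i from opposite
    sides (from the same side)."""
    return stable_intersection + sum(x_contributions) - sum(y_contributions)

# E4 necklace example (figure 7): the orange and blue segments do not intersect
# but share a 7-brane from opposite sides -> one edge.
print(edge_multiplicity(0, x_contributions=[1]))   # 1

# The red web and the blue segment have stable intersection 1 but also share a
# 7-brane from the same side, which removes the edge -> no link.
print(edge_multiplicity(1, y_contributions=[1]))   # 0
```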
Comments
Here, we make several comments on this conjecture.
First, we discuss how the combination in equation 3.1 can be interpreted as a natural generalization of the stable intersection defined in the tropical geometry literature. In this paper, we have generalized the tropical curve by introducing 7-branes, so that the 5-branes terminate on the 7-branes instead of extending to infinity. In this case, we can deform the sub web in such a way that the stable intersection changes discontinuously when a 7-brane is moved across a 5-brane. As an example, we consider figure 15. As discussed before, the stable intersection of the (0,1) 5-brane and the (−N c , 1) 5-brane is given by N c . However, when we move one of the (0,1) 7-branes as depicted in figure 16 until it goes across the original (−N c , 1) 5-brane, we find that the naively computed stable intersection changes from SI = N c to SI = 0, in the sense that the two 5-brane sub webs no longer intersect if we ignore the contribution from the (0,1) 7-branes. Since this is not the desired property for a "stable" intersection, we need a generalized quantity which remains invariant even after the Hanany-Witten transition. The web diagram after the Hanany-Witten transition is given in figure 17. Part of the original (−N c , 1) 5-brane is changed to a (N c , N c − 1) 5-brane by going across the monodromy cut created by the (0, 1) 7-brane. The important point is that new (0,1) 5-branes are created by the Hanany-Witten transition. In general, when a (p, q) 7-brane goes across a (p ′ , q ′ ) 5-brane, the number of (p, q) 5-branes created by the Hanany-Witten effect is given by the intersection number of (p, q) and (p ′ , q ′ ) [40], which, in our case, is N c . Due to this effect, the contribution from the 7-branes is as given in figure 17, which makes the combination in equation 3.1 invariant under the Hanany-Witten transition. The necessity of the term Y i in equation 3.1 can be understood analogously. If n (≤ N c ) (0,1) 5-branes are attached to the (0, 1) 7-brane before the Hanany-Witten transition, these will disappear and N c − n new (0, 1) 5-branes will be created after the Hanany-Witten transition. Again, the combination in equation 3.1 is invariant in this case as well. Therefore, we claim that this combination is the natural generalization of the stable intersection number, in the sense that it is invariant under deformations of the web diagram once the Hanany-Witten transition is included. We expect this property to hold for any 5-brane web diagram.
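As a toy consistency check of this invariance (a sketch under our own reading of figures 15-17, in which the $N_c$ newly created (0,1) 5-branes end on the shared 7-brane on the side opposite to the other sub web):

```python
def edge_multiplicity(si, x=(), y=()):
    """E = SI + sum(X_i) - sum(Y_i), as in equation 3.1."""
    return si + sum(x) - sum(y)

n_c = 4  # illustrative value of N_c

# Before the Hanany-Witten transition (figure 15): the two sub webs of pure
# SU(N_c) SYM intersect N_c times and share no 7-brane.
edges_before = edge_multiplicity(si=n_c)

# After the transition (figure 17): the naive stable intersection vanishes, but
# the N_c created (0,1) 5-branes are assumed to end on the shared (0,1) 7-brane
# on the side opposite to the other sub web, contributing X = N_c.
edges_after = edge_multiplicity(si=0, x=[n_c])

assert edges_before == edges_after == n_c
print(edges_before, edges_after)  # 4 4
```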
Next, we discuss the physical interpretation of this stability. It is often claimed that the Higgs branch is "stable" in the sense that it does not receive any quantum corrections and is independent of any gauge coupling. However, this statement should be interpreted with care, because new Higgs branch directions can open up at infinite coupling. Since the generalized stable intersection in equation 3.1 does not change under continuous deformations of the brane webs once we fix the sub webs, it indicates that the quiver is stable under deformations of the external parameters such as masses and inverse gauge couplings. We expect that the stability of the Higgs branch can be correctly interpreted in terms of this stability.
New Notions
We proceed with examples which demonstrate new features that arise after utilizing Conjecture 1 in the analysis of 5d Higgs branches.
Union of Three Cones
Let us study the case of 5d N = 1 SQCD with gauge group SU(5), with $N_f = 6$ fundamental flavors and CS level k = 1. The corresponding toric diagram is depicted in figure 18. The brane web of the theory and the limit where the coupling is taken to infinity are depicted in figure 19. The Higgs branch at infinite coupling H ∞ was not known before the present paper. Now we are able to compute it as the moduli space of dressed monopole operators, by maximally dividing the brane system into sub webs and then applying Conjecture 1. We saw in the example of SU(2) with two flavors that there are two different ways of maximally subdividing the brane system (figure 3), and this implies that H ∞ is the union of two cones (it has two components). In the present case, there are three different ways of maximally subdividing the brane web, see figure 20. This means that the Higgs branch at infinite coupling is a union of three cones,
$$\mathcal{H}_\infty = C_1 \cup C_2 \cup C_3 . \qquad (4.1)$$
The three cones $C_1$, $C_2$ and $C_3$ are computed by utilizing Conjecture 1 on each different maximal subdivision, to obtain a particular quiver. These quivers are depicted in figure 20. In this way, Conjecture 1 can be utilized to derive new properties of 5d theories at infinite coupling that were not understood before. [Figure 20 caption: Different components of the Higgs branch at infinite coupling of 5d SQCD with gauge group SU(5), $N_f = 6$ flavors and CS level k = 1. None of the three web sub divisions can be a sub division of the other two. Note that the red and the blue sub webs in the rightmost web cannot be further subdivided due to the s-rule. The quiver obtained by applying Conjecture 1 is depicted underneath each phase. Note that in table 3 these three different phases receive the labels III, I and II from left to right.]
The Intersection of Several Cones
In fact, Conjecture 1 can also be used to specify the intersection between any pair of cones in equation 4.1. Given two sub divisions, for example the first and the second from the left in figure 20, find the maximal subdivision S such that both brane systems are subdivisions of S. S is depicted in figure 21. The intersection of both cones is then the moduli space of dressed monopole operators given by the quiver associated to S via Conjecture 1.
Hence, the intersection of these two cones is the space of dressed monopole operators of the quiver associated to S. Note that this space is the closure of the next to minimal nilpotent orbit of $\mathfrak{sl}(6,\mathbb{C})$, i.e. $\mathrm{n.min}\,A_5$. Therefore, we obtain the result that the intersection of these two components is $\mathrm{n.min}\,A_5$. We believe that this result nicely illustrates the power behind Conjecture 1. Similarly, the triple intersection of all three components, $C_1 \cap C_2 \cap C_3$, can also be identified. The maximal subdivision S ′ such that all brane systems in figure 20 are subdivisions of S ′ is depicted in figure 22. The corresponding quiver can be obtained via Conjecture 1, such that the triple intersection is given as the space of dressed monopole operators of that quiver (equation 4.8). [Figure 21 caption: Intersection of two cones. Figure 22 caption: Intersection of the three components depicted in figure 20.]
The space of dressed monopole operators in equation 4.8 can be identified with the reduced moduli space of one $A_5$ instanton on $\mathbb{C}^2$ [15], or equivalently with the closure of the minimal nilpotent orbit of $\mathfrak{sl}(6,\mathbb{C})$, $\min A_5$ (equation 4.9).
Computation of H ∞ for SQCD
In this section we present a classification of the Higgs branch at infinite coupling of the class of 5d N = 1 SQCD theories with gauge group SU (N c ), number of flavors N f and CS level k, represented by the usual 5d quiver consisting of an SU(N c ) gauge node with CS level k attached to N f fundamental flavors; the variable x, as defined in equation 5.8, makes a distinction between even and odd N f . For generic N c , the brane web construction of such theories imposes the restriction
$$k \leq N_c - \frac{N_f}{2} + 2 . \qquad (5.2)$$
We have applied the techniques developed in this paper to all theories that satisfy condition 5.2, and have found that the 3 parameter family of theories is divided into 4 different regions. These four regions were previously identified as regions which differ by the pattern of symmetry enhancement. It turns out that these regions are also consistent with the pattern of different Higgs branches computed in the present paper. In each of the regions the Higgs branch at infinite coupling contains several components, with non trivial intersections. Let us express the components and their intersections by providing their construction as spaces of dressed monopole operators, computed from quivers obtained by employing Conjecture 1 (alternatively, one can say that the different components are constructed as Coulomb branches of 3d N = 4 quivers, as was done in the examples of sections 2 and 4).
First Region
General case: k > 1/2. In this region there are degenerate cases for k = 0 and k = 1/2. Let us first consider the general case, where k > 1/2. The 5-brane web for this case is depicted in figure 23, with the corresponding dual toric diagram in figure 24. The length of each edge is written in figure 24, taking the distance between neighboring dots to be one. (Therefore, the number of dots included in the corresponding edge is one more than the written number.) The triangulation of this toric diagram is omitted since it does not affect the analysis of the Higgs branch. In the Higgs branch, the brane system in figure 23 decomposes into the sub webs depicted in figure 25. Here it is convenient to introduce a variable x, defined in equation 5.8, which makes a distinction between even and odd N f . There are three different patterns of sub dividing the original 5-brane web, corresponding to three different components that form the Higgs branch, which we denote I, II and III, respectively.
Each different pattern gives rise to a different quiver, obtained via Conjecture 1. These are depicted in table 3. The rightmost column of table 3 contains the quaternionic dimension of each component. The global symmetry of the 5d Higgs branch at the UV fixed point is as summarized in table 1. Only some factors of the global symmetry act non-trivially on each phase; this is also specified in a column of table 3.
Note that component I is present only in part of the parameter range. Component II is present in the regime where baryons typically exist; such a phase is known from classical physics, sometimes called the baryonic branch, containing both mesons and baryons. [Table 3 columns: phase; quiver; global symmetry; dimension. Caption: the three dots in the quivers indicate a chain of balanced gauge nodes (i.e. the number of colors that each node sees is twice its rank); for example, in phase I, the ranks of the gauge nodes in the main chain start with 1, then 2, and keep increasing.]
Intersections of the different components. When specifying different components of the moduli space, it is important to specify how these components intersect. This is crucial, for example, for the purpose of counting operators in the chiral ring, making sure that each operator is counted exactly once.
Exceptional cases: k ≤ 1/2. The brane system and the toric diagram for the exceptional cases do not change; they are still depicted by figures 23 and 24. Let us consider the first of the two degenerate cases. The global symmetry does not change. For this case, the sub web (IIIa) in figure 25 is decomposed into (Ia) and (Ib). Therefore, phase III is fully included in phase I, so there are only two distinct components in the Higgs branch, given in table 5.
In the second degenerate case, the global symmetry again does not change. For this case phase III is also fully included in phase I, so there are only two distinct components in the Higgs branch, given in table 6.
Second Region
General case: k > 1. Let us consider the general case, which happens for: The global symmetry of the 5d Higgs branch at the UV fixed point is enhanced to: The brane system is depicted in figure 26 and the toric diagram is represented in figure 27. There are two different maximal subdivisions of the brane system, which means that the Higgs branch H ∞ is the union of two components. The sub webs that make up the two different subdivisions are depicted in figure 28. They arrange into the two possible subdivisions, called phase I and phase III (due to the similarity to the previous case). They are: The two components of the Higgs branch are identified with the spaces of dressed monopole operators of the quivers obtained from the different subdivisions via Conjecture 1; in the generic case phase III contains an extra node that contributes an SU (2) factor to the global symmetry (see table 7). The intersection of the two components is also given as the space of dressed monopole operators of a quiver, following the same procedure as in section 4.2, and it is included in table 7.
Exceptional cases: k ≤ 1. The exceptional cases share the same brane system and toric diagram depicted in figures 26 and 27. They differ from the previous cases in that the maximal subdivisions of the brane system are slightly different. The first exceptional case appears for: k = 1. (5.21) In this case the global symmetry enhancement is the same: However, phase III does not present the extra node that contributes to the SU (2) factor of the global symmetry in the generic case (see table 7). This can be seen in the brane web from the fact that supersymmetry prevents breaking loose the segment that would correspond to this node in the general case. That is, we need to combine (IIIa) and (f) to form the minimal sub web in order to avoid breaking the s-rule. The corresponding quivers are given in table 8.
The next case is: For this case the global symmetry enhancement does not change: Phase III is a subset of phase I, reflecting the fact that (IIIa) is further divided into (Ia) and (Ib). This gives a single phase, consistent with [4]. The quiver is depicted in table 9.
The next case is: The global symmetry enhancement is different: The web diagram at infinite coupling in this case is given in figure 29, while the corresponding toric diagram is in figure 30. Since the 7-branes labelled by C and D are both (0,1) 7-branes, the maximal subdivision of the brane web is different from the generic case. There is only a single phase [4], which we denote I′, and the corresponding quiver is given in table 10.
Third Region
Let us consider the general case with: The global symmetry is enhanced to: The brane system at finite coupling is depicted in figure 31, as discussed in [16,41], while the corresponding toric diagram is in figure 32. When the coupling is taken to be infinite, the web diagram is represented in figure 33. There are only two possible maximal subdivisions of the brane system (figure 33). The different sub webs that make up these two phases have been collected in figure 34. The two phases have been labelled I and III due to the similarities with the previous analysis. They are: The quivers are depicted in table 11. Note that the number of nodes in the balanced chain of the quiver is extended with respect to the previous cases, from N f − 1 to N f .
Exceptional cases: k ≤ 3/2. The exceptional cases still have the same brane systems and toric diagrams as those in figures 31, 33 and 32. Let us consider: The global symmetry is enhanced to: In this case, the sub web (IIIa) is not allowed due to the s-rule. Thus, there is a unique maximal subdivision of the brane system, denoted as phase I, consistent with [4]. The quivers obtained via Conjecture 1 are given in table 12. The next case is: The global symmetry is the same as in the general case: The phase III of the general case is contained in phase I in this case, reflecting the property that (IIIa) can be further divided into (Ia) and (Ib). Hence, there is a single phase, consistent with [4], given in table 13.
The next case is: The global symmetry is enhanced to: In this case, the web diagram at infinite coupling is given in figure 35, while the corresponding toric diagram is in figure 36. There is a single phase I′, consistent with [4], given in table 14. The next case is: The global symmetry is enhanced to: The web diagram and the toric diagram are given in figure 37 and figure 38, respectively. There is a single component I′, consistent with [4], given in table 15.
General case: k > 2. We first concentrate on the former case. The global symmetry is: The corresponding toric diagram is in figure 42. The final brane system after taking the gauge coupling to infinity is depicted in figure 43.
We classify the Chern-Simons level k into four classes as In order to treat these four cases in a unified way, we introduce the variables y, z, y′ and z′ given as while v and w are defined only for integer k and given as The different sub webs that form the maximal web subdivisions are shown in figure 44.
There is a phase IV that only exists for N f (≥ 2) even, and a different phase V that changes between N f even and N f odd: In particular, for N f = 0, phase V does not exist if k is odd (α = 1), while it exists for generic half-integer k. The corresponding component of H ∞ should be understood accordingly.
The different phases are depicted in table 16. Note again that the intersection between the even components is a closure of a nilpotent orbit of height 2 of the global symmetry SO(2N f ). Physically this means that the operators that are shared everywhere on the moduli space are mesons only, while baryons and other exotic objects are specific to individual components. Crucially, chiral ring relations do get corrections from instanton effects even at the intersections. The mesons are no longer interpreted as the usual quark bilinears, hence the nilpotency condition of at most N c is no longer correct, and becomes maximal or near maximal.
Exceptional cases: k ≤ 2. The 5-brane web diagrams at strong coupling for k ≤ 2, obtained from figure 41, are given in figure 45. The corresponding toric diagrams are given in figure 46. The different sub webs that form the maximal web subdivisions are analogous to those in figure 44. However, the counterpart of (IVa) does not exist in this case. For k = 2, this is due to the s-rule. For k = 1, it is simply because w < 0. The sub webs (Va) and (Vb) are replaced as in figure 47, depending on the value of k. The sub webs (c) and (d i ) are identical except that for k = 1/2, A N f and A N f −1 are replaced by E and A N f , respectively.
Let us consider: The global symmetry is the same as in the general case: There is a single phase V. Phase IV does not exist, since (IVa) is not allowed due to the s-rule, as stated above. The maximal subdivision is given as in equation 5.42. The corresponding quiver is depicted in table 17, which can be seen as a special case of table 16.
The next case is: This case is not an exception. The global symmetry is the same as in the general case: The maximal subdivision is given as in equation 5.42 and the Higgs branch is also given by the general case, with a single component V, consistent with [4]. It is depicted in table 18. The next case is: The global symmetry is not the same as in the general case: There is a single phase V′. The maximal subdivision is given as The next case is: The global symmetry is enhanced to: There is a single phase V′. The maximal subdivision is given as The quiver is depicted in table 20. The last case is: In this case the theory has a 6d fixed point.
Exceptional Case: Fifth Region with k = N c − N f /2 + 3 and N c = 3
Exceptionally for N c = 3, it is proposed [17,18] that the theory with k = N c − N f /2 + 3 also has a UV fixed point. This case can be understood as a special case of the 5d SU (N c ) gauge theory with one hypermultiplet in the antisymmetric tensor representation, N f − 1 (≤ 8) hypermultiplets in the fundamental representation, and with k = . This class of theories is believed to be dual to the 5d Sp(N c − 1) gauge theory with one antisymmetric tensor and N f − 1 flavors, whose UV fixed point is known to be a rank N c − 1 SCFT with global symmetry E N f . However, for N c = 3, the antisymmetric tensor representation is equivalent to the fundamental representation, leading to N f flavors. The corresponding 5-brane web diagram can be depicted as in figure 48 with the introduction of 7-branes, where we set a = 0 for N f = 0 and a = 1 for N f ≥ 1. By moving the 7-branes as in figure 48, we obtain the 5-brane web in figure 49 for N f = 0 and the one in figure 50 for N f ≥ 1 after Hanany-Witten transitions. It is straightforward to see that the 5-brane web diagram in figure 50 is identical to the one suggested in [42] for the 5d Sp(2) gauge theory with one second-rank antisymmetric tensor and N f − 1 flavors at its UV fixed point. These theories are known to be rank 2 SCFTs with E N f global symmetry.
Indeed, by moving 7-branes in figure 50, taking into account their monodromy as well as the Hanany-Witten transitions, we see that the diagrams can be equivalently rewritten as in figure 51. These diagrams are almost identical to the ones proposed in [7] as the diagrams for rank 2 E N f SCFTs. The only difference is the extra edge with a single 5-brane, which corresponds to the decoupled singlet. The corresponding toric diagrams are depicted in figure 52.
From figure 51, it is straightforward to read off the Higgs branches. They are identified as 3d quivers with affine E N f Dynkin diagrams whose ranks are twice the ranks of the minimal choice for the affine E N f quivers discussed in section 2, and with an extra U (1) node attached to the null node. See the entries with k = 2 in tables 3 and 4 of [43]. The moduli space is, as expected, the moduli space of 2 E N f instantons on C 2 [43], and its Hilbert series is computed in [44].
Conclusions
The conclusion of this paper is that tropical geometry holds the key to solving a problem in supersymmetric quantum field theory that could not be probed with the tools developed in the past. The problem concerns the vacuum structure of 5d theories when their gauge coupling is taken to infinity. In particular we have postulated a conjecture that is able to recover the Higgs branch of such theories. The Higgs branch at infinite gauge coupling is found to be a union of several components (hyperKähler cones), and each of these components is given as a space of dressed monopole operators. The structure of such spaces can be easily encoded in graphs (quivers), and our proposal explains how to read such quivers directly from the brane system of the 5d theory. The technique of obtaining such quivers, embodied in our Conjecture 1, is completely novel and we believe that its striking simplicity makes it particularly appealing.
Furthermore, the technique itself is by no means restricted to the vacuum at infinite coupling, and it can be applied to any 5d N = 1 gauge theory that has an embedding into Type IIB superstring theory via 5-brane webs ending on 7-branes. It can also be used to study any hyperKähler variety that exists as a subset of the full Higgs branch. In this paper, for example, we have employed it to obtain a precise description of the intersections of the different components of the Higgs branch at infinite coupling.
We believe that both results, the classification of Higgs branches at infinite coupling of the three-parameter family of 5d SQCD theories, and the conjecture on how to obtain the components of the Higgs branch from the brane system, are extremely relevant. The former provides an answer to a question that had been left unsolved for far too long. The latter constitutes an exceedingly simple new technique that can easily be implemented in many other analyses, and it is bound to open the door to a myriad of exciting results in the study of moduli spaces of 5d supersymmetric gauge theories. Furthermore, it gives a more robust shape to an idea that has been emerging in these types of studies during the last two years: the use of spaces of dressed monopoles to study vacuum moduli spaces is not restricted to 3d Coulomb branches, and it is in fact a property of any theory with eight supercharges in 3, 4, 5 or 6 dimensions.
First, let us compute the stable intersection SI. In order to do this we focus on the local properties of the system near the intersection point in the center of figure 53. We depict the neighborhood of this point in figure 54 (note that the 7-branes are taken to be very far away), with the difference that the sub webs have been slightly displaced with respect to each other. Now it is possible to obtain a dual toric diagram of the system, represented in figure 55. The SI is the sum of the areas of all polygons with edges of different colors. In this case there are two such polygons: one rectangle of area 2 and one square of area 2. Hence, SI = 2 + 2 = 4. (A.1)
Now, let us compute the X i contributions. These come from the 7-branes that are shared by both sub webs. There are 3 such branes, denoted in figure 53 as A 1 , A 2 and A 3 . For each brane A i we compute how many possible combinations exist of branes from different sub webs ending on it from opposite directions. This number is zero for A 1 and A 2 . For A 3 the blue (0, 1) 5-brane that ends on it from below can be paired with two different red (0, 1) 5-branes that end on it from above, giving a total number of 2:
In order to compute the Y i contributions we look again at the A i 7-branes. For each 7-brane we ask how many pairs of 5-branes from different sub webs exist such that both 5-branes end on the 7-brane from the same direction. For A 1 there is only one pair, and the same is true for A 2 . For A 3 there are two blue (0, 1) 5-branes that end on A 3 from above, and there are also two red (0, 1) 5-branes that end on A 3 from above, so there is a total of four different combinations:
According to Conjecture 1, each sub web gives rise to a different U (1) gauge node on the corresponding quiver; the number of edges E between both nodes is then given by: and the Higgs branch of the system is isomorphic to the 3d Coulomb branch (or alternatively, the space of dressed monopole operators):
| 15,808.8 | 2018-10-02T00:00:00.000 | [ "Mathematics" ] |
QPCR: Application for real-time PCR data management and analysis
Background Since its introduction quantitative real-time polymerase chain reaction (qPCR) has become the standard method for quantification of gene expression. Its high sensitivity, large dynamic range, and accuracy led to the development of numerous applications with an increasing number of samples to be analyzed. Data analysis consists of a number of steps, which have to be carried out in several different applications. Currently, no single tool is available which incorporates storage, management, and multiple methods covering the complete analysis pipeline. Results QPCR is a versatile web-based Java application that allows users to store, manage, and analyze data from relative quantification qPCR experiments. It comprises a parser to import data generated by qPCR instruments and includes a variety of analysis methods to calculate cycle-threshold and amplification efficiency values. The analysis pipeline includes technical and biological replicate handling, incorporation of sample or gene specific efficiency, normalization using single or multiple reference genes, inter-run calibration, and fold change calculation. Moreover, the application supports assessment of error propagation throughout all analysis steps and allows statistical tests to be conducted on biological replicates. Results can be visualized in customizable charts and exported for further investigation. Conclusion We have developed a web-based system designed to enhance and facilitate the analysis of qPCR experiments. It covers the complete analysis workflow combining parsing, analysis, and generation of charts into one single application. The system is freely available at
Background
Amongst other high throughput techniques like DNA microarrays and mass spectrometry, qPCR has become important in many areas of basic and applied functional genomics research. Due to its high sequence-specificity, large dynamic range, and tremendous sensitivity it is one of the most widely used methods for quantification of gene expression. Moreover, due to the adoption of robotic pipetting stations and 384-well formats, laboratories generate a huge amount of qPCR data demanding a centralized storage, management, and analysis application.
Most software programs provided along with the qPCR instruments support only straightforward calculation of quantification cycle (Cq) values from the recorded fluorescence measurements. However, in order to obtain biologically meaningful results these basic calculations need to undergo further analyses such as normalization, averaging, and statistical tests [1].
To this end, a variety of different methods have been published describing the normalization of Cq values. The simplest model (termed ΔΔ-Cq method) was developed by Livak and Schmittgen [2] which assumes perfect amplification efficiency by setting the base of the exponential function to 2 and uses only one reference gene for normalization. The model proposed by Pfaffl [3] considers PCR efficiency for both the gene of interest and a reference gene and is therefore an improvement over the classic ΔΔ-Cq method. Nevertheless, it still uses only one reference gene which may not be sufficient to obtain reliable results [4]. Hellemans et al. [5] proposed an advanced method which considers gene-specific amplification efficiencies and allows normalization of Cq values with multiple reference genes based on the method proposed by Vandesompele et al. [4]. It should be noted that these methods could differ substantially in their performance, because of the different assumptions they are based on.
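To make the difference between these models concrete, the following sketch (in Java, with illustrative method and variable names that are not taken from any of the cited tools) computes a fold change with the classic ΔΔ-Cq model, which fixes the base of the exponential at 2, and with the Pfaffl model, which uses gene-specific efficiencies; with both efficiencies set to 2 the two estimates coincide.

```java
// Minimal illustration of two single-reference-gene quantification models.
// ddCq: assumes perfect amplification efficiency (base 2) for both genes.
// pfaffl: uses gene-specific efficiencies eGoi and eRef (E = 2 means 100 %).
public final class FoldChange {

    /** Classic delta-delta-Cq model: 2^-(dCq_treated - dCq_control). */
    public static double ddCq(double cqGoiTreated, double cqRefTreated,
                              double cqGoiControl, double cqRefControl) {
        double dCqTreated = cqGoiTreated - cqRefTreated;
        double dCqControl = cqGoiControl - cqRefControl;
        return Math.pow(2.0, -(dCqTreated - dCqControl));
    }

    /** Pfaffl model: (eGoi^dCq_goi) / (eRef^dCq_ref), with dCq = control - treated. */
    public static double pfaffl(double eGoi, double eRef,
                                double cqGoiTreated, double cqGoiControl,
                                double cqRefTreated, double cqRefControl) {
        double ratioGoi = Math.pow(eGoi, cqGoiControl - cqGoiTreated);
        double ratioRef = Math.pow(eRef, cqRefControl - cqRefTreated);
        return ratioGoi / ratioRef;
    }

    public static void main(String[] args) {
        // With both efficiencies equal to 2 the two models give the same result.
        System.out.println(ddCq(22.0, 18.0, 25.0, 18.5));
        System.out.println(pfaffl(2.0, 2.0, 22.0, 25.0, 18.0, 18.5));
        // With imperfect efficiencies (e.g. 1.9 and 1.95) the estimates diverge.
        System.out.println(pfaffl(1.9, 1.95, 22.0, 25.0, 18.0, 18.5));
    }
}
```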
Available software tools often cover only single steps in the analysis pipeline, compelling researchers to use multiple tools for the analysis of qPCR experiments [5-8]. However, these tools do not share a common file format, making it difficult to analyze the experimental data. Additionally, no standardization of methodology has been established that would be needed for reliable comparison between laboratories [9]. Recently, the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines [10] were published, which are intended to describe the minimum information necessary for evaluating and comparing qPCR experiments. Based on a subset of these guidelines, the XML-based Real-Time PCR Data Markup Language (RDML) [11] was proposed, which aims to facilitate the exchange of qPCR data and related information between qPCR instruments, analysis software, journals, and public repositories. These efforts could allow a more reliable interpretation of qPCR results if they were accepted in the qPCR community.
The lack of complete or partial assessment of error propagation throughout the whole analysis pipeline may result in an underestimated final error and could therefore lead to incorrect conclusions. Moreover, the analysis of experiments using tools that make invalid biological assumptions can cause significantly wrong results as reported in [8].
To the best of our knowledge, there is no single tool available which integrates storage, management, and analysis of qPCR experiments. Hence a system enabling comparison of results and providing a standardized way of analyzing data would be of great benefit to the community. We have therefore developed QPCR, a web-based application which supports: a) technical and biological replicate handling, b) the analysis of qPCR experiments with an unlimited number of samples and genes, c) normalization using an arbitrary number of reference genes, d) inter-plate normalization using calibrators, e) assessment of significant gene deregulation between sample groups, f) generation of customizable charts, and g) a plug-in mechanism for easy integration of new analysis methods.
Implementation
The QPCR system was implemented in Java, a platform independent and object-oriented programming language [12]. The application is based on the Java 2 Enterprise Edition (J2EE) three-tier architecture consisting of a presentation-, business -, and database-layer. A relational database (PostgreSQL or Oracle) is used as the persistence backend. The business layer consists of Enterprise Java Beans (EJB) and is deployed on a JBoss [13] application server. The presentation layer is based on the Model-View-Controller (MVC) framework Struts [14] and uses Java Servlets and Java Server Pages.
In order to enhance usability current web technologies have been extensively used in this application. AJAX functionality has been incorporated into the application using the open-source library DWR [15]. This technology allows asynchronous loading of data without the need to reload the page thus providing a desktop like application behavior. Multiple JavaScript libraries (Prototype [16], JQuery [17]) have been used that allow executing functions on the client side and therefore remarkably improve the usability of the application. Charts are generated using the open-source Java library JFreeChart [18] and all charts are created either in the lossless PNG format or as a scalable vector graphic (SVG).
All algorithms, calculation methods, and data file parsers used by the application are integrated through a plug-in mechanism which allows simple extension with additional qPCR data formats and analysis approaches. For each class that uses the plug-in mechanism a specific interface needs to be implemented in order to support another vendor or implement an additional analysis method. The new Java classes are then automatically detected by the QPCR application.
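The paper does not reproduce its interface definitions, so the following Java sketch only illustrates the kind of contract such a plug-in mechanism typically relies on; all type and method names are hypothetical, and the discovery mechanism shown (java.util.ServiceLoader) is just one possible way to detect new implementations automatically.

```java
// Hypothetical plug-in contracts: one for instrument file parsers, one for
// Cq/efficiency analyzers. New implementations on the classpath could then be
// discovered automatically, here via java.util.ServiceLoader (Java 16+ syntax).
import java.io.File;
import java.util.List;
import java.util.ServiceLoader;

interface QpcrFileParser {
    /** True if this parser recognizes the given instrument export file. */
    boolean supports(File exportFile);
    /** Extracts plate setup and per-well fluorescence readings. */
    List<WellMeasurement> parse(File exportFile) throws Exception;
}

interface CqAnalyzer {
    /** Human-readable name shown in the analysis settings. */
    String name();
    /** Computes Cq (and optionally efficiency) for a single well. */
    WellResult analyze(WellMeasurement well);
}

record WellMeasurement(String sample, String target, double[] fluorescence) {}
record WellResult(Double cq, Double efficiency) {}

class PluginRegistry {
    /** Collects every analyzer implementation registered as a service provider. */
    static List<CqAnalyzer> discoverAnalyzers() {
        return ServiceLoader.load(CqAnalyzer.class).stream()
                .map(ServiceLoader.Provider::get)
                .toList();
    }
}
```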
Currently the data file parsers support files generated by Applied Biosystems (ABI 7000, ABI 7500, ABI 7900) and Roche LightCycler (LightCycler 2.0, LightCycler 480) [19] systems as well as a generic file format based on comma separated values (CSV). Since not all fluorescence measurements can be extracted from data files created by the qPCR instrument systems, additional export files are required to parse all relevant data.
Analysis methods that calculate Cq and amplification efficiency values are computationally expensive and are therefore executed asynchronously and do not interfere with the QPCR web interface. They are designed to operate on a per well basis and report the current progress of the calculation. Normalization methods and statistical tests are not time consuming processes and are therefore executed in real time.
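As an illustration of this design (not the application's actual code), a per-well calculation can be handed to a thread pool while a shared counter provides the progress that the web layer polls; the class and method names below are hypothetical.

```java
// Illustrative sketch of running a per-well Cq calculation asynchronously while
// reporting progress. Names are hypothetical and unrelated to the real code base.
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.ToDoubleFunction;

class AsyncAnalysisJob {
    private final AtomicInteger wellsDone = new AtomicInteger();

    /** Submits one task per well; the web layer polls progressPercent(). */
    void run(ToDoubleFunction<double[]> cqMethod, List<double[]> fluorescencePerWell) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (double[] curve : fluorescencePerWell) {
            pool.submit(() -> {
                double cq = cqMethod.applyAsDouble(curve); // expensive curve fitting
                persist(curve, cq);                        // store the result in the database
                wellsDone.incrementAndGet();               // progress counter
            });
        }
        pool.shutdown(); // the web request returns immediately; workers keep running
    }

    int progressPercent(int totalWells) {
        return totalWells == 0 ? 100 : 100 * wellsDone.get() / totalWells;
    }

    private void persist(double[] curve, double cq) { /* database write omitted */ }
}
```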
The QPCR application has been designed using the Unified Modeling Language (UML) [20]. The use of a UML representation improves maintainability as the application architecture is outright visible and provides an important part of the system documentation. We used the AndroMDA framework [21] to create basic EJB and presentation tier source code as well as configuration files based on the UML model. AndroMDA minimizes repetitive coding tasks, allows to easily extend or edit the architecture of the application, and helps maintaining the consistency between design and implementation.
The stored data is secured by a user management system which allows the definition of several fine grained user access levels and offers data sharing and concurrent access in a multi-centric environment [22]. Moreover, the application provides two configurations which assign the ownership of objects either to the submitter or to the submitter's institute. The latter setup provides the possibility to edit and analyze experiments by all users of an institute without the need to explicitly share objects.
Results
QPCR is an application which integrates storage, management, and analysis of qPCR experiments into one single tool. Implemented as a web application it can be accessed by a web browser from every network connected computer and therefore supports the often decentralized work of biologists. It parses files generated by qPCR instruments, stores data and results in a database, and performs analyses on the imported data. Moreover, it allows conducting of statistical tests and provides several ways to visualize and export the calculated results ( Figure 1).
Parsing files and calculation of Cq/efficiency values
Data files are uploaded into the application using a single file upload dialog or an integrated Java applet which supports uploading of multiple files at once. An upload zone lists all available files and allows querying and downloading of data previously uploaded. All files are stored in a user defined directory facilitating the backup of project critical files.
After uploading the exported files into the QPCR application, a list of all files which have not yet been processed is shown. The user can select single or multiple files for parsing. Moreover, Cq and amplification efficiency values can be automatically calculated after the files have been parsed using one or several different methods.
During parsing all relevant data is extracted, including plate setup, fluorescence measurements, and qPCR instrument specifications and stored in the database. In contrast to many available analysis tools the application is able to import qPCR data files without the need for additional file manipulations and therefore reduces error-prone and cumbersome manual work. In addition to the already existing data file parsers the application can be easily extended to support other vendors due to the modularity of the platform and the used plug-in mechanism.
Once the data is parsed and stored in the database, Cq and amplification efficiency values are calculated based on the fluorescence measurements. Several published and widely used algorithms were implemented: two algorithms that calculate Cq together with efficiency values, three algorithms that calculate solely the amplification efficiency, and one method that calculates only the Cq value (see Table 1).
The progress of all active parser or analyzer background tasks is displayed on a view that automatically updates the current status. As soon as a process has finished a message is shown at the top of the page. For each process a log file is created which informs the user about the outcome of the performed job. A color scheme helps to quickly identify the jobs that have not finished successfully.
During parsing of uploaded files a Run is created in the application which is a direct representation of the performed qPCR run. It stores information about the hardware, software, thermocycler profile, and category.
Each Run contains a plate which consists of multiple wells that store information about the sample, target, passive reference, task, and omitted status. The plate layout can be displayed in a list and each well can be edited to correct inconsistencies or to omit it from further analysis.
Additionally, QPCR provides a graphical representation of the plate layout by showing a grid which displays sample, target, and status information of each well. By selecting an arbitrary number of wells, charts of amplification (raw and background subtracted) and dissociation (raw and derivative) curves are displayed ( Figure 2). This view is helpful to evaluate the performance of the PCR for each well and is useful to perform a quick quality check of the conducted qPCR run.
Analysis of experiments
After Cq and efficiency values have been determined, experiments consisting of one or multiple runs are subjected to subsequent analysis steps. Several plates can be combined into one experiment. In order to support a flexible and adaptable analysis of experiments, the application allows specific samples and genes to be selected for use in subsequent analysis steps. Moreover, the Cq calculation method, the efficiency method, and the reference genes can be defined.
Four different ways to consider amplification efficiencies in the analysis have been implemented: (1) setting a single efficiency value for all targets, (2) manually defining the efficiency for each target, (3) using efficiencies derived from dilution series for each target, and (4) using calculated efficiencies for each well. Several different efficiency values for a target, calculated from serial dilution series, can be stored in the database.
Figure 1 Analysis pipeline. This figure illustrates the analysis pipeline implemented in the QPCR application.
Table 1 Methods implemented for the calculation of Cq and amplification efficiency values:
(Cq only) The method operates on raw fluorescence data and fits a five-parameter Richard's curve. Next it calculates the tangent of the inflection point and intersects it with the abscissa axis, which results in the Cq value.
LinReg (efficiency): The method operates on background-corrected data and uses log-transformed values to construct a slope with the highest correlation to the original curve. The parameters of this slope are used to calculate the amplification efficiency.
Miner, Zhao et al. [28] (Cq and efficiency): The method operates on raw fluorescence data and fits a four-parameter logistic curve to determine the Cq value by using the second derivative maximum. The efficiency is calculated by using a weighted average of a fitted exponential curve.
RutledGene (efficiency): The method operates on raw fluorescence data and fits a four-parametric sigmoid function to calculate efficiency values.
SoFar (Cq and efficiency): The method operates on raw fluorescence data and fits an exponential or sigmoid function on smoothed data to calculate Cq and efficiency values.
TAQ, Ostermeier et al. [31] (efficiency): The method operates on raw fluorescence data and performs a linear regression on log-transformed data to determine the efficiency values.
Figure 2 Graphical representation of the plate layout. The tabbed bar at the top is used to switch between different chart types. The chart itself features tool tips and provides a legend. Beneath the chart is a representation of the plate layout that is adapted to the plate size (96/384 wells, linear layout). Selected wells are colored in red, omitted and empty wells in blue.
Normalization of experiments is based on a method proposed by Hellemans et al. [5] and includes averaging of technical replicates, normalization against reference genes, inter-run calibration, and calculation of quality control parameters. Technical replicates are averaged either within one plate or over all plates of the experiment, depending on the analysis setting. In the next step all samples of one gene are referenced to the arithmetic mean Cq value across all samples for this gene. Thereafter the user-selected type of efficiency is considered for each target and the samples are normalized to the selected reference genes. If reaction-specific efficiency has been selected, the efficiency is averaged for each target. Depending on the analysis setting, the application supports spreading of reference genes across multiple runs or uses reference genes for each run independently. Finally, inter-run calibrators are automatically detected and are used to normalize results between different qPCR runs.
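A minimal Java sketch of the core of this normalization scheme (per-gene relative quantities from the gene's mean Cq and its amplification efficiency, followed by division by the geometric mean of the reference-gene quantities) is shown below; it assumes technical replicates have already been averaged, omits inter-run calibration and error propagation, and uses illustrative names rather than the application's actual API.

```java
// Sketch of multi-reference-gene normalization in the style of Hellemans et al. [5].
// Relative quantities are computed per gene from its mean Cq and efficiency, and each
// gene of interest is divided by the geometric mean of the reference-gene quantities.
import java.util.Map;

final class Normalization {

    /** Relative quantity of one well: RQ = E^(meanCq - cq), with E = 2 for 100 % efficiency. */
    static double relativeQuantity(double efficiency, double meanCqOfGene, double cq) {
        return Math.pow(efficiency, meanCqOfGene - cq);
    }

    /** Normalization factor of a sample: geometric mean of its reference-gene RQs. */
    static double normalizationFactor(Map<String, Double> referenceGeneRqs) {
        double logSum = 0.0;
        for (double rq : referenceGeneRqs.values()) {
            logSum += Math.log(rq);
        }
        return Math.exp(logSum / referenceGeneRqs.size());
    }

    /** Normalized relative quantity of a gene of interest in one sample. */
    static double normalize(double goiRq, Map<String, Double> referenceGeneRqs) {
        return goiRq / normalizationFactor(referenceGeneRqs);
    }

    public static void main(String[] args) {
        double goiRq = relativeQuantity(2.0, 25.0, 23.0);          // 2^2 = 4
        Map<String, Double> refs = Map.of("GAPDH", 1.2, "ACTB", 0.8);
        System.out.println(normalize(goiRq, refs));                // about 4 / 0.98
    }
}
```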
Quality control parameters for reference genes are calculated based on a method described by Vandesompele et al. [4]. When multiple reference genes are selected the coefficient of variation and the gene stability value M are calculated. These parameters are helpful for selecting and evaluating reference genes. Additionally, QPCR performs outlier detection by calculating the difference in quantification cycle value between technical replicates and allows highlighting those that have a larger difference than a user defined threshold. Moreover, quality control checks are performed to test if a no template control (NTC) is present for each target.
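For orientation, the gene stability value M of geNorm [4] can be sketched as the average, over all other candidate reference genes, of the standard deviation across samples of the pairwise log2 expression ratios; the simplified Java sketch below assumes at least two reference genes and relative quantities as input, and is not the application's actual implementation.

```java
// Simplified sketch of the geNorm-style gene stability value M [4]: for a candidate
// reference gene, M is the mean, over all other reference genes, of the standard
// deviation (across samples) of the pairwise log2 expression ratios.
// rq[g][s] holds the relative quantity of reference gene g in sample s.
final class GeneStability {

    static double stabilityM(double[][] rq, int gene) {
        int genes = rq.length;
        int samples = rq[0].length;
        double sum = 0.0;
        int pairs = 0;
        for (int other = 0; other < genes; other++) {
            if (other == gene) continue;
            double[] logRatios = new double[samples];
            for (int s = 0; s < samples; s++) {
                logRatios[s] = Math.log(rq[gene][s] / rq[other][s]) / Math.log(2.0);
            }
            sum += standardDeviation(logRatios);
            pairs++;
        }
        return sum / pairs; // requires at least two reference genes
    }

    private static double standardDeviation(double[] values) {
        double mean = 0.0;
        for (double v : values) mean += v;
        mean /= values.length;
        double ss = 0.0;
        for (double v : values) ss += (v - mean) * (v - mean);
        return Math.sqrt(ss / (values.length - 1)); // sample standard deviation
    }
}
```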
Fold change ratios of the calculated normalized Cq values can be calculated by referencing them to one or multiple samples. All analysis setup parameters are automatically stored in the database and are loaded when the experiment is analyzed again. Additionally, each analysis setup can be stored under a user defined name. Throughout the whole analysis process proper error propagation is performed using methods described in [5,23].
During the development of the QPCR application special attention was paid to the accurate and user-friendly visualization of calculated results. Therefore, the application allows results of every important analysis step to be displayed and exported. The generated figures are highly customizable and are designed to be usable in publications without further manipulation. Among other parameters, QPCR allows the color, labeling, sort order, and data type used in histogram charts to be defined. Cq values normalized by reference genes and calibrators are presented as histograms displaying results of one gene or multiple genes at once (Figure 3). Every result throughout the analysis pipeline can be exported in tab-delimited or spreadsheet format (txt, csv, xls) to be used in external applications.
Conducting statistical tests
The final step in the analysis pipeline is the comparison of samples using statistical tests (e.g., biological replicates, samples of a time series). The application allows samples to be grouped into an arbitrary number of classes, which are tested for significant differences against one defined reference class. QPCR includes several statistical tests to compute p-values, such as ANOVA, Student's t-test, and a permutation-based test which makes no assumption on the distribution of the data. Tests can be conducted on either untransformed or log2-transformed values. The application allows the calculated p-values to be adjusted, supporting several established correction methods for multiple testing [24].
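The permutation-based test can be illustrated with the following Java sketch, which compares the observed difference in class means against differences obtained after repeatedly shuffling the class labels; the add-one correction and the fixed random seed are choices of this sketch, not of the application.

```java
// Illustrative two-class permutation test (distribution-free): the observed difference
// in means is compared against differences obtained after randomly reassigning the
// class labels. Inputs would typically be log2-transformed normalized quantities.
import java.util.Arrays;
import java.util.Random;

final class PermutationTest {

    static double pValue(double[] reference, double[] test, int permutations, long seed) {
        double observed = Math.abs(mean(test) - mean(reference));
        double[] pooled = new double[reference.length + test.length];
        System.arraycopy(reference, 0, pooled, 0, reference.length);
        System.arraycopy(test, 0, pooled, reference.length, test.length);

        Random rng = new Random(seed);
        int atLeastAsExtreme = 0;
        for (int p = 0; p < permutations; p++) {
            shuffle(pooled, rng);
            double[] permRef = Arrays.copyOfRange(pooled, 0, reference.length);
            double[] permTest = Arrays.copyOfRange(pooled, reference.length, pooled.length);
            if (Math.abs(mean(permTest) - mean(permRef)) >= observed) atLeastAsExtreme++;
        }
        // Add-one correction keeps the p-value strictly positive.
        return (atLeastAsExtreme + 1.0) / (permutations + 1.0);
    }

    private static double mean(double[] v) {
        double s = 0.0;
        for (double x : v) s += x;
        return s / v.length;
    }

    private static void shuffle(double[] v, Random rng) {
        for (int i = v.length - 1; i > 0; i--) {   // Fisher-Yates shuffle
            int j = rng.nextInt(i + 1);
            double tmp = v[i]; v[i] = v[j]; v[j] = tmp;
        }
    }
}
```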
Calculated test results are displayed for each class and can be exported for further analysis. Moreover, the fold changes of samples are displayed in histogram charts in which samples of each class are grouped together. Every class is assigned to a specific user defined color or shape that is used in different shades to group the samples of one class (Figure 4).
General data entry and query
The application provides views of every entity to (1) manually enter data and (2) list available items. Entry views consist of mandatory and optional fields and use drop down selection lists to specify references to other entities. Entered data is checked for validity and the user is informed about erroneous inputs. List views present the data in tabular form and support paging, sorting, and querying for any combination of the available attributes. Moreover, queries can be stored in the database for later use.
Discussion
We have developed an integrated platform for the analysis and management of qPCR experiment data using state-of-the-art software technology. The uniqueness of the application is defined by the support of various qPCR instruments, multiple data analyzers, and statistical methods, as well as the coverage of the complete analysis pipeline including proper error propagation. Moreover, it provides a flexible plug-in mechanism to incorporate new parsers and methods and allows generation of highly customizable charts. A comparison of features between QPCR and several other popular qPCR analysis tools is provided in Table 2.
The capability to import and parse data without the need for further file manipulations is an integral part of the application which avoids errors during the analysis and reduces the time to analyze the experimental data. As most of the available qPCR software tools rely on specially formatted input files, it was a prerequisite of the platform to be able to directly parse files generated by the qPCR instruments' software suites. Moreover, the system is not confined to a specific manufacturer and can therefore be used in laboratories equipped with qPCR instruments from different vendors.
QPCR includes established and widely used methods for the calculation of Cq and amplification efficiency values and supports an easy integration of new algorithms. This framework does not limit the researcher to one specific approach and allows incorporation of newly developed analysis methods. Furthermore, it is of great value as different experimental situations need to be considered separately and it remains up to individual researchers to identify the method most appropriate for their experimental conditions [25]. QPCR allows several different analysis settings to be stored for each experiment and calculates quality control parameters which help to evaluate the performed analysis. Incorporating several different methods to include the amplification efficiency enhances the flexibility of the application and allows adapting the analysis to the experimental conditions or laboratory practices. In particular, supporting the widely used calculation of efficiency based on serial dilution series increases the acceptance in the qPCR community.
An often underestimated drawback of using multiple tools to analyze qPCR experiments is the lack of support for assessment of error propagation. Therefore the final error is often based solely on the standard deviation of biological replicates which can lead to false biological interpretations. The QPCR application addresses this problem and includes assessment of error propagation throughout the whole analysis pipeline covering technical replicate handling, normalization, inter-run calibration, referencing against samples, and biological replicate handling. The implemented method is based on Taylor series expansion which allows direct calculation of the full probability distribution and is in contrast to Monte Carlo based methods computationally inexpensive [26].
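For reference, the standard first-order (Taylor-series) propagation formulas that such an approach builds on can be written as follows; this is the generic textbook form, not a formula quoted from [5,23,26], and the symbols Q_goi (gene-of-interest quantity) and NF (normalization factor) are only illustrative.

```latex
% Generic first-order (Taylor-series) error propagation for independent inputs x_1,...,x_n:
\[
  s_f^{2} \;\approx\; \sum_{i=1}^{n} \left(\frac{\partial f}{\partial x_i}\right)^{2} s_{x_i}^{2},
\]
% which for a ratio such as a normalized relative quantity R = Q_{goi}/NF reduces to
\[
  \left(\frac{s_R}{R}\right)^{2} \;\approx\;
  \left(\frac{s_{Q_{goi}}}{Q_{goi}}\right)^{2} + \left(\frac{s_{NF}}{NF}\right)^{2}.
\]
```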
Figure 3 Visualization of normalized relative quantities. The tabbed bar is used to switch between views that display multiple targets at once, one target at a time (displayed), or quality control parameters. On the left side the user can define various parameters including the displayed target, the specific result, the presented error, and the reference samples. The list of displayed samples can be reordered using drag and drop, samples may be excluded from the chart, and for each sample an alternative name and an individual color can be assigned.
Special focus was placed on the presentation of analysis results. QPCR provides an interface which uses state-of-the-art software technologies to generate highly customizable charts that are designed to be ready for publication. Since many available tools do not provide a suitable graphical representation of the calculated results, Microsoft Excel is often used to create figures, which requires manual import and/or conversion of data. QPCR combines the calculation and presentation of results into one single tool which reduces analysis time and avoids additional potentially error-prone steps. A flowchart displaying each analysis step and its suggested method is included in the user guide.
The recent developments of data exchange formats (RDML) and guidelines describing the minimum information about qPCR experiments (MIQE) could become an important part in standardizing qPCR experimental data. QPCR already integrates the suggested nomenclature and RDML support will be implemented as soon as the relevant Java libraries are available. Once established in the qPCR community these initiatives will allow a standardized exchange of data between software tools and facilitate the comparison of qPCR experiments.
Using three-tier software architecture that separates the presentation, the business, and the database layer enables not only easy maintenance but also allows distribution of the computing load to several servers. As more and more data needs to be analyzed this design may be very valuable in the future.
The use of a database allows easy querying and comparing of data and guarantees data integrity. The implemented plug-in framework, which is used for including data file parsers, analysis methods, and statistical algorithms, ensures that the application is adaptable to new developments and allows the effortless integration of innovative scientific methods.
Conclusion
We have developed QPCR, a system for the storage, management, and analysis of qPCR data. It integrates the complete analysis workflow, ranging from Cq determination through normalization and statistical analysis to visualization, into a single application. The analysis time is significantly reduced and complex analyses can now be compared within a single laboratory or across multiple laboratories. Optimal usability has been ensured by involving biologists throughout the entire development process and by extensive tests in a laboratory setting. Given the incorporation of several analysis methods and the flexibility due to the use of standard software technology and the plug-in mechanism, the developed application could be of great interest to the qPCR community.
Figure 4 Visualization of a statistical test result. The statistical test was used to test two classes of biological replicates for their significant differences, whereas class "fasted m" was used as the reference class.
The installation of the application is provided through an installer and should be completed within one hour, provided the necessary database access rights are granted. We recommend that the application be installed on a central server by a system administrator. Step-by-step instructions are provided at the project's web site together with the installer file. The reference installation of QPCR is running on a SUN Fire™ X4600 M2 6 × dual core Opteron server (Sun Microsystems Ges.m.b.H, Vienna, Austria) with 24 GB of memory running Solaris and using a dedicated Oracle 10g database server. Attached is a Storage Area Network (EVA 5000, Hewlett-Packard Ges.m.b.H., Vienna, Austria) with 9.5 TBytes net capacity.
| 5,777.8 | 2009-08-27T00:00:00.000 | [ "Biology", "Computer Science" ] |
A phase I trial of the pan-ERBB inhibitor neratinib combined with the MEK inhibitor trametinib in patients with advanced cancer with EGFR mutation/amplification, HER2 mutation/amplification, HER3/4 mutation or KRAS mutation
Purpose Aberrant alterations of ERBB receptor tyrosine kinases lead to tumorigenesis. Single agent therapy targeting EGFR or HER2 has shown clinical successes, but drug resistance often develops due to aberrant or compensatory mechanisms. Herein, we sought to determine the feasibility and safety of neratinib and trametinib in patients with EGFR mutation/amplification, HER2 mutation/amplification, HER3/4 mutation and KRAS mutation. Methods Patients with actionable somatic mutations or amplifications in ERBB genes or actionable KRAS mutations were enrolled to receive neratinib and trametinib in this phase I dose escalation trial. The primary endpoint was determination of the maximum tolerated dose (MTD) and dose-limiting toxicity (DLT). Secondary endpoints included pharmacokinetic analysis and preliminary anti-tumor efficacy. Results Twenty patients were enrolled with a median age of 50.5 years and a median of 3 lines of prior therapy. Grade 3 treatment-related toxicities included: diarrhea (25%), vomiting (10%), nausea (5%), fatigue (5%) and malaise (5%). The MTD was dose level (DL) minus 1 (neratinib 160 mg daily with trametinib 1 mg, 5 days on and 2 days off) given 2 DLTs of grade 3 diarrhea in DL1 (neratinib 160 mg daily with trametinib 1 mg daily). The treatment-related toxicities of DL1 included: diarrhea (100%), nausea (55.6%) and rash (55.6%). Pharmacokinetic data showed trametinib clearance was significantly reduced leading to high drug exposures of trametinib. Two patients achieved stable disease (SD) ≥ 4 months. Conclusion Neratinib and trametinib combination was toxic and had limited clinical efficacy. This may be due to suboptimal drug dosing given drug–drug interactions. Trial registration ID: NCT03065387. Supplementary Information The online version contains supplementary material available at 10.1007/s00280-023-04545-4.
Neratinib, a potent oral, irreversible, pan-ERBB inhibitor targets EGFR, HER2, and HER4 at the intracellular tyrosine kinase domains [14]. Neratinib reduces EGFR and ERBB2 autophosphorylation, and their downstream signaling, and inhibits the growth of EGFR-and HER2-dependent cell lines [14,15]. Despite compelling clinical benefit from neratinib in patients with the appropriate genetic alterations, patients frequently develop resistance resulting in cancer progression and/or relapse [4,10].
The MAPK pathway, the prominent downstream effector of ERBB signaling, comprises rat sarcoma virus (RAS), rapidly accelerated fibrosarcoma (RAF), mitogen-activated protein kinase kinase (MEK) and extracellular signal-regulated kinase (ERK). This pathway is involved in proliferation, survival and differentiation and, as a result, deregulation of this pathway contributes to cancer [4,16,17]. Preclinical studies have shown that in HER2-overexpressing cells, combining MEK inhibition with neratinib reduced phosphorylated ERK more than either single agent [18]. Further, combination therapy suppressed tumor growth and reduced expression of the Forkhead box transcription factor M1 (FOXM1) in HER2-overexpressing breast cancers resistant to trastuzumab and lapatinib, and suppressed tumor growth and increased progression-free survival in patient-derived xenografts of breast, colorectal and esophageal cancers with HER2 mutations [18,19].
Additionally, the mutant RAS pathway amplifies the ERBB kinase activity both in vitro and in vivo [20]. Broad inhibition of kinases of the ERBB family by neratinib has been shown to suppress KRAS G12D mutant-driven lung tumors and enhance the potency of MEK inhibition by trametinib in a cre-inducible immunocompetent mouse model of autochthonous lung cancer [20]. Similarly, another pan-ERBB inhibitor, afatinib, was shown to reduce the progression of KRAS G12D driven lung cancer in preclinical mouse models [21]. Both studies have shown that cancers with KRAS mutations rely on resistance mechanisms that involve signaling through the ERBB network. These mechanisms can be effectively targeted either using a combination of neratinib and trametinib or using a single agent afatinib, irrespective of the expression or mutation of EGFR/ERBB [20,21]. Herein, we report the feasibility and safety results of a single-center study of neratinib and trametinib in patients with EGFR mutation/amplification, HER2 mutation/amplification, HER3/4 mutation and KRAS mutation.
Study design and dosing
This study was part of an ongoing open-label, non-randomized, single-center, phase I dose-escalation trial, conducted in patients with advanced or metastatic cancer with EGFR mutation/amplification, HER-2 mutation/amplification, HER-3/4 mutation or KRAS mutation (NCT03065387). Neratinib was provided by Puma Biotechnology and trametinib was commercially obtained. Neratinib was orally administered in a continuous 28-day cycle and trametinib was given orally once daily or followed a 5 day on and 2 days off schedule. Compliance was determined through review of pill diaries and unused drugs returned at the end of every cycle. The study was sponsored by Puma Biotechnology and was approved by the Institutional Review Board (IRB) in accordance with the Declaration of Helsinki, Good Clinical Practice, and all federal, state and local regulatory guidelines. Consent was obtained from all patients prior to study enrollment.
The protocol followed a standard 3 + 3 design [22]. Adverse events (AE) were graded based on the National Cancer Institute (NCI) Common Terminology Criteria for Adverse Events, version 4.0 (CTCAEv4.0). Dose-limiting toxicity (DLT) was defined as any treatment-related grade ≥ 3 non-hematologic toxicity (except for: grade 3 nausea and vomiting lasting < 72 h with adequate antiemetic and supportive care; grade 3 diarrhea lasting < 48 h with optimal medical therapy; alopecia; and electrolyte imbalances that resolved with supportive care); any grade 4 hematologic toxicity lasting more than a week; grade 3 thrombocytopenia with bleeding; neutropenic fever; all other grade 3 non-hematologic toxicities; and any study drug-related severe or life-threatening conditions not defined in the CTCAEv4.0. DLT-evaluable patients were defined as patients who had received at least 75% of the study drugs in the first cycle (28 days). A patient who discontinued therapy without completing the first tumor assessment (radiographic evaluation approximately 8 weeks after baseline) would not be evaluable for response assessment. The maximum tolerated dose (MTD) was defined as the highest dose at which no more than 1 of 6 evaluable patients had a DLT.
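For illustration only (this is not the trial's software), the decision logic of a standard 3 + 3 design consistent with the DLT and MTD definitions above can be sketched as follows.

```java
// Illustrative decision rule for a standard 3 + 3 dose-escalation design, consistent
// with the MTD definition above (highest dose at which no more than 1 of 6 evaluable
// patients had a DLT). This is a didactic sketch, not the trial's procedures.
enum CohortDecision { ESCALATE, EXPAND_TO_SIX, DE_ESCALATE }

final class ThreePlusThree {

    static CohortDecision decide(int evaluablePatients, int dltCount) {
        if (evaluablePatients == 3) {
            if (dltCount == 0) return CohortDecision.ESCALATE;       // 0/3 DLT: open next dose level
            if (dltCount == 1) return CohortDecision.EXPAND_TO_SIX;  // 1/3 DLT: enroll 3 more patients
            return CohortDecision.DE_ESCALATE;                       // >=2/3 DLT: dose exceeds the MTD
        }
        if (evaluablePatients == 6) {
            return dltCount <= 1 ? CohortDecision.ESCALATE           // <=1/6 DLT: dose is tolerated
                                 : CohortDecision.DE_ESCALATE;       // >=2/6 DLT: dose exceeds the MTD
        }
        throw new IllegalArgumentException("3 + 3 cohorts are evaluated at 3 or 6 patients");
    }
}
```

In practice, escalation stops at the highest planned dose level, and a tolerated dose is expanded or declared the MTD, as was done in this study with dose level minus 1.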
Safety assessments
All patients were evaluated for new or worsening adverse events by investigator or qualified designee. The assessments were conducted weekly during the first cycle (28 days), and then monthly from cycle 2 onwards. Evaluation could be conducted at higher frequency if required by the patient's clinical condition. The assessment included a complete physical examination, eastern cooperative oncology group (ECOG) assessment, and the recording of vital signs. All toxicities were carefully evaluated in terms of grading, seriousness, action taken regarding trial agents and causality to each study agent.
Study oversight
The study was conducted in compliance with the Declaration of Helsinki, Good Clinical Practice, and all relevant federal, state and local regulatory guidelines and was approved by the MD Anderson cancer center Institutional Review Board (IRB). To ensure adherence to the study procedures and patient safety, the study was monitored by Investigational New Drug (IND) office at MD Anderson Cancer Center. All treatment-related toxicities experienced by the patients, including any cases of early termination, were reported to and reviewed by the IND office before approval of enrollment to subsequent cohorts.
Eligibility criteria
Key inclusion criteria included patients with advanced solid tumors (not hematologic malignancy), either relapsed after standard therapy or without standard therapy available. Patients must have had one of the following pre-identified somatic molecular aberrations predicted to be activating or pathogenic as performed in the Clinical Laboratory Improvement Amendments (CLIA) environment and suitable for enrollment: EGFR mutation/amplification, HER-2 mutation/amplification, HER-3/4 mutation or KRAS mutation; age 18 years or older; measurable disease by Response Evaluation Criteria in Solid Tumors (RECIST) v1.1; ECOG status ≤ 1; with adequate organ functions including absolute neutrophils > 1500 cells/uL, platelets ≥ 100,000/uL, hemoglobin ≥ 9 g/dL, total bilirubin ≤ 1.5 × upper limit of normal (ULN), serum creatinine < 1.5 × ULN and alanine transaminase (ALT) ≤ 2.5 × ULN (≤ 5 × ULN if with liver metastases); patients that are of child-bearing potential must agree to use adequate contraception. Key exclusion criteria included concurrent chemotherapy treatment; uncontrolled illness such as active infection requiring intravenous (IV) antibiotics; any clinically significant heart conditions or gastrointestinal abnormalities that may alter absorption or inability to swallow pills; skin rash > grade 1; albumin < 3 gm/dL; or, history of retinal disorder, dry eye syndrome or blurry vision not evaluated and cleared by ophthalmology prior to starting treatment.
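As an illustration of the laboratory thresholds listed above, a simple eligibility predicate could look like the following Java sketch; it covers only the organ-function values quoted in the inclusion criteria and is not part of the trial's screening procedures.

```java
// Illustrative check of the "adequate organ function" laboratory criteria listed above
// (ANC > 1500/uL, platelets >= 100,000/uL, hemoglobin >= 9 g/dL, bilirubin <= 1.5x ULN,
// creatinine < 1.5x ULN, ALT <= 2.5x ULN or <= 5x ULN with liver metastases).
final class OrganFunctionCheck {

    static boolean adequate(double ancPerUl, double plateletsPerUl, double hemoglobinGPerDl,
                            double totalBilirubin, double bilirubinUln,
                            double creatinine, double creatinineUln,
                            double alt, double altUln, boolean liverMetastases) {
        double altLimit = (liverMetastases ? 5.0 : 2.5) * altUln;
        return ancPerUl > 1500
                && plateletsPerUl >= 100_000
                && hemoglobinGPerDl >= 9.0
                && totalBilirubin <= 1.5 * bilirubinUln
                && creatinine < 1.5 * creatinineUln
                && alt <= altLimit;
    }
}
```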
Genomic eligibility
All genomic alterations in eligible genes were reviewed by the MD Anderson Precision Oncology Decision Support (PODS) prior to patient enrollment. Alterations were researched within the published literature for any known effect on function, stability, expression, or therapeutic sensitivity. Alterations were then classified for their functional significance and variant-level actionability, as previously described [23].
Assessment of tumor response
Baseline radiographic imaging (e.g., computed tomography (CT) scan or magnetic resonance imaging (MRI)) was performed within four weeks of the start of treatment. Tumor measurements were performed on patients at baseline and at the end of every two cycles (three cycles after 24 weeks). Measurable target lesions were evaluated for response using RECIST v1.1. For purposes of this report, prolonged stable disease (SD) was defined as lasting ≥ 16 weeks.
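For context on how such responses are derived, the following Java sketch applies the standard RECIST v1.1 thresholds to the sum of target-lesion diameters (at least a 30% decrease from baseline for partial response; at least a 20% and 5 mm increase from nadir for progression); it is a simplification that ignores non-target lesions, new lesions and confirmation requirements, and it is not the assessment software used in the study.

```java
// Simplified illustration of RECIST v1.1 target-lesion response categories, based on the
// percent change of the sum of target-lesion diameters (in mm) from baseline, and from
// the nadir for progression. Non-target and new lesions are ignored in this sketch.
enum Response { CR, PR, SD, PD }

final class Recist {

    static Response classify(double baselineSum, double nadirSum, double currentSum) {
        if (currentSum == 0.0) return Response.CR;                    // all target lesions gone
        double changeFromBaseline = (currentSum - baselineSum) / baselineSum;
        double changeFromNadir = nadirSum > 0
                ? (currentSum - nadirSum) / nadirSum
                : Double.POSITIVE_INFINITY;                           // regrowth after disappearance
        boolean progression = changeFromNadir >= 0.20 && (currentSum - nadirSum) >= 5.0;
        if (progression) return Response.PD;                          // >=20% and >=5 mm increase
        if (changeFromBaseline <= -0.30) return Response.PR;          // >=30% decrease from baseline
        return Response.SD;
    }
}
```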
Patient characteristics
Between November 2017 and March 2020, a total of 20 patients with advanced solid tumors were enrolled to receive combination therapy with neratinib and trametinib. The demographic and clinical characteristics of the patients are summarized in Table 1. The median age of patients was 50.5 years (range 26-71 years), with the majority being female (60%). The median number of prior systemic therapies was 3 (range 1-11). Five (25%) patients had received prior therapy with either EGFR (n = 1) or HER-2 targeted therapies (n = 4). The most common malignancies were colorectal cancer (n = 10, 50%), ovarian cancer (n = 3, 15%) and esophageal cancer (n = 2, 10%). Table 1 summarizes the pre-identified somatic molecular alterations that were deemed activating or pathogenic in the enrolled patients, including KRAS mutations (n = 14, 70%) and pan-ERBB alterations (n = 15, 75%). ERBB2 amplification (n = 7, 35%) was the most common alteration in enrolled patients with pan-ERBB alterations, followed by ERBB2 mutation (n = 4, 20%). Co-occurring alterations were reported in three patients, including an ovarian cancer patient with somatic EGFR G724S and ERBB2 Y590C mutations; a colorectal cancer patient with HER2 amplification and ERBB2 D769Y mutation; and a colorectal cancer patient with co-existing EGFR amplification by next generation sequencing (NGS) and EGFR L861R mutation. Among the ERBB2 amplified
Table 1 Baseline demographics and clinical characteristics. Abbreviations: N, number; D, day; Ner, neratinib; TRA, trametinib; EGFR, epidermal growth factor receptor; ERBB, epidermal growth factor; IHC, immunohistochemistry; FISH, fluorescence in situ hybridization; NGS, next generation sequencing. * EGFR targeted therapy includes cetuximab; HER2 targeted therapies include trastuzumab, trastuzumab emtansine, pertuzumab and two investigational agents (a HER-2/4-1BB bispecific and a HER-2 tyrosine kinase inhibitor). Co-occurring alterations include the following: EGFR G724S and ERBB2 Y590C (n = 1); HER2 amplification by NGS and ERBB2 D769Y (n = 1); EGFR amplification by NGS and EGFR L861R (n = 1).
Toxicity assessment and adverse event
Twenty patients received study drugs and were evaluable for safety. A total of 9 patients were treated on dose level 1, with continuous oral administration of 160 mg neratinib in combination with 1 mg of trametinib daily. Three of 9 patients treated on dose level 1 discontinued treatment prematurely and failed to complete 75% of study drug due to clinical progression (n = 1), consent withdrawal after 7 doses (n = 1) and hospitalization due to anemia unrelated to study drugs (n = 1). Six of 9 patients completed at least 75% of study drug during the first cycle and were evaluable for DLT. Grade 3 diarrhea was reported as a DLT in 2 patients at dose level 1. Therefore, dose level minus 1 (160 mg neratinib daily with 1 mg trametinib 5 days on and 2 days off) was selected as the maximum tolerated dose (MTD) per protocol guidance. Eleven patients were treated at dose level minus 1, with 8 patients completing at least 75% of the study drug during the first cycle. Three patients failed to complete 75% of study drug in cycle one due to hospital admission for disease-related conditions (n = 2) and withdrawal of consent to seek other treatment options (n = 1). Safety assessments are summarized in Table 2. Treatment-related adverse events (TRAEs) were observed in 95% of patients on study. Seven patients (35%) experienced at least one grade 3 adverse event attributed to the study drugs. No patients experienced grade 4 or higher TRAEs. There was a higher frequency and grading of TRAEs reported for neratinib as compared to trametinib. Nineteen of 20 patients (95%) had TRAEs that were attributed to neratinib and 80% of patients had TRAEs that were attributed to trametinib. After the completion of cycle 1, three patients discontinued study treatment in subsequent cycles secondary to toxicity. These included 2 patients who came off trial due to diarrhea attributed to both study drugs during cycles 4 and 7, respectively, and a third patient who withdrew consent due to nausea and vomiting attributed to both study drugs during cycle 2.
To evaluate whether combination therapy with neratinib and trametinib increased toxicity, we compared the TRAEs of all grades (frequency > 10%) reported for neratinib or trametinib monotherapy in previous studies with the TRAEs observed in our combination therapy study. Single-agent neratinib at 180 mg, the most comparable dose level, has a toxicity profile and frequency similar to combination therapy [27]. Trametinib monotherapy (2 mg) has TRAEs that overlap with those of combination therapy. Table 3 shows the frequency of grade 3 TRAEs for trametinib or neratinib monotherapy and for combination therapy. Combination therapy had increased toxicity, with more instances of grade 3 AEs of diarrhea, nausea and vomiting compared with either single-agent neratinib or trametinib.
Pharmacokinetic (PK) analysis
PK parameters for neratinib and trametinib were assessed from available samples obtained from 8 patients dosed at dose level minus 1 (160 mg neratinib daily with 1 mg trametinib 5 days on and 2 days off). The PK parameters are summarized in Table 4, and the plasma concentration-time profiles after oral administration are shown in Fig. 2.
Antitumor activity
Among the 20 treated patients, 17 (85%) were evaluable for efficacy, defined as having RECIST v1.1 measurable disease and either a completed post-baseline tumor assessment by radiographic imaging or physician-determined clinical progression. Two patients withdrew from the study during the first cycle to seek other treatment options. One patient was deemed non-evaluable for follow-up tumor re-assessment due to a lack of contrast in the target lesions needed to accurately assess response. Figure 1a is a waterfall plot depicting the best response of these 17 patients. SD ≥ 4 months was observed in 2 patients. Both patients eventually withdrew consent from the study due to treatment-related diarrhea. Figure 1b shows the treatment duration of patients by swimmer plot. The median treatment duration for all patients was 1.8 months (95% CI 1.69 to 2.40 months). Four patients had stable disease by RECIST v1.1; however, 2 of these patients did not reach prolonged stable disease, which we defined as a duration of at least 4 months. As of the data cut for analysis (March 1, 2020), only 1 patient from the study remained alive and all patients were off study: 15 patients (75%) due to progressive disease (6 clinical PD and 9 PD by RECIST) and 5 who withdrew consent (3 due to toxicity and 2 who decided to seek other treatment options).
Discussion
Aberrations in the function or expression of ERBB receptor tyrosine kinases contribute to tumorigenesis. EGFR- or HER2-targeting agents are widely used and have shown substantial clinical efficacy. However, drug resistance often arises from aberrant or compensatory mechanisms involving downstream signaling proteins. Neratinib and trametinib have demonstrated clinical benefit when used as monotherapy or in combination with chemotherapy [26][27][28][29][30][31]. Furthermore, preclinical data have revealed synergistic effects of combination therapy with neratinib and trametinib, including enhanced tumor inhibition [18].
Here, we are the first to examine the safety, toxicity and preliminary anti-tumor efficacy in patients with advanced solid tumors treated with neratinib and trametinib.
Unfortunately, continuous daily dosing of neratinib and trametinib was poorly tolerated. Two patients experienced DLTs of diarrhea at the initial dose level resulting in dose de-escalation to dose level minus 1 which was neratinib at a dose of 160 mg once daily in combination with trametinib at a dose of 1 mg once daily on a 5 days on and 2 days off schedule. This cohort was expanded to include a minimum of six patients treated and evaluable for DLT to further evaluate safety and tolerability. In total, 8 patients were DLT evaluable, and dose level minus one was determined to be the MTD. The TRAEs observed in this study are consistent with previous studies and were mainly gastrointestinal toxicities such as diarrhea, nausea and vomiting [27,32]. Monotherapy with neratinib at 180 mg daily displayed a similar toxicity profile and frequency with combination therapy on this study. Supp. Table 2 summarizes the TRAEs of all grades with neratinib or trametinib, and in combination. However, combination therapy revealed increased instances of grade 3 AEs including diarrhea, nausea, and vomiting as compared to either single agent neratinib or trametinib. This is likely due to overlapping toxicities causing additive side effects (Supp. Table 3). A previous study with dabrafenib and trametinib combination therapy has shown improvement in treatment tolerance when patients transitioned from continuous to intermittent dosing schedule [33]. Further exploration with intermittent dosing schedules of neratinib in combination with trametinib should be considered for future studies to help alleviate overlapping toxicity.
In our PK cohort of 8 patients treated at dose level minus 1 (160 mg neratinib daily with 1 mg trametinib 5 days on and 2 days off), trametinib, when co-administered with neratinib, appears to have a much lower oral clearance (1.55 L/h), leading to higher exposure (AUC0-24) compared to the published single-agent value of 3.81 L/h [32,34]. In fact, in our cohort the mean trametinib exposure was 732 ng·h/mL (range 197 to 1054 ng·h/mL), which was closer to that reported for single-agent trametinib dosed at ≥ 2.5 mg [32]. Furthermore, at day 15, the accumulation ratios (AR) in our patient cohort were 2-3 times those reported by Infante et al. [32].
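For reference, the exposure, clearance and accumulation quantities discussed here follow standard non-compartmental relations. The sketch below is a minimal illustration of those relations only; the sampling times and concentrations are placeholders, not study data.

```python
# Minimal non-compartmental PK sketch (placeholder numbers, not study data).
import numpy as np

def auc_trapezoid(t_h, conc):
    """AUC over the sampling interval by the linear trapezoidal rule (ng*h/mL)."""
    return np.trapz(conc, t_h)

def apparent_clearance_l_per_h(dose_ng, auc):
    """CL/F = dose / AUC; dose in ng and AUC in ng*h/mL gives mL/h, so divide by 1000."""
    return dose_ng / auc / 1000.0

def accumulation_ratio(auc_day15, auc_day1):
    """AR = AUC at steady state (e.g. day 15) / AUC after the first dose."""
    return auc_day15 / auc_day1

t = np.array([0, 1, 2, 4, 8, 24.0])   # hours post-dose (placeholder)
c = np.array([0, 8, 12, 10, 6, 2.0])  # ng/mL (placeholder)
auc = auc_trapezoid(t, c)
print(auc, apparent_clearance_l_per_h(1_000_000, auc))  # 1 mg dose expressed in ng
```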
Studies of trametinib and neratinib have shown that each agent has a different metabolic pathway. Neratinib is metabolized via the cytochrome P450-3A4 (CYP3A4) pathway [35]. In our PK cohort, there were no observed changes in neratinib PK compared to existing single-agent PK data. However, neratinib is a known P-glycoprotein (P-gp) inhibitor and was shown to increase digoxin (a P-gp substrate) plasma concentrations (Cmax) by 54% and AUC by 32% [36,37]. A prior human study has shown that trametinib undergoes non-cytochrome-mediated metabolism, involving deacetylation via hydrolytic enzymes alone or in combination with glucuronidation [38]. Trametinib is also a substrate of the drug transporters P-gp and breast cancer resistance protein (Bcrp), suggesting that a P-gp and/or Bcrp substrate or inhibitor can modulate trametinib clearance and exposure [39]. Trametinib may be a low-level CYP3A4 inducer and, therefore, could lead to higher clearance of agents that are mainly metabolized by this same isoenzyme [38]. Fortunately, recent data revealed that concomitant administration of trametinib and oral contraceptives (OC), which are well known to be metabolized by CYP3A isoenzymes, showed no significant differences in the PK disposition of OC compared to OC alone, and no changes in the clinical efficacy of OC [40].
We believe the drug-drug interaction observed in our PK analysis is due to combination therapy with neratinib and trametinib, with neratinib inhibiting the clearance of trametinib via the P-gp efflux pathway and leading to very high trametinib exposure. There may be other factors, such as saturation of the deacetylation and glucuronidation enzyme pathways, which may further lower trametinib clearance and increase total exposure. With trametinib as a substrate for P-gp, the use of strong P-gp inhibitors (e.g., ketoconazole, itraconazole, clarithromycin, erythromycin, ritonavir, verapamil) or P-gp inducers (e.g., phenytoin, rifampin, St. John's wort, corticosteroids, efavirenz, nevirapine) should be avoided whenever possible when taking trametinib [41]. If clinically necessary, the use of known P-gp inhibitors or inducers during trametinib therapy requires close monitoring and evaluation to avoid potential adverse events or a decrease in clinical efficacy.
Limited anti-tumor efficacy was seen across all treated patients. The lack of meaningful response may be due to suboptimal dosing of both neratinib and trametinib. No partial responses were reported, and SD ≥ 4 months was seen in only 2 patients. Both patients were treated at dose level 1. No patients obtained SD ≥ 4 months or PR at dose level minus 1, the declared MTD.
Interestingly, both patients with SD ≥ 4 months harbored EGFR aberrations, including an ovarian cancer patient with KRAS G12D and EGFR G724S mutations and a salivary gland cancer patient with EGFR amplification by IHC. Zhao et al. showed that combination therapy with neratinib and trametinib induced reduction of ERK1/2 phosphorylation but failed to trigger robust anti-apoptotic activity. Zhao et al. also found that trametinib treatment resulted in downregulation of proteins involved in the MAPK and AKT pathways but increased total levels of EGFR [18]. This is in line with a prior study that correlated EGFR aberrations with favorable tyrosine kinase inhibitor treatment outcomes in lung cancer [42]. Taken together, these data may suggest that EGFR aberrations could be a response predictor for neratinib and trametinib treatment but the numbers in our study are too small to make any true assessment.
There are several limitations to this study. First, we were unable to dose escalate to meaningful doses of either agent in combination due to overlapping toxicity resulting in poor patient tolerance. Second, there was heterogeneity in the molecular aberrations allowable for enrollment onto study. Five of 11 (45.5%) patients enrolled on dose level minus 1 had only activating KRAS mutations with no co-occurring pan-ERBB aberrations. It is unclear if the lack of response at this dose level is due to suboptimal dosing versus the type of molecular aberration enrolled. Third, the sample size is small with patients being heavily pre-treated with a median of 3 lines of prior therapy and consisting of multiple solid tumor types making analysis challenging.
In conclusion, neratinib and trametinib combination therapy was not tolerable at dose level 1. MTD was declared as dose level minus 1 (neratinib 160 mg daily with trametinib 1 mg, 5 days on and 2 days off) and had limited clinical activity. This may be due to suboptimal drug dosing of trametinib and neratinib. The increased plasma exposure of trametinib may have contributed to the toxicity observed. Based on the results from our PK cohort, there was significant drug-drug interaction with an increase in trametinib plasma concentrations and exposure due to a decrease in clearance of trametinib. We hypothesize that this could be due to neratinib-induced inhibition of trametinib clearance via P-gp efflux mechanisms. Further work is needed in determining the best dose and/or schedule for treating patients with both agents in combination to improve tolerability. Additionally, further work needs to be done to determine the appropriate tumor types and molecular aberrations for enrollment.
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1007/s00280-023-04545-4.
Consent to publish A signed informed consent was obtained from each participant prior to any study procedure.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 5,597.2 | 2023-06-14T00:00:00.000 | [
"Medicine",
"Biology",
"Chemistry"
] |
THz porous fibers: design, fabrication and experimental characterization
Porous fibers have been identified as a means of achieving low losses, low dispersion and high birefringence among THz polymer fibers. By exploiting optical fiber fabrication techniques, two types of THz polymer porous fibers — spider-web and rectangular porous fibers — with 57% and 65% porosity have been fabricated. The effective refractive index measured by terahertz time domain spectroscopy shows a good agreement between the theoretical and experimental results indicating a lower dispersion for THz porous fiber compared to THz microwires. A birefringence of 0.012 at 0.65 THz is also reported for rectangular porous fiber. © 2009 Optical Society of America
OCIS codes: (260.3090) Infrared, far; (060.2280) Fiber design and fabrication; (230.7370) Waveguides; (260.2030) Dispersion; (260.1440) Birefringence.
References and links
1. S. Atakaramians, S. Afshar Vahid, B. M. Fischer, D. Abbott, and T. M. Monro, "Porous fibers: a novel approach to low loss THz waveguides," Opt. Express 16, 8845–8854 (2008).
2. L.-J. Chen, H.-W. Chen, T.-F. Kao, J.-Y. Lu, and C.-K. Sun, "Low-loss subwavelength plastic fiber for terahertz waveguiding," Opt. Lett. 31, 308–310 (2006).
3. S. Afshar Vahid, S. Atakaramians, B. M. Fischer, H. Ebendorff-Heidepriem, T. M. Monro, and D. Abbott, "Low loss, low dispersion T-ray transmission in microwires," in CLEO/QELS, p. JWA105 (Baltimore, Maryland, 2007).
4. W. Withayachumnankul, G. M. Png, X. Yin, S. Atakaramians, I. Jones, H. Lin, B. Ung, J. Balakrishnan, B. W.-H. Ng, B. Ferguson, S. P. Mickan, B. M. Fischer, and D. Abbott, "T-ray sensing and imaging," Proc. IEEE 95, 1528–1558 (2007).
5. J.-Y. Lu, C.-P. Yu, H.-C. Chang, H.-W. Chen, Y.-T. Li, C.-L. Pan, and C.-K. Sun, "Terahertz air-core microstructure fiber," Appl. Phys. Lett. 92, 064105 (2008).
6. K. Nielsen, H. K. Rasmussen, A. J. L. Adam, P. C. M. Planken, O. Bang, and P. U. Jepsen, "Bendable, low-loss Topas fibers for the terahertz frequency range," Opt. Express 17, 8592–8601 (2009).
7. B. Bowden, J. A. Harrington, and O. Mitrofanov, "Silver/polystyrene-coated hollow glass waveguides for the transmission of terahertz radiation," Opt. Lett. 32, 2945–2947 (2007).
8. A. Hassani, A. Dupuis, and M. Skorobogatiy, "Porous polymer fibers for low-loss Terahertz guiding," Opt. Express 16, 6340–6351 (2008).
9. A. Hassani, A. Dupuis, and M. Skorobogatiy, "Low loss porous terahertz fibers containing multiple subwavelength holes," Appl. Phys. Lett. 92, 071101 (2008).
10. S. Atakaramians, S. Afshar, B. M. Fischer, D. Abbott, and T. M. Monro, "Low loss, low dispersion and highly birefringent terahertz porous fibers," Opt. Commun. 282, 36–38 (2008).
11. H. Han, H. Park, M. Cho, and J. Kim, "Terahertz pulse propagation in a plastic photonic crystal fiber," Appl. Phys. Lett. 80, 2634–2636 (2002).
12. M. Cho, J. Kim, H. Park, Y. Han, K. Moon, E. Jung, and H. Han, "Highly birefringent terahertz polarization maintaining plastic photonic crystal fibers," Opt. Express 16, 7–12 (2008).
13. C. S. Ponseca, R. Pobre, E. Estacio, N. Sarukura, A. Argyros, M. C. J. Large, and M. A. van Eijkelenborg, "Transmission of terahertz radiation using a microstructured polymer optical fiber," Opt. Lett. 33, 902–904 (2008).
14. T. M. Monro and H. Ebendorff-Heidepriem, "Progress in microstructured optical fibers," Annual Review of Materials Research 36, 467–495 (2006).
15. A. Dupuis, J.-F. Allard, D. Morris, K. Stoeffler, C. Dubois, and M. Skorobogatiy, "Fabrication and THz loss measurements of porous subwavelength fibers using a directional coupler method," Opt. Express 17, 8012–8028 (2009).
16. G. Barton, M. A. van Eijkelenborg, G. Henry, M. C. J. Large, and J. Zagari, "Fabrication of microstructured polymer optical fibres," Optical Fiber Technology 10, 325–335 (2004).
17. H. Ebendorff-Heidepriem and T. M. Monro, "Extrusion of complex preforms for microstructured optical fibers," Opt. Express 15, 15086–15096 (2007).
18. H. Ebendorff-Heidepriem, T. M. Monro, M. A. van Eijkelenborg, and M. C. J. Large, "Extruded high-NA microstructured polymer optical fibre," Opt. Commun. 273, 133–137 (2007).
19. S. H. Law, M. A. van Eijkelenborg, G. W. Barton, C. Yan, R. Lwin, and J. Gan, "Cleaved end-face quality of microstructured polymer optical fibres," Opt. Commun. 265, 513–520 (2006).
20. M. Wächter, M. Nagel, and H. Kurz, "Metallic slit waveguide for dispersion-free low-loss terahertz signal transmission," Appl. Phys. Lett. 90, 061111 (2007).
21. D. Grischkowsky, "Optoelectronic characterization of transmission lines and waveguides by terahertz time-domain spectroscopy," IEEE J. Sel. Top. Quantum Electron. 6, 1122–1135 (2000).
22. S. Atakaramians, S. Afshar Vahid, B. M. Fischer, H. Ebendorff-Heidepriem, T. M. Monro, and D. Abbott, "Low loss terahertz transmission," in Proceedings SPIE Micro- and Nanotechnology: Smart Materials, Nano- and Micro-Smart Systems, vol. 6414, art. no. 64140I (Adelaide, Australia, 10–13 Dec. 2006).
Introduction
In recent years there has been increased interest in low loss and low dispersion THz waveguides as potential substitutes for free-space optics in terahertz spectroscopy and imaging systems that require greater electromagnetic confinement. A number of waveguide solutions based on technologies from both microwave and optics, as reviewed in Ref. [1], have been studied. Among the dielectric solutions proposed, solid-core sub-wavelength fibers [2] (so-called THz microwires [3,4]), hollow-core and solid-core microstructured fibers [5,6], and Ag/PS-coated hollow-core glass fibers [7] have the lowest losses reported in the literature. These waveguide solutions are either large in diameter (20-30 mm), which reduces the flexibility of the waveguide structure, or are only suitable for relatively narrow-band applications due to limitations in photonic bandgap bandwidth or loss restrictions.
Recently, a novel class of THz fibers, porous fibers, was proposed [1,8,9]. These fibers are air-clad fibers with diameters less than or comparable to the operating wavelengths (≤ 600 μm) and with sub-wavelength features in the core, which allow low loss propagation and improved confinement of the field compared to microwires [1]. It has also been theoretically demonstrated [10] that porous fibers exhibit a smaller decrease in the group velocity, i.e. lower dispersion, relative to microwires. Furthermore, it has been shown [10] that these fibers can be designed to maintain the polarization of the field by using asymmetrical sub-wavelength air-holes.
In terms of the state of the art in THz waveguide fabrication, THz hollow-core and solid-core microstructured fibers [5,11,12] have been fabricated by stacking Teflon or high-density polyethylene capillary tubes. The tubes were stacked to form a two-dimensional triangular lattice and then fused together either by using thin layers of polyethylene film or by heat treatment. No fiber drawing was required for these THz waveguides, since the dimensions achieved by stacking were suitable for THz guidance. Another approach proposed for the fabrication of THz solid- and hollow-core microstructured polymer waveguides [6,13] was to drill the hole pattern into a 60-70 mm diameter polymer preform using a computer-controlled mill and to draw the preform down to a 6 mm diameter fiber.
Porous fibers have outer diameters less than or comparable to the operating wavelength (≤ 600 μm), resulting in sub-wavelength features within the transverse profile of the fibers. For preforms produced using the stacking method, the stacked preform needs to be drawn. As is known from optical fiber fabrication, to accomplish this the bundle is usually held together by an outer jacket [14] to enable pressurization of the holes in the preform. The likelihood of hole closure is very high in this method and the maximum porosity achieved so far is 8-18% [15]. Moreover, this method is only suitable for the fabrication of porous fibers with a hexagonal array of circular air-holes. The fabrication of a fiber by drilling the holes in the preform is very time consuming for a large number of holes, and this method has a restricted maximum porosity due to the mechanical constraints on the hole size and the wall thickness between the holes [16]. Furthermore, the shape of the holes is limited to circular. Dupuis et al. have proposed a subtraction technique for porous fiber fabrication [15]. In this method, after drawing down the composite preform, the fiber segments were submerged in a solvent for several days to etch away the material residing in the holes. The fibers were then left to dry for several days. Although the air-holes were preserved in this method, the fabrication process is very lengthy and the choice of material is limited in terms of the dissolving solvent and the melting temperature differences between the materials. Large melting temperature differences can result in material degradation. Moreover, the maximum porosity achieved was 29-45%.
In terms of loss characterization of porous fibers, Dupuis et al. have recently reported a loss value of 0.01 cm−1 for a porous fiber with a 380 μm diameter and 40% porosity [15]. A non-destructive directional coupler method, where a second fiber is translated along the length of the test fiber, is used to measure the transmission losses [15]. Also, during the characterization of a microstructured TOPAS THz fiber (fiber SMA with 28% porosity [6]), Nielsen et al. have observed THz microporous guidance at 0.2 THz [6]. No loss value has been reported for this fiber.
In this paper, to the best of our knowledge, we demonstrate the first fabrication of highly porous THz fibers with symmetrical and asymmetrical sub-wavelength air-holes, together with experimental characterization of their effective refractive indices (n_eff). These porous fibers cannot be fabricated by stacking or drilling methods. The experimental results agree with the predicted effective refractive index values, which validates both the low dispersion characteristics of the porous fibers compared to their microwire counterparts and the birefringence of the asymmetrical porous fiber.
Fiber design and fabrication
The polymer material used for the fabrication of the porous fibers and the microwire is polymethyl methacrylate (PMMA), whose optical properties in the THz region are reported in Ref. [4]. As a reference, at 0.5 THz the absorption coefficient and refractive index of PMMA are 4.2 cm−1 and 1.65, respectively. The fibers are fabricated in a two-step process. First, the preforms with the macroscopic structure are manufactured using the extrusion technique, which has been demonstrated to be viable not only for soft glasses [17] but also for polymers [18]. The preforms are extruded by heating a bulk polymer billet to a temperature where the material becomes soft (170-180 °C). The soft material is then forced through an extrusion die using a ram extruder at a fixed speed. The die exit geometry determines the preform cross-section. In the next step, the preform, of 10-15 mm diameter and 180 mm length, is drawn down to lengths of more than 10 meters of fiber with outer diameters of a few hundred microns using a fiber drawing tower.
The critical step in the porous fiber fabrication is the die design. As stated in Refs. [1,10], the loss and dispersion of a porous fiber depend on the porosity of the fiber (the fraction of the air-holes relative to the total core area) and on the size (but not the shape) of the air-holes. We start the die design with a hexagonal array of circular air-holes, which is one of the most studied and common arrangements of air-holes in microstructured optical fibers. The cross-section of the die exit with a hexagonal array of circular air-holes is shown in Fig. 1(a). The porosity achieved is limited to 47% due to extrusion die machining constraints, such as a minimum wall thickness of 0.6 mm between the holes. In the fiber, the porosity will be even lower than the value given by the die exit due to thickening of the nodes between the air-holes [18]. Recent advances in fiber preform extrusion and die design for microstructured fibers [17] have allowed us to fabricate fibers with non-circular, sub-wavelength size air-holes. This results in a thinner layer of material between non-circular air-holes in comparison to circular air-holes, which increases the achievable porosity of the fibers. This increase in porosity using the new die design opens up new opportunities in THz porous fiber fabrication.
In this paper we consider two types of porous fibers: porous fibers with symmetrical and with asymmetrical sub-wavelength air-holes. The cross-sections of the designed die exits for the symmetrical (hereafter called spider-web) and asymmetrical (hereafter called rectangular) porous fibers are shown in Figs. 1(b) and 1(c), respectively. The cross-sections of the extruded preforms of the spider-web and rectangular porous fibers are shown in Figs. 1(d) and 1(e), respectively. Next, the extruded preforms are drawn to porous fibers. A spider-web fiber preform of 12 mm diameter and 180 mm length was drawn down to two different outer diameters of 200-250 μm and 300-350 μm, whereas the rectangular fiber preform of 10 mm diameter and 180 mm length was drawn down to three different sets of outer diameters (350-400, 450-500, and 550-600 μm). The porosities of the spider-web and rectangular porous fibers given by the die design are 67% and 71%, respectively. Using scanning electron microscope (SEM) images, the porosities of the fibers are measured to be 57% and 65%, respectively, which are smaller than the values given by the die design due to rounding of the corners and thickening of the struts. The cleaving of these polymer porous fibers is non-trivial because they are easily squashed and deformed by conventional cleaving blades, as shown in Figs. 2(a) and 2(b) for the spider-web and rectangular porous fibers, respectively. Heating the cleaving blade and the fiber to 70-80 °C, as proposed by Law et al. for microstructured polymer optical fibers [19], improves the cleaved end-face of the fibers. It is worth mentioning that the structure still gets deformed, especially the outer ring, as shown in Figs. 2(c) and 2(d), because of the high porosity and the absence of the outer solid region present in optical microstructured fibers to date. Furthermore, we observe that as long as the cleaving force is in line with the struts of the rectangular porous fiber, the achieved end-face of these fibers is better than that of the spider-web porous fibers.
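The porosity values quoted above (57% and 65%) are obtained from SEM images of the drawn fibers. The sketch below is a minimal, heavily simplified illustration of such an estimate; the file name, fixed intensity threshold and circular core mask are assumptions, not the image-analysis procedure actually used by the authors.

```python
# Minimal sketch (assumptions, not the authors' procedure): porosity from a grayscale SEM image.
import numpy as np
import imageio.v3 as iio

img = iio.imread("sem.png").astype(float)   # hypothetical file name
if img.ndim == 3:                           # collapse RGB to grayscale if needed
    img = img.mean(axis=-1)

# Circular mask covering the fiber core (centre and radius are assumptions).
ny, nx = img.shape
y, x = np.mgrid[0:ny, 0:nx]
cy, cx, r = ny / 2, nx / 2, 0.45 * min(ny, nx)
core = (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

air = img < 0.5 * img[core].max()           # crude threshold: dark pixels taken as air
porosity = air[core].sum() / core.sum()     # fraction of air within the core region
print(f"estimated porosity: {porosity:.1%}")
```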
The last method explored for cleaving the porous fibers is focused ion beam (FIB) milling. Gallium ions are accelerated to an energy of 30 keV and then focused onto the sample. An ion beam current of 21 nA is used. The resulting cross-sections of the FIB-cleaved spider-web and rectangular porous fibers are shown in Figs. 2(e) and 2(f), respectively. The advantage of FIB milling for cleaving porous fibers is that it provides smooth cuts across the entire fiber end-face. However, the time required to cleave a porous fiber with a 400 μm diameter is about 17.5 hours.
To compare the properties of the porous fibers with a corresponding microwire, we have also used an extruded polymer optical fiber with a 250 μm outer diameter [18] as a THz microwire. The THz properties of the billet used for the microwire are the same as those of the porous fibers.
Porous fiber modeling and characterization
As discussed in Section 2, due to rounding of the corners and thickening of the polymer struts during extrusion and fiber drawing, the cross-sections of the fabricated fibers [Figs. 2(e) and 2(f)] deviate from the designed die-exit geometries. In order to observe the effect of this structural deformation on the THz properties of the fabricated porous fibers (α_eff and n_eff), the theoretical values of the effective material loss and effective refractive index of the ideal and real porous fibers are compared in Figs. 3(a) and 3(b). For the ideal porous fibers, the die exit cross-sections [Figs. 1(b) and 1(c)] are scaled down proportionally to a 350 μm outer diameter (the diameter of the fabricated porous fiber) and used for numerical modeling, whereas for the real porous fibers the SEM images of the cross-sections of the fabricated porous fibers [Figs. 2(e) and 2(f)] are used for numerical modeling. A Finite Element Modeling (FEM) technique, implemented in the commercial FEM package COMSOL 3.5, is used to calculate the theoretical values of α_eff and n_eff. Different mesh densities are employed in different regions within the cross-section in order to achieve convergence of the calculated parameters. For comparison, the THz properties of a 300 μm diameter ideal circular air-hole porous fiber with a hexagonal arrangement are also included. For these diameters, the effective material loss of the hexagonal array circular porous fiber at 0.3 THz is of the same order of magnitude as that of the spider-web porous fiber. Unsurprisingly, the decrease in the porosity values of the fabricated fibers (due to rounding of the corners and thickening of the polymer struts) increases the expected effective material loss and effective refractive index. Despite this increase, the characteristic values (α_eff and n_eff) for the real spider-web and rectangular porous fibers are still lower than those of the ideal hexagonal array circular air-hole porous fiber.
The THz properties of the fabricated porous fibers are investigated using terahertz time-domain spectroscopy (THz-TDS). A mode-locked Ti:sapphire laser with a pulse width of less than 170 fs, a central wavelength of 800 nm and a repetition rate of 76 MHz is used to drive the emitter, a photoconductive antenna array [20], and the detector. The detector is a photoconductive switch consisting of a center-excited dipole between coplanar strip lines with a 5 μm gap. The fiber tips are placed directly on the emitter and detector. The schematic of the THz-TDS setup and the photoconductive antennas is shown in Fig. 4.
Assuming single mode propagation, the equation governing the input and output electric fields of the fiber can be written in the frequency domain as [21]

E_out(ω) = C T_1 T_2 exp(−α_eff L/2) exp(−i β_eff L) E_ref(ω),   (1)

where E_ref(ω) and E_out(ω) are the complex electric fields at angular frequency ω at the entrance and exit of the fiber, respectively; T_1 and T_2 are the total transmission coefficients that take into account the reflections at the entrance and exit faces, respectively; C is the coupling coefficient, which is the same for the entrance and exit faces; β_eff is the propagation constant of the fundamental mode; α_eff is the effective material loss that the propagating mode experiences; and L is the fiber length. At least two different lengths of a fiber are required to determine the THz properties (α_eff and β_eff) of the fiber. Applying Eq. (1) to two different lengths L_1 and L_2, the transfer function determined from the ratio of E_out1(ω) and E_out2(ω) reads

E_out1(ω)/E_out2(ω) = exp(−α_eff (L_1 − L_2)/2) exp(−i β_eff (L_1 − L_2)).   (2)

The C, T_1 and T_2 coefficients cancel provided that the positions of the fiber and antenna do not change, e.g. as would occur during a cut-back measurement. Then α_eff and β_eff of the fiber are obtained from the amplitude and the phase of the transfer function (Eq. (2)).
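A minimal numerical sketch of this extraction, under the stated single-mode assumption, might look as follows. The function and variable names are illustrative, and windowing, zero-padding and the exact sign convention of the measured phase are glossed over.

```python
# Minimal sketch (assumptions, not the authors' code): extracting n_eff and alpha_eff
# from THz-TDS traces of two fiber lengths via the transfer function of Eq. (2).
# Assumes equally sampled time-domain fields e1(t), e2(t) for lengths L1 > L2 (metres).
import numpy as np

def fiber_properties(t, e1, e2, L1, L2):
    """Return frequency axis (THz), n_eff and power loss alpha_eff (1/cm)."""
    dt = t[1] - t[0]                         # sampling interval in seconds
    f = np.fft.rfftfreq(len(t), dt)          # frequency axis in Hz
    H = np.fft.rfft(e1) / np.fft.rfft(e2)    # transfer function, Eq. (2)
    dL = L1 - L2
    c = 299792458.0
    phase = np.unwrap(np.angle(H))           # equals -beta_eff * dL for this convention
    with np.errstate(divide="ignore", invalid="ignore"):
        n_eff = -phase * c / (2 * np.pi * f * dL)
        alpha_eff = -2.0 * np.log(np.abs(H)) / dL   # per metre, power convention
    return f / 1e12, n_eff, alpha_eff / 100.0       # THz, unitless, 1/cm
```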
At this stage, it is not straightforward to conduct a cut-back measurement on porous fibers due to the cleaving complexities discussed in Section 2. Therefore, three different lengths of each fiber are considered to determine the THz properties of the spider-web and rectangular porous fibers. This introduces a large error in the loss measurement of the fibers, since the amplitude of the field guided by the fiber depends on the alignment of the fiber tip with the antenna (i.e. the coupling efficiency) and on the cleaved end-faces of the fibers. However, the effective refractive index of the fibers, which depends on the phase of the transfer function and is independent of the C, T_1 and T_2 coefficients, can be determined. Figures 5(a), 5(b) and 5(c) show the electric fields of the terahertz pulse measured for three different lengths of the 200 μm and 350 μm diameter spider-web porous fibers and the 250 μm diameter microwire, respectively. The pulses are separated vertically for clear display. The red (top), green (middle) and blue (bottom) lines represent the pulses through the three different lengths, from the longest to the shortest. The fiber lengths used are 24.4, 21.0 and 15.4 mm for the 200 μm spider-web porous fiber; 24.9, 20.4 and mm for the 350 μm spider-web porous fiber; and 25.0, 20.2 and 17.7 mm for the microwire. Each pulse represents a single scan with a time constant of 1 s. Due to the antenna structure there is a reflection in the time-domain signal roughly 10 ps after the main pulse. In order to avoid artifacts caused by this reflection, the signals are cut off at the zero crossing just before the reflection (indicated by the arrows in Fig. 5(a)) and padded with zeros. The same number of peaks is considered for the three lengths of each fiber structure.
The effective refractive index of each fiber is determined by averaging the three individual effective refractive indices obtained from comparison of each pair of the three scans. Figure 6(a) shows the experimental results measured for the 200 μm and 350 μm diameter spider-web porous fibers and the 250 μm diameter microwire, together with the theoretically calculated values for the real fiber structures (obtained from the SEM images).
Two major sources of error are considered: fiber length and data-processing uncertainties, as explained below. A ±0.1 mm variation is assumed for the length uncertainty. In order to remove low-frequency 1/f noise, two techniques have been used: baseline removal and high-pass filtering. The difference between these methods and the effect of varying the cut-off frequency of the high-pass filter are taken as the data-processing uncertainty. The error bars shown in Fig. 6(a) represent the quadrature sum of the standard deviations obtained from the two sources of uncertainty described above. There is good agreement between the theoretical and experimentally determined effective refractive indices. It is worth mentioning that the difference between the experimental and theoretical values at lower effective refractive index values is most likely due to slight bending of the fibers.
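A minimal sketch of combining the two uncertainty contributions in quadrature, assuming they are independent (the variable names are illustrative):

```python
# Minimal illustrative sketch: quadrature sum of independent standard deviations.
import numpy as np

def combined_uncertainty(sigma_length, sigma_processing):
    """Return sqrt(sigma_length^2 + sigma_processing^2) elementwise."""
    return np.sqrt(np.asarray(sigma_length) ** 2 + np.asarray(sigma_processing) ** 2)
```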
Figure 6(a) indicates that the refractive index of the porous fibers is a relatively flat function of frequency compared to that of the microwire. This corresponds to a smaller drop in the group velocity of the porous fiber compared to that of the microwire, resulting in lower dispersion for the THz porous fiber, as discussed in Ref. [10]. For further clarification, the theoretically calculated normalized group velocity (ν_g = ∂ω/∂β_eff) for the real 200 μm and 350 μm diameter spider-web porous fibers and the microwire is shown in the inset of Fig. 6(a). The experiment is repeated to determine the THz properties of the x- and y-polarizations of the fundamental mode of a 350 μm diameter rectangular porous fiber. The fiber is mounted on a rotational mount and the alignment of the fiber tip with the photoconductive antenna is monitored with a magnifier. The x- and y-polarization modes are acquired when the THz pulse generated on the antenna is parallel to the long and short sides of the rectangles, respectively. The fiber lengths considered for the experiment are 30.0, 34.1 and 38.2 mm.
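For reference, the normalized group velocity plotted in the inset can be evaluated from tabulated n_eff data by finite differences. The sketch below is an illustration of that post-processing step, not the authors' code.

```python
# Minimal sketch (an assumption about post-processing): v_g = d(omega)/d(beta_eff)
# evaluated by finite differences from a tabulated effective index n_eff(f).
import numpy as np

def normalized_group_velocity(f_thz, n_eff):
    """f_thz: frequencies in THz; n_eff: effective index; returns v_g / c."""
    c = 299792458.0
    omega = 2 * np.pi * np.asarray(f_thz) * 1e12   # angular frequency (rad/s)
    beta = np.asarray(n_eff) * omega / c           # propagation constant (rad/m)
    return np.gradient(omega, beta) / c            # d(omega)/d(beta), normalized to c
```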
Figure 6(b) shows the experimentally measured refractive indices of the x- and y-polarization modes of the 350 μm diameter rectangular porous fiber, together with the theoretically calculated values for the real fiber structure. Experimentally, a birefringence of 0.012 is achieved at 0.65 THz, which is comparable to the expected theoretical result. As mentioned before, at lower effective refractive index values the experimental data do not match the theory well, and this can be attributed to unwanted fiber bending. The slight oscillation in the measured n_eff values could be due to the different signal noise levels of the three lengths. It should be noted that the theoretical calculations based on the SEM images of the fabricated porous fibers confirm
Conclusion and discussion
In conclusion, we successfully produced spider-web (symmetrical) and rectangular (asymmetrical) air-hole porous fibers with high porosities of 57% and 65%, respectively. The porous polymer fibers are mechanically soft and therefore require non-trivial cleaving. Among the methods applied, the best cleaving quality is obtained when FIB milling is employed. The calculated effective material loss and effective refractive index of the ideal and real spider-web and rectangular porous fibers show that α_eff < 0.25 cm−1 for f < 0.8 THz can be achieved. The good agreement between the measured and calculated n_eff indicates the low dispersion characteristics of these porous fibers compared to microwires. A birefringence of 0.012 at 0.65 THz was achieved for the rectangular porous fiber. This confirms that these fibers, with low loss and low dispersion, and which can be practically designed to maintain the polarization of the field, are a promising polymer waveguide solution and a good substitute for free-space THz propagation. This also opens up new opportunities in THz biosensing, where the porous fibers can be used to sense ultra-small sample sizes.
In order to measure the effective material loss of the fibers, it is necessary to implement either a cut-back method [6] or a directional coupler method [15], in which the coupling parameters are not required. With the approach applied in this paper, it is not straightforward to measure the loss values of the porous fibers, since the coupling parameters depend strongly on the reproducibility of the cleaved end-face quality and on the relative position of the fiber tips with respect to the antenna.
A practical consideration is the annealing of the polymer porous fibers. This will reduce the stress generated during fiber drawing, resulting in straighter fibers for the experiment. Subsequently, the radiation losses due to fiber curvature will be suppressed. An alternative approach is the fabrication of the fibers using a soft glass such as F2, which has the lowest loss in the THz regime among the soft glasses suitable for structured preform extrusion [22]. Moreover, cleaving such a fiber would be much easier than cleaving porous polymer fibers. However, the effective material loss would be higher than that of PMMA.
Fig. 1. The die design of the (a) hexagonal array of circular, (b) spider-web and (c) rectangular air-hole porous fibers. The extruded preform of the (d) spider-web (12 mm diameter) and (e) rectangular (10 mm diameter) porous fibers. The cross-section of the (f) spider-web and (g) rectangular porous fibers.
Fig. 2. Cleaved end-face of the (a) spider-web and (b) rectangular porous fibers using conventional blades. Cleaved end-face of the (c) spider-web and (d) rectangular porous fibers obtained by heating the blade and fiber before cleaving. Cleaved end-face of the (e) spider-web and (f) rectangular air-hole porous fibers using focused ion beam milling.
Fig. 3. (a) Effective material loss and (b) effective refractive index of ideal (dashed lines) and real (solid lines) hexagonal array circular (black), spider-web (red) and rectangular (blue) porous fibers. The blue and cyan curves stand for the x- and y-polarizations of the fundamental mode of the rectangular porous fiber.
Fig. 4. Schematic of the terahertz time-domain spectroscopy setup. The emitter and detector are shown in the inset.
Fig. 5. The electric field of the terahertz pulse measured for the (a) 200 μm and (b) 350 μm diameter spider-web porous fibers and (c) the 250 μm diameter microwire. The top (red), middle (green) and bottom (blue) signals represent the measured terahertz pulses for the long, medium and short fiber lengths, respectively. The vertical offset has been introduced intentionally for clear display. The arrows in (a) indicate the cut-off point for each fiber length.
Fig. 6. Effective refractive index of the 200 μm (green) and 350 μm (red) diameter spider-web porous fibers, the 250 μm diameter microwire (black), and the x-polarization (blue) and y-polarization (cyan) of the 350 μm diameter rectangular porous fiber as a function of frequency. The solid lines represent the theoretical results based on the real fiber structure while the circles represent the measured experimental results.
"Engineering",
"Materials Science",
"Physics"
] |
6D attractors and black hole microstates
We find a family of AdS$_2\times \mathcal{M}_4$ supersymmetric solutions of the six-dimensional $\mathrm{F}(4)$ gauged supergravity coupled to one vector multiplet that arises as a low energy description of massive type IIA supergravity on (warped) AdS$_6\times S^4$. $\mathcal{M}_4$ is either a K\"ahler-Einstein manifold or a product of two Riemann surfaces with a constant curvature metric. These solutions correspond to the near-horizon region of a family of static magnetically charged black holes. In the case where $\mathcal{M}_4$ is a product of Riemann surfaces, we successfully compare their entropy to a microscopic counting based on the recently computed topologically twisted index of the five-dimensional $\mathcal{N}=1$ $\mathrm{USp}(2 N)$ theory with $N_f$ fundamental flavors and an antisymmetric matter field. Furthermore, our results suggest that the near-horizon regions exhibit an attractor mechanism for the scalars in the matter coupled $\mathrm{F}(4)$ gauged supergravity, and we give a proposal for it.
The five-dimensional topologically twisted index, which is the partition function of a five-dimensional N = 1 theory on M 4 × S 1 with an abelian topological twist along M 4 , has been recently computed when M 4 is a toric Kähler manifold [32] or the product of two Riemann surfaces [32,33]. Therein, the N = 1 USp(2N ) gauge theory with N f hypermultiplets in the fundamental representation and one hypermultiplet in the antisymmetric representation of USp(2N ), arising on the worldvolume of D4-branes near D8-branes and orientifolds [34], has been analyzed in the large N limit. With some assumptions on the relevant saddle-point, the large N limit of the index for M 4 = Σ g 1 × Σ g 2 has been evaluated as a function of magnetic charges and chemical potentials for the Cartan subgroup of the SU(2) M global symmetry of the theory [32]. 1 This result provides a prediction for the entropy of a family of AdS 6 magnetically charged black holes in massive type IIA supergravity. This prediction has been successfully tested for the only existing black hole solution, the so-called universal one [35,36] with a particular value of the magnetic charge, using the results for the entropy given in [37]. It is the purpose of this paper to find new black hole solutions, explicitly depending on a set of magnetic charges, and show that their entropy is correctly accounted by the topologically twisted index.
To this end, we will consider a six-dimensional truncation of the supersymmetric warped AdS 6 × S 4 background of massive type IIA supergravity [38] dual to the USp(2N ) theory. This truncation is described by an F(4) gauged supergravity coupled to vector multiplets [39][40][41][42]. Furthermore, we will restrict ourselves to one vector multiplet corresponding to the Cartan subgroup of the SU(2) M global symmetry of the five-dimensional superconformal field theory (SCFT).
As a warm-up, and to test the consistency of the truncation, we consider the background AdS 4 × Σ g which corresponds to a twisted compactification of the five-dimensional theory on Σ g . We successfully compare the free energy of the solution with the field theory computation in [33] and the ten-dimensional gravity computation in [43].
We then find new black hole horizon geometries of the form AdS_2 × Σ_{g_1} × Σ_{g_2}. We turn on an abelian gauge field inside the SU(2) R-symmetry that performs the topological twist by cancelling the spin connection, and two magnetic fluxes p_1 and p_2 (one along each Riemann surface) for the U(1) gauge field in the additional vector multiplet. We thus have a two-parameter family of magnetically charged black holes. We compare the entropy with the value of the topologically twisted index Z(p_1, p_2, ∆), which also depends on a chemical potential for the U(1) ⊂ SU(2)_M symmetry, and we find that the statistical entropy S_BH of the black holes as a function of the magnetic charges is obtained by evaluating Z(p_1, p_2, ∆) at its critical point. With a convenient democratic parameterization for the fluxes and chemical potentials, see (5.7) and (5.11), the explicit form of the index I_SCFT(s_I, t_I, ∆_I) can be written in closed form, with an overall factor of 4√2/15 [32]. This structure is reminiscent of an analogous result for AdS_4 black holes [7-10, 17, 18]. This analogy, and the relation to other interesting field theory quantities like the S^5 free energy and the effective twisted superpotential of the partial compactification on one of the Riemann surfaces, were discussed in detail in [32].
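The extremization just described can be carried out numerically. The sketch below is illustrative only: the function index_log is a hypothetical stand-in, not the actual large-N expression of [32] (which is not reproduced here); it merely demonstrates the constrained saddle-point evaluation of the entropy from an index depending on ∆_1 + ∆_2 = 2π.

```python
# Illustrative sketch only: "index_log" is a hypothetical placeholder, not the actual
# large-N twisted index of [32]; it demonstrates the constrained extremization step.
import numpy as np
from scipy.optimize import minimize_scalar

def index_log(delta1, s, t):
    """Stand-in for log Z(s, t, Delta); replace with the real expression."""
    delta2 = 2 * np.pi - delta1
    return (s[0] * delta2 + s[1] * delta1) * (t[0] * delta2 + t[1] * delta1) * np.sqrt(delta1 * delta2)

def entropy(s, t):
    # Extremize over Delta_1 on (0, 2*pi); the critical value plays the role of S_BH.
    res = minimize_scalar(lambda d: -index_log(d, s, t),
                          bounds=(1e-6, 2 * np.pi - 1e-6), method="bounded")
    return index_log(res.x, s, t), res.x

S, delta_crit = entropy(s=(0.7, 1.3), t=(1.1, 0.9))   # made-up fluxes for illustration
print(S, delta_crit)
```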
We also obtain AdS 2 ×M 4 horizon geometries where M 4 is a four-dimensional Kähler-Einstein manifold depending on a magnetic flux along M 4 . We find a simple and intriguing expression for the entropy suggesting that the computation in [32] could be generalized to this case too. We leave this for future work.
In gravity, the field theory chemical potential ∆ can be associated with the horizon value of the vector multiplet scalar field φ_3. With a convenient parameterization, we find that the functional I_SCFT(p_1, p_2, ∆) coincides with the area of the horizon divided by 4G_N, where G_N is the six-dimensional Newton's constant, as a function of φ_3. This is the attractor mechanism in six-dimensional gauged supergravity: after expressing all the fields in the gravity multiplet in terms of vector multiplet scalars using the BPS equations, the remaining BPS equations are equivalent to the extremization of the area of the horizon as a functional of vector multiplet scalars, and the critical value of this functional is the entropy. We see that the I-extremization principle is equivalent to the attractor mechanism in six-dimensional gauged supergravity, thus generalizing what was found for AdS_4 black holes in [7-9, 17, 18].
More explicitly, we find that a central role is played by the quantity defined in (3.4), where X^I(φ_3) (I = 1, 2), defined in (2.12), are the gravity counterparts of the ∆_I in (1.3). This six-dimensional quantity is reminiscent of, and can be thought of as the analogue of, the prepotential F_sugra(X^I) in four-dimensional N = 2 gauged supergravity. Indeed, we will find that the attractor equations for AdS_4 vacua, as well as those for black holes with M_4 a Kähler-Einstein manifold or M_4 = Σ_{g_1} × Σ_{g_2}, each correspond to extremizing an expression built out of this quantity, see (1.4)-(1.7). Here s_I and t_I are the magnetic charges - see (4.6), (5.7) and (6.6). This is similar to the attractor mechanism in four-dimensional N = 2 gauged supergravity [1,2]. We thus expect that, in more general F(4) gauged supergravities coupled to vector multiplets, the attractor equations for AdS solutions supported by magnetic fluxes are given by extremizing expressions of the form (1.4)-(1.7) with a suitable function I_{AdS_6}(X^I), homogeneous of degree three. The structure of this paper is as follows. In section 2 we discuss general aspects of the F(4) gauged supergravity coupled to vector multiplets. In section 3 we discuss the AdS_6 vacuum and an interesting partially off-shell version of its free energy that we relate to its field theory counterpart. In section 4 we consider the background AdS_4 × Σ_g with a topological twist on Σ_g and we successfully compare the free energy of the solution with the field theory computation in [33] and the ten-dimensional gravity computation in [43]. In section 5 we obtain a two-parameter family of black hole horizons AdS_2 × Σ_{g_1} × Σ_{g_2} and successfully reproduce their entropy using the topologically twisted index. In section 6 we find a one-parameter family of black hole horizons AdS_2 × M_4 where M_4 is a four-dimensional Kähler-Einstein manifold. Our conventions and some useful formulae are collected in appendix A.
Note added: While we were writing this work, we became aware of [46] which has some overlaps with the results presented here.
Matter coupled F(4) gauged supergravity
We consider a six-dimensional truncation of the supersymmetric warped AdS 6 × S 4 background of massive type IIA supergravity [38] described by an F(4) gauged supergravity coupled to vector multiplets. The minimal F(4) gauged supergravity was written in [40] and coupled to matter in [41,42]. F(4) is the relevant superalgebra for five-dimensional superconformal field theories and its bosonic subalgebra is SO(5, 2) × SU(2) R .
The bosonic part of the six-dimensional gravity multiplet consists of the metric g µν , four vectors A α , α = 0, 1, 2, 3, a two-form B µν and the dilaton σ. It is useful to split α = (0, r) where r = 1, 2, 3 is an index in the adjoint representation of SU(2) R . The fermionic components are a gravitino ψ A µ and a spin one-half fermion χ A , A = 1, 2, transforming in the fundamental representation of SU(2) R .
The vector multiplet in six dimensions contains a gauge field A_µ, four scalars φ^α and a spin one-half fermion λ^A. With n_V vector multiplets, the 4n_V scalar fields parameterize the coset space SO(4, n_V)/(SO(4) × SO(n_V)).
It is convenient to encode the scalar fields into a coset representative L Λ Σ ∈ SO(4, n V ), where indices are split as follows Λ = (α, I) with I = 1, . . . n V . A subgroup SU(2) R × G of dimension 3 + n V of SO(4, n V ) can be gauged.
The bosonic Lagrangian is given in (2.3) [42], with f_Λ^{ΠΓ} the structure constants of the gauge group SU(2)_R × G. The supersymmetry variations of the fermions are given in (2.4), where we suppressed the quadratic terms in fermions and σ^{rA}_B are the Pauli matrices. In all the above formulae the indices Λ, Π, Γ, ... are raised and lowered with the SO(4, n_V) invariant metric η_{ΛΣ} = diag{1, 1, 1, 1, −1, ..., −1} and the indices A, B, ... with the SU(2)_R tensor ε_{AB}. We refer to the appendix for conventions, for the explicit form of the potential V, and for the fermion mass matrices S_{AB}, N_{AB}, M^I_{AB} appearing in (2.4). The five-dimensional superconformal field theory dual to the warped background AdS_6 × S^4 has gauge group USp(2N), N_f hypermultiplets in the fundamental representation and one hypermultiplet in the antisymmetric representation. The theory has an SU(2)_M × SO(2N_f) × U(1)_I global symmetry [34]. The global SU(2)_M acts on the antisymmetric field, SO(2N_f) on the fundamentals, and U(1)_I is the conserved instanton current. We consider a supergravity containing just one vector multiplet, n_V = 1, corresponding to the U(1) subgroup of the global SU(2)_M. We will consistently set to zero all gauge fields except A^{r=3}_µ in SU(2)_R and A^{I=1}_µ, which are needed for the twisting and to provide magnetic charges for the black holes. We will also require the scalar fields in the vector multiplet to be neutral under A^{r=3}_µ, and this restricts the nonzero components to φ_0 and φ_3. For purely magnetic black holes we can find solutions with φ_0 = 0, and we further restrict to this case. A convenient parameterization of the scalar coset is given in [47-49]; with it, the kinetic terms for the vectors can be written in terms of the coset representative, and the quantities appearing in the fermionic variations take the form given in (2.8). The other fields that are turned on are the metric, the dilaton σ and the two-form B_{µν}. It is consistent to set H_{µνλ} = 0, but B_{µν} is not in general zero and its value can be found by solving its equations of motion [37]. We believe that after all these simplifications the theory is a consistent truncation of massive type IIA supergravity on the warped background AdS_6 × S^4. We give evidence for this in section 4, where we match the ten-dimensional result found in [43].
We finish the discussion of the matter coupled theory with an argument about the definition of the R-symmetry for all asymptotically AdS 6 solutions in the theory. Let us first recall that the detailed match between supergravity and field theory for asymptotically AdS 4 black holes was facilitated by the gravitational answer for the R-symmetry along the holographic renormalization group (RG) flow [7], telling us explicitly how the Rsymmetry mixing is parametrized by the values of the scalar fields. In four-dimensional gauged supergravity the R-symmetry was carefully derived via the Dirac bracket of the supercharges Q obtained from the Noether procedure [50,51]. Following rigorously all these steps in six dimensions is out of our scope here; however, we can still provide some solid arguments and derive the expected gravitational R-symmetry as an explicit scalar dependent combination of the two U(1)'s mixing along the flow, F r=3 µν and F I=1 µν = F Λ=4 µν . This proposal is strongly backed up by the agreement with the field theory results we provide in the following sections.
It is reasonable to expect that, in analogy with the four-dimensional arguments in [50,51], the anti-commutator between two supercharges for asymptotically AdS_6 solutions is given by a surface integral over the asymptotic boundary,⁶ where ε_A is the Killing spinor preserved by AdS_6 and the super-covariant derivative D includes all terms on the right-hand side of the gravitino variation in (2.4), i.e. δψ_{Aµ} = D_µ ε_A. The above anti-commutator is the explicit field-dependent realization of the abstract AdS_6 superalgebra F(4), generating a combination of the different asymptotic bosonic charges of the SO(5,2) × SU(2)_R generators. We are interested in the term in the gravitino variation in (2.4) proportional to T_{(AB)}, which precisely enters the definition of the conserved SU(2) R-charge. Since we further break the R-symmetry down to U(1), we only need to look at the part proportional to σ^3_{AB}, cf. (2.5). We are then led to the formula (2.10) for the conserved U(1) R-symmetry charge of a given solution. Note that we are only interested in the R-symmetry at a given radial slice of the spacetime (which, when the solution is interpreted as a holographic RG flow, becomes a measure of how the R-symmetry changes along the flow), not in the value of the asymptotic conserved charge. Therefore, we can extract a normalized version of the integrand that we hope to match with the R-symmetry mixing in field theory. Considering that L^{-1}_{33} = cosh(φ_3), L^{-1}_{34} = −sinh(φ_3), and that the democratic choice of U(1)'s corresponds to taking F_{1,µν} ≡ F_{3,µν} + F_{4,µν}, F_{2,µν} ≡ F_{3,µν} − F_{4,µν}, we finally define the R-symmetry gauge field as a scalar-dependent combination of F_{1,2}, where the mixing of the democratic U(1) symmetries F_{1,2} is given by the scalar-dependent quantities in (2.12).
⁶ We are evaluating the Dirac bracket of two conserved asymptotic supercharges. Therefore the resulting surface integral is defined on the asymptotic AdS boundary ∂V.
The AdS 6 vacuum
The F(4) supergravity discussed in the previous section has an AdS_6 vacuum if we set g = 3m [40][41][42]. Indeed, considering a background with an appropriate metric ansatz, a nontrivial scalar profile for σ(r) and φ_3(r), and all other fields set to zero, the BPS equations (2.4) reduce to a system of first-order flow equations,⁷ where a prime denotes the derivative with respect to the radial coordinate r.
With g = 3m, the AdS 6 background corresponds to e −2f = r 2 and σ = φ 3 = 0. We have further set m = 1/2 so that the AdS 6 radius is normalized to one. A more suggestive way of solving the above equations is by taking the Ansatz e 2f (r) = e 2f 0 /r 2 , and σ, φ 3 independent of r. We can write the BPS equations in an alternative form by using the parameterization (2.12). The BPS equations for the fields in the gravity multiplet in terms of X 1,2 can be solved as 3) The on-shell supergravity action is given by We then see that the BPS equation for φ 3 , which implies φ 3 = 0, is equivalent to extremizing I AdS 6 with respect to X I . The function I AdS 6 (X I ) has a natural field theory interpretation. The S 5 free energy of the USp(2N ) theory reads [52] This can be generalized to the case where a mass parameter is turned on for U(1)
6)
7 One can derive these equations by taking the ultraviolet limit of the more general flow equations (4.5), (5.6), or (6.5). 8 What we denote as chemical potentials here are actually mass parameters for the antisymmetric matter field in the S 5 free energy. Comparing to [53] we have ∆ 1 = π 1 + 2i 3 m as , ∆ 2 = π 1 − 2i 3 m as .
In (3.6), ∆_1 + ∆_2 = 2π and the extremal value is recovered for ∆_1 = ∆_2 = π. Upon using the standard AdS_6/CFT_5 dictionary [52] and identifying X_I ≡ ∆_I, we find that F_{S⁵}(∆_I) = I_{AdS_6}(X_I). (3.8) Interestingly, as shown in [32], the same quantity is also related to the Seiberg-Witten prepotential of the five-dimensional theory on R⁴ × S¹. As discussed in the introduction, the function I_{AdS_6}(X_I) in six dimensions plays a role similar to the prepotential of four-dimensional N = 2 gauged supergravity. In the AdS_4 black hole story, the supergravity prepotential is similarly related both to the twisted superpotential and to the S³ free energy of the dual field theory [7,10,17].⁹
The AdS_4 × Σ_g solution
The F(4) gauged supergravity also has an AdS_4 × Σ_g solution corresponding to the twisted compactification of the five-dimensional SCFT on a Riemann surface Σ_g of genus g. In the infrared the theory flows to a three-dimensional SCFT.
We consider the following Ansatz for the metric and for the gauge fields U(1) × U(1) ⊂ SU(2)_R × U(1), with ζ = ±1. There is a nontrivial profile for the scalars σ(r), φ_3(r) and all other fields are set to zero. Here, Σ_g is a Riemann surface with metric normalized as R_{µν} = κ g_{µν}, with κ = 1 for S², κ = 0 for T², and κ = −1 for g > 1.¹⁰ The U(1) ⊂ SU(2)_R gauge field is chosen in order to cancel the spin connection, while the magnetic flux p parameterizes a family of three-dimensional SCFTs.
⁹ For black holes in AdS_4 × S⁷ the prepotential is proportional to the function √(X_1 X_2 X_3 X_4) with ∑_{I=1}^{4} X_I = 2π, and for massive type IIA black holes to (X_1 X_2 X_3)^{2/3} with ∑_{I=1}^{3} X_I = 2π.
¹⁰ With this normalization vol(Σ_g) = 2π η_g, with η_g = 2|g − 1| for g ≠ 1 and η_g = 1 for g = 1.
If we choose spinors satisfying suitable projection conditions,
in which the frame indices 3, 4 refer to the Riemann surface, the U(1) ⊂ SU(2)_R gauge field cancels the spin connection along Σ_g. This is precisely the topological twist. Requiring in addition a radial projection condition, where r̂ is a frame index along the radial direction, the BPS equations (2.4) reduce to the flow equations (4.5).¹¹ We choose the parameterization of the scalar field φ_3 as in (2.12) and, in addition, we introduce a redundant but democratic parameterization for the flux with s_1 + s_2 = 2(1 − g). We look for AdS_4 × Σ_g vacua where e^{f(r)} = e^{f_0}/r and h(r), σ(r) and φ_3(r) are constant. Using the BPS equations (4.5), the fields in the gravity multiplet can be solved in terms of the X_I's, where we set g = 3m and m = 1/2. The on-shell supergravity action then defines a function I_{AdS_4}(X_I), and it turns out that the BPS equation for φ_3 is equivalent to the extremization of I_{AdS_4}(X_I) with respect to X_I. The previous expression can be rewritten more elegantly and, as expected, using (3.7) and identifying X_I ≡ ∆_I, we find that F_{S³×Σ_g}(∆_I) = I_{AdS_4}(X_I), (4.10)
¹¹ Here we correct a numerical factor in the gaugino variation in [47].
where F_{S³×Σ_g} is the S³ × Σ_g free energy of the same theory, as a function of R-charges, computed in [33].¹² Here, ∆_1 + ∆_2 = 2π. Moreover, as noticed in [32], this expression is also related to the effective twisted superpotential W of the theory compactified on Σ_g × S¹. The extremization of F_{S³×Σ_g}(∆_I) with respect to ∆_I determines the exact R-symmetry of the three-dimensional field theory that is obtained by twisted compactification on Σ_g. The critical value of F_{S³×Σ_g}(∆_I) is the free energy of the theory and coincides with the value derived directly in ten-dimensional massive type IIA supergravity in [43]. This is evidence that the gauged supergravity provides a consistent truncation of the ten-dimensional theory.
The AdS_2 × Σ_{g_1} × Σ_{g_2} solution
Now we search for black hole horizon solutions of the form AdS_2 × Σ_{g_1} × Σ_{g_2}. We consider the following Ansatz for the metric,
ds² = e^{2f(r)} dt² − dr² − e^{2h_1(r)} ds²_{Σ_{g_1}} − e^{2h_2(r)} ds²_{Σ_{g_2}} , (5.1)
and the gauge fields, with ζ = ±1 and the previous conventions for Riemann surfaces. The U(1) ⊂ SU(2)_R gauge field is chosen in order to cancel the spin connection, and p_1 and p_2 are magnetic charges, one for each Riemann surface. There is, as usual, a nontrivial profile for the scalars σ(r), φ_3(r). This time the two-form B_{µν} cannot be set to zero; assuming H_{µνλ} = 0, the equations of motion fix its profile.
¹² Comparing to [33] we have s_1 = (1−g)(1+n_M), s_2 = (1−g)(1−n_M), ∆_1 = π(1+ν_{AS}), ∆_2 = π(1−ν_{AS}).
With spinor projections in which the frame indices 1, 2 refer to the first Riemann surface and 3, 4 to the second, the U(1) ⊂ SU(2)_R gauge field cancels the spin connection, and the BPS equations (2.4) reduce to the flow equations (5.6). We choose the parameterization (2.12) for the scalar field φ_3 and a democratic parameterization for the fluxes with s_1 + s_2 = 2(1−g_1) and t_1 + t_2 = 2(1−g_2). To have a black hole horizon AdS_2 × Σ_{g_1} × Σ_{g_2} we set e^{f(r)} = e^{f_0}/r and h_1(r), h_2(r), σ(r) and φ_3(r) constant. Using the BPS equations (5.6), the fields in the gravity multiplet can be solved in terms of X_I; in particular, e^σ is determined by the combination (X_1 X_2)^{1/8} [(s_1 X_2 + s_2 X_1)(t_1 X_2 + t_2 X_1) + 2 X_1 X_2 (s_2 t_1 + s_1 t_2)] / [3π (s_1 X_2 + s_2 X_1)(t_1 X_2 + t_2 X_1)], where we set g = 3m and m = 1/2. The Bekenstein-Hawking entropy can then be written as a function of X_I, defining I_{AdS_2}(X_I). It is quite remarkable that the BPS equation for φ_3 is equivalent to the extremization of I_{AdS_2}(X_I) with respect to X_I. This is the attractor mechanism in six-dimensional gauged supergravity: once the fields in the gravity multiplet are expressed in terms of the scalars in the vector multiplet, the entropy is obtained by extremizing the functional I_{AdS_2}(X_I).
We can now compare the entropy of the six-dimensional black holes with the prediction of the topologically twisted index computed in [32]. The index at large N, denoted I_SCFT(∆_I), was obtained in [32] with an overall coefficient 4√2/15. The index depends on a chemical potential ∆ for the U(1) subgroup of the SU(2) global symmetry. As in [32], we find it convenient to use a pair of redundant but democratic parameters with ∆_1 + ∆_2 = 2π. In the spirit of the microscopic counting for magnetically charged AdS black holes in four dimensions, we expect that the entropy is obtained by extremizing I_SCFT(∆_I) with respect to ∆_I. This was called the I-extremization principle in [7,8]. Using (3.7) and identifying X_I ≡ ∆_I, we find that I_SCFT(∆_I) = I_{AdS_2}(X_I), and we see that the field theory I-extremization precisely corresponds to the attractor mechanism in supergravity.
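The following display is only a schematic transcription of this matching, assuming nothing beyond the identifications and the constraint stated above (X_I ≡ ∆_I with ∆_1 + ∆_2 = 2π); the explicit large-N expression for I_SCFT from [32] is not reproduced here.

```latex
% Schematic I-extremization / attractor matching (sketch, not the explicit index of [32])
S_{\mathrm{BH}}
  = \mathcal{I}_{\mathrm{SCFT}}(\bar\Delta_I)
  = I_{\mathrm{AdS}_2}(\bar X_I),
\qquad
\left.\frac{\partial \mathcal{I}_{\mathrm{SCFT}}}{\partial \Delta_1}\right|_{\bar\Delta}
  =
\left.\frac{\partial \mathcal{I}_{\mathrm{SCFT}}}{\partial \Delta_2}\right|_{\bar\Delta},
\qquad
\bar\Delta_1+\bar\Delta_2 = 2\pi .
```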
The AdS 2 × M 4 solution
It is easy to find more general black hole horizons with abelian twists. We consider the following metric, where M_4 is a Kähler-Einstein manifold with metric normalized as R_{µν} = κ g_{µν} (κ = ±1, 0), and gauge fields F^{r=3} = ζ g κ (e¹² + e³⁴) e^{−2h(r)}, F^{I=1} = ζ g p (e¹² + e³⁴) e^{−2h(r)}, where e^i, i = 1, 2, 3, 4, are vierbeins in the directions corresponding to the manifold M_4. The reduced holonomy group on the manifold, U(2), splits into a U(1) that we choose to correspond to the self-dual part of the spin connection, ω⁺, and an SU(2) for the anti-self-dual part, ω⁻. As in the previous section, there is a nontrivial profile for the scalars σ(r), φ_3(r) and the two-form B_{µν}. With the spinor projections imposed, the BPS equations reduce to (6.5). We choose the parameterization (2.12) for the scalar field φ_3 and a democratic parameterization for the fluxes, s_1 ≡ κ + p, with s_1 + s_2 = 2κ. To have an AdS_2 × M_4 horizon topology we set e^{f(r)} = e^{f_0}/r and h(r), σ(r) and φ_3(r) constant. Using the BPS equations (6.5), the fields in the gravity multiplet can be solved in terms of X_I; in particular, e^σ is determined by the combination (X_1 X_2)^{1/8} [(s_1 X_2 + s_2 X_1)² + 4 X_1 X_2 s_1 s_2] / [3π (s_1 X_2 + s_2 X_1)²] | 6,395.2 | 2018-09-27T00:00:00.000 | [
"Physics"
] |
Identification of the Exclusivity of Individual’s Typing Style Using Soft Biometric Elements
Mohd Noorulfakhri Yaacob, Syed Zulkarnain Syed Idrus, Wan Azani Wan Mustafa, Mohd Aminudin Jamlos and Mohd Helmy Abd Wahab, “Identification of the Exclusivity of Individual’s Typing Style Using Soft Biometric Elements”, Annals of Emerging Technologies in Computing (AETiC), Print ISSN: 2516-0281, Online ISSN: 2516-029X, pp. 10-26, Vol. 4, No. 5, 1st April 2021, Published by International Association of Educators and Researchers (IAER), DOI: 10.33166/AETiC.2021.05.002, Available: http://aetic.theiaer.org/archive/v5/v5n5/p2.html. Review Article
Introduction
Keystroke dynamics (KD) is a method of identifying users based on how the operator uses a keyboard to type [1,2]. This method does not require a high hardware investment; it only involves changes to the system or application. A system which uses this method must record the time interval between the two letters that the user types. KD can be categorized as behavioral biometrics. Generally, recognition using KD or other behavioral biometrics is less popular than other biometric methods such as fingerprint, iris and DNA recognition. Various recognition techniques have been used in KD studies to achieve higher accuracy. The combination of KD recognition with the soft biometric features available to a person has been explored in previous studies. The earliest study of this combination was done by Idrus, Cherrier [3] in 2012, studying the use of hands when typing with one or both hands.
Soft biometrics is a technique that can be used for user recognition. Each individual has unique traits that distinguish him or her from others [4]. The soft biometric information of an individual is not sufficient to distinguish that person accurately, but it does enhance the ability to identify a person when combined with other biometrics [5].
In the 19th century, the soft biometric recognition method was first used by Galton [6]. This research used three main soft biometric features: anthropometric measurements (arm length), scar and mole effects, and body shape. In 2001, soft biometric recognition techniques using the criteria of gender, race, eye colour and height were developed by Heckathorn, Broadhead [7].
Soft Biometrics Application for Keystroke Dynamics
The incorporation of soft biometric criteria in KD has been performed since 2011. Various soft biometric criteria have been used in these studies. The study of the integration of soft biometrics and KD was started in 2011 by Epp, Lippold [21]. Their research combined emotional elements, as soft biometric features, with KD. The study using emotion elements (confidence, hesitance, nervousness, relaxation, sadness, fatigue, anger, joy and happiness) achieved an accuracy of between 77.4% and 87.8%. In the same year, Fairhurst and Da Costa-Abreu [22] conducted a study on the classification of gender, male or female, by typing. The 10-fold cross-validation method was used to analyze the obtained data, and the resulting accuracy was 95%.
Later, a study similar to Fairhurst's was carried out by Giot and Rosenberger [23] in 2012, which identified gender based on typing. The accuracies obtained in this study ranged from 87.32% to 91.63%. Subsequently, in 2013, emotional stress was used as a soft biometric feature in a joint study with KD [24]. The results of this study showed whether or not the user was in depression.
Similarly, a study conducted by Nahin, Alam [25] has shown that a user's emotions can be identified based on one's typing style. Seven categories of emotions were studied, namely anger, disgust, guilt, fear, joy, sadness and shame. The accuracy obtained was above 80% when classifying users according to the emotions studied.
Bakhtiyari, Taghavi [26] explored the use of emotional elements as a factor in how a user uses a keyboard, touch screen and mouse. This research was compared with other methods commonly used to identify emotion, such as electroencephalography (EEG) machines, facial expression, voice and body language. The highest accuracy obtained was 93.20%.
The combination of soft biometrics and keystroke dynamics attracted Idrus, Cherrier [27], [28] and Idrus [29] to conduct a study identifying users based on gender, age, the number of hands used when typing, and handedness. The best EER obtained from their study was 5.41% using the majority voting technique. Subsequently, in 2015, the study was continued by Idrus using the penalty combination and reward combination techniques to reduce the EER obtained in 2014 [30]. The EER obtained for the reward combination was better than that of the penalty combination, which was 23.11%.
In 2016, a study on gender identification by typing was done by Antal and Nemes [31], but their research focused on identifying gender from typing on a touch screen. The results showed a detection accuracy of 64.76% on the keystroke dataset and 57.16% on the touch screen. Also in 2016, Idrus, Cherrier [32] conducted a study to classify typing into several soft biometric categories, namely gender, age range and handedness. The results showed an accuracy of 63% to 96%.
The latest KD studies related to soft biometrics were conducted by Kołakowska [33]. They continued the study conducted by Nahin, Alam [25], which identified users' emotions during typing. Five emotions were studied in their research: happiness, boredom, fear, anger and sadness. Their study concluded that a user's emotions influence the way that person types at the time, although the person's emotional control or personal strength also influences the typing pattern. Katerina and Nicolaos [34] did the next KD-related study by examining KD together with mouse and hand movements while using a computer. Table 1 illustrates the results and soft biometric elements used in previous studies.
Based on the previous studies, it is apparent that the way a keyboard is used can be distinguished using soft biometric elements. Various methods utilizing soft biometrics can be adapted in industry by constantly checking the typing of the registered system user for changes, in order to verify whether the user is authentic or not [35].
The research done in this paper incorporates four soft biometric elements in the use of KD. The soft biometric elements used in this study were culture in Malaysia, gender, region of birth in Malaysia and educational level.
Identification Approach
Research on authentication systems using keystroke dynamics requires specific hardware and software to record information about each user's typing pattern. The computers used to record the typing-style information were equipped with special software, and applications other than Windows were terminated so that background activity would not affect the measurements. Each user was instructed to type several words using the software on the provided computer. The time interval for each character typed by the user was recorded in a database in the application. Each interval between letters obtained during typing was stored in the database and later analysed. Each category of soft biometrics surveyed involved two phases, namely a training phase and a testing phase. The Support Vector Machine (SVM) was selected as the technique to analyse and classify the raw data obtained. The user authentication accuracy rate was measured and calculated for each soft biometric category involved through SVM methods. The software used to execute the SVM was MATLAB. Figure 1 shows an overview of the methodology used to perform this classification.
Individual Profiles Based on The Way of Typing
Classification of typing style was performed based on four categories of soft biometrics: culture (Malays, Chinese and Indians), gender, educational level (CGPA - Cumulative Grade Point Average) and region of birth. This classification aims to distinguish how users in each category use the keyboard. There are two approaches used in KD studies, based on either free text or fixed text. Free-text analysis is based on the time lapse between two consecutive letters, better known as digraphs, whereas in fixed-text analysis the entire typing time recorded for a word is compared against the previously recorded times of the respective user. For example, the user is directed to type a given text or password multiple times during the enrolment process, and the time interval of typing for each letter in the corresponding sentence is recorded. The system then makes a comparison whenever the user types the same sentence a second time and afterwards. This study focuses on the fixed-text approach.
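As an illustration of the fixed-text timing features just described, the sketch below shows one way to turn raw key-press timestamps into digraph-interval vectors. The data layout and function names are hypothetical and are not taken from the software used in this study.

```python
# Hypothetical sketch: build fixed-text timing feature vectors from keystroke timestamps.
# Each enrolment sample is assumed to be a list of (character, press_time_seconds) tuples
# recorded while the volunteer types the same fixed sentence.

def digraph_intervals(sample):
    """Return the time intervals between consecutive key presses (digraphs)."""
    times = [t for _, t in sample]
    return [t2 - t1 for t1, t2 in zip(times, times[1:])]

def fixed_text_features(samples):
    """Stack digraph intervals of repeated typings of the same sentence
    into one feature matrix (one row per typing attempt)."""
    return [digraph_intervals(s) for s in samples]

# Example: two (shortened) attempts at typing the same word.
attempt_1 = [("t", 0.00), ("h", 0.12), ("e", 0.21)]
attempt_2 = [("t", 0.00), ("h", 0.15), ("e", 0.26)]
print(fixed_text_features([attempt_1, attempt_2]))
# [[0.12, 0.09], [0.15, 0.11]]
```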
Data Analysis
The analysis of keystroke data done in this study is based on four soft biometric criteria, namely culture, region of birth, gender and educational level. The cultures studied are the three main ethnic groups in Malaysia, namely Malays, Chinese and Indians [36], whereas the region of birth (ROB) category is divided into 4 parts: north of the peninsula, east of the peninsula, centre of the peninsula and south of Peninsular Malaysia. For measurements based on educational level, the CGPA result was split into two groups: 3.0 and above, and below 3.0. In addition, this study also incorporated other minorities in Malaysia (Bajau, Murut, Siam, Suluk, Iban, Kadazan, Bisaya, Kedayan, Iranum, Tidong, etc.) and classified them into one group labelled "Others". Hence, for the soft biometric culture category, the total number of classes is four, namely the three main races in Malaysia and one other category. The Support Vector Machine (SVM) was used during the classification process.
The kernel used in this study is the Radial Basis Function (RBF) [37]. The RBF kernel was selected due to its suitability for analyzing the non-linear data recorded in keystroke dynamics [38]. This kernel is also able to map the data into a high-dimensional space. The analyzed data were separated into two parts, namely training data and test data. For example, if 1% of the total data analyzed is used as training data, then 99% is used as test data. This is the training process within the SVM. The process was repeated 100 times for each training ratio, from a 1% training ratio up to a 90% training ratio, and the average was recorded. The two classes in each comparison were labeled 1 and -1. For example, in the culture category, when comparing the typing of Malays and Chinese, the Malay data were labeled -1 and the Chinese data were labeled 1. This process was applied to each pair of classes analyzed. The items analyzed are shown in Table 2 below. This section also clarifies the breakdown of the statistical data collected based on the four soft biometric criteria studied. The total number of volunteers involved was 250 people. Everyone was required to correctly type 5 given sentences 10 times each. Therefore, the total number of keystroke records obtained is 12,500. The statistical breakdown of the data is described in Figure 2 to Figure 5 below.
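The classification procedure described above can be approximated with off-the-shelf tools as in the following sketch. The study itself was carried out in MATLAB; the scikit-learn calls, the averaging and the placeholder arrays X and y below are assumptions made only for illustration.

```python
# Sketch of the SVM/RBF analysis described in the text (the original study used MATLAB).
# X: feature matrix of digraph intervals, y: labels (+1 for one class, -1 for the other).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def accuracy_sweep(X, y, ratios=np.arange(0.1, 1.0, 0.1), repeats=100):
    """For each training ratio, train an RBF-kernel SVM `repeats` times on random
    splits and return the mean test accuracy (the study swept ratios from 1% to 90%)."""
    results = {}
    for ratio in ratios:
        scores = []
        for seed in range(repeats):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, train_size=float(ratio), stratify=y, random_state=seed)
            clf = SVC(kernel="rbf").fit(X_tr, y_tr)
            scores.append(clf.score(X_te, y_te))
        results[round(float(ratio), 2)] = float(np.mean(scores))
    return results
```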
Experimental Results
This section describes the results of the keystroke data analysis.
Result Based on Culture
This study divides culture into the three main ethnic groups in Malaysia, namely Malays, Chinese and Indians; the remaining minority groups in Malaysia form one further category. Based on the entire data obtained (Figures 2-5), the statistical breakdown of the data is: Malays - 148; Chinese - 53; Indians - 38; Others - 11. According to the statistics of the collected data, keystroke data from Indians was the lowest among the three main races. Malays made up the largest group of volunteers, with 148 people contributing keystroke data. For analysis purposes, the number of participants of each culture was made equal for each comparison. For example, only 53 randomly selected Malay records and all 53 Chinese records were used to compare these two classes. With regard to the Indian category, only 38 randomly selected records from the Malay or Chinese categories were chosen for comparison with the Indians. This is to balance the amount of data to be analyzed using SVM because, according to a study by Idrus, Cherrier [27], the number of records in the two compared classes should be equivalent to enable the best analysis results from the SVM.
As explained in the previous chapter, all volunteers were required to correctly type the 5 given sentences 10 times each. The first 3 of the 10 typing attempts were not analyzed, in order to give the volunteers the opportunity to familiarize themselves with the words and the sequence of letters in each sentence. All five sentences provided to users are listed in Table 3. The total number of records analyzed for each class and sentence is given below (Table 4). Figure 6 shows the recognition rate accuracy, averaged over 100 iterations, against the learning ratio between Chinese and Indians using the 5 sentences. A total of 38 typing samples from Chinese volunteers and 38 samples from Indian volunteers were analyzed. The results obtained were quite good: at the 50% learning ratio, the accuracy reached 75% for "instagram facebook twitter" and "the sound of music", while for the other 3 sentences the accuracy ranged from 78.5% to 83%. Table 5 shows a summary of the accuracy obtained from 50% to 90% of the learning ratio. Figure 7 shows the averaged recognition rate accuracy against the learning ratio between Malays and Chinese. The results obtained for Malays and Chinese are better than those for Chinese and Indians: at the 50% learning ratio, the accuracy obtained was between 83.5% and 88.5%. Table 6 shows a summary of the accuracy obtained at the 50% learning ratio. Figure 8 shows the averaged recognition rate accuracy against the learning ratio between Malays and Indians. The results obtained between the two cultures are 82% up to 88% accuracy for learning ratios of 50% and above. Table 7 shows a summary of the accuracy obtained at the 50% learning ratio. Figure 9 shows the averaged recognition rate accuracy against the learning ratio between Others and Indians. The results obtained between the two cultures were 72.9% up to 90.4% accuracy for learning ratios of 50% and above. Table 8 shows a summary of the accuracy obtained at the 50% learning ratio. Figure 10 shows the averaged recognition rate accuracy against the learning ratio between Others and Chinese. The results obtained between the two cultures were 65.3% up to 81% accuracy for learning ratios of 50% and above. Table 9 shows a summary of the accuracy obtained at the 50% learning ratio. Figure 11 shows the averaged recognition rate accuracy against the learning ratio between Others and Malays. The results obtained between the two cultures were 67.8% up to 81% accuracy for learning ratios of 50% and above. Table 10 shows a summary of the accuracy obtained at the 50% learning ratio.
Summary of Analysis Based on Culture
The results of the six-class keystroke data analysis based on culture are summarized in Table 11 below. Overall, it can be concluded that the Malays vs. Chinese comparison had the highest average accuracy rate of 86.02% at the 50% learning ratio. The accuracy increased for every additional 10% of the learning ratio. This shows that the typing of different user groups can be distinguished by category: the more learning data supplied to the system for identification, the higher the accuracy obtained.
Result Based on Education Level Using CGPA
This study uses the CGPA as a benchmark to differentiate a person's educational achievement. The CGPA was divided into two sections: 3.0 and above, and below 3.0. Based on the statistics in Figure 4, the data obtained during the collection process showed that 60 volunteers received a CGPA of less than 3.0. Therefore, 60 volunteers with a CGPA of 3.0 and above were randomly selected to balance the data between the two classes during the analysis. Figure 12 shows the recognition rate accuracy, averaged over 100 iterations, against the learning ratio between CGPA 3.0 and above and below 3.0 using the 5 sentences. The results in Table 12 show that recognition performance ranged from 78% to 87% at the 50% learning ratio. Based on the results obtained from the classification using educational level, measured by CGPA, significant differences were found. This may be because the volunteers involved were university students.
Result Based on Gender
Gender is the final soft biometric feature used for user classification with KD. The results in Figure 13 show that the accuracy of gender classification based on typing was between 62% and 80.9% at the 50% learning ratio. The results show that male and female users cannot be reliably distinguished based on typing style, because the results are inconsistent. This may be due to the similar preferences of these two groups; for example, a man may possess characteristics typical of a woman and vice versa.
Result Based on Region of Birth
Malaysia consists of 13 states and 3 federal territories. These states can be grouped into the North, East, Central and South of Peninsular Malaysia, plus Sabah and Sarawak [39,40]. This research focused on the states in Peninsular Malaysia only, and each state was grouped according to the breakdown in Table 13. Based on the overall data obtained from Figure 3 above, the statistical breakdown of data by region is: Northern (NN) - 136; Eastern (EN) - 43; Southern (SN) - 33; and Central (CL) - 33. Five records are classified as 'Others (OS)' because they do not fall within the regions of birth studied; these are volunteers from Saudi Arabia, Thailand and Indonesia (three people). The Others category was not analyzed because the data set obtained is too small. From the statistics, the keystroke data for the Central and Southern regions of Peninsular Malaysia are the lowest among the regions, while the Northern region had the largest number of volunteers, i.e. 136. For analysis purposes, the number of participants in each region was made equal. For example, only 33 records randomly selected from the Northern and Eastern regions of Peninsular Malaysia were used for comparison against the Central region. This is to balance the amount of data to be analyzed using SVM; the same is done for the other regions of birth by taking the lower number of records of the two classes. The total number of records analyzed for each class and sentence is shown in Table 14. Figure 14 shows the recognition rate accuracy, averaged over 100 iterations, against the learning ratio between the Northern and Southern regions using the 5 sentences. The results obtained were poor: at the 50% learning ratio, the accuracy only reached between 69% and 74% for 4 sentences, while only 1 sentence, "langkawi island", reached 75%. This means the typing patterns of volunteers in the Northern and Southern regions cannot be clearly distinguished. Table 15 shows a summary of the accuracy obtained at the 50% learning ratio. Figure 15 shows the averaged recognition rate accuracy against the learning ratio between the Central and Eastern regions using the 5 sentences. The results obtained were quite good: at the 50% learning ratio, the accuracy of three sentences exceeded 80%, and only two sentences, "langkawi island" and "tunku abdul rahman", reached between 73% and 78%. Table 16 shows a summary of the accuracy obtained at the 50% learning ratio. Figure 16 shows the averaged recognition rate accuracy against the learning ratio between the Eastern and Southern regions using the 5 sentences. The results obtained between the two classes were 75.02% up to 87.44% accuracy for learning ratios of 50% and above. Table 17 shows a summary of the accuracy obtained at the 50% learning ratio. Figure 17 shows the averaged recognition rate accuracy against the learning ratio between the Northern and Central regions using the 5 sentences. The results obtained between the two classes were 85% up to 93% accuracy for learning ratios of 50% and above. Table 18 shows a summary of the accuracy obtained for 50% to 90% of the learning ratio.
Figure 18 shows the recognition rate accuracy, averaged over 100 iterations, against the learning ratio between the Central and Southern regions using the 5 sentences. The results obtained between the two classes were 70.65% up to 83.21% accuracy for learning ratios of 50% and above. Table 19 shows a summary of the accuracy obtained for 50% to 90% of the learning ratio. Figure 19 shows the averaged recognition rate accuracy against the learning ratio between the Northern and Eastern regions using the 5 sentences. Table 20 shows that the results obtained between the two classes can be categorized as good, because three of the five sentences tested had more than 80% accuracy, while the remaining two sentences had 77.07% and 79.71% accuracy.
Summary Analysis Based on Region of Birth
The results of the six-class keystroke data analysis based on region of birth are summarized in Table 21. Overall, typing classification using the region of birth gave a rather impressive result: at the 50% learning ratio, the lowest accuracy rate was 73.79%, while the highest accuracy was 91.17%. The most distinguishable typing patterns were for the North versus Central and North versus East categories, because at 50% SVM learning the accuracy obtained was over 80%.
Summary
It can be concluded from the results obtained that soft biometric elements can be applied in keystroke dynamics studies for several categories. This classification proves that the combination of soft biometrics and KD can be used as an additional security feature in system authentication: the system can compare the typing profile detected at the keyboard with the profile registered in the system. The results are expected to help other researchers study further aspects of soft biometrics that can be used to classify users by typing. The results show that the typing patterns best identified via soft biometrics are culture and region of birth. The best classification for culture was obtained for the Malay vs. Chinese category, with an average accuracy of 88.52% at the 90% learning ratio. The best classification for region of birth was obtained for the Northern vs. Central category, with an average accuracy of 91.17% at the 90% learning ratio. This shows that there are clear differences in typing patterns between cultures and regions of birth. This may be because the written and spoken languages used by each culture are quite different.
The results of this study can be used in everyday environments, especially in the field of computer forensics. With the existence of a database recording the typing patterns of each group of people, KD can help the authorities identify groups of cyber-criminals who commit offences. In addition, KD can be used to control access to systems that use a username and password, where it can serve as a second security filter after the username and password. To further enhance studies in the field of KD and soft biometrics, future researchers can use other soft biometric elements and different identification techniques such as fuzzy logic and neural networks. | 5,556.6 | 2021-03-20T00:00:00.000 | [
"Computer Science",
"Economics",
"Education"
] |
A Non-Invasive Multichannel Hybrid Fiber-Optic Sensor System for Vital Sign Monitoring
In this article, we briefly describe the design, construction, and functional verification of a hybrid multichannel fiber-optic sensor system for basic vital sign monitoring. This sensor uses a novel non-invasive measurement probe based on the fiber Bragg grating (FBG). The probe is composed of two FBGs encapsulated inside a polydimethylsiloxane polymer (PDMS). The PDMS is non-reactive to human skin and resistant to electromagnetic waves, UV absorption, and radiation. We emphasize the construction of the probe to be specifically used for basic vital sign monitoring such as body temperature, respiratory rate and heart rate. The proposed sensor system can continuously process incoming signals from up to 128 individuals. We first present the overall design of this novel multichannel sensor and then elaborate on how it has the potential to simplify vital sign monitoring and consequently improve the comfort level of patients in long-term health care facilities, hospitals and clinics. The reference ECG signal was acquired with the use of standard gel electrodes fixed to the monitored person’s chest using a real-time monitoring system for ECG signals with virtual instrumentation. The outcomes of these experiments have unambiguously proved the functionality of the sensor system and will be used to inform our future research in this fast developing and emerging field.
Introduction
Current trends in the development of vital sign monitoring technologies demonstrate that the future lies in the introduction of sophisticated diagnostic devices that integrate several diagnostic measures in one all-purpose instrument or probe.
Here, we introduce the concept of a novel sensor system, which offers the potential of continuously monitoring the vital signs of up to 128 individuals by using a fiber-optic probe. This patent-pending probe, which functions on the basis of the fiber Bragg grating, was designed, constructed, and validated by the authors in the Czech Republic. The advantage of the proposed multichannel fiber-optic sensor system is the possibility of monitoring patients in harsh environments such as magnetic resonance (MR) imaging, X-ray, and UV radiation.
The authors have developed a measurement probe that allows for monitoring the mechanical vibrations of the human body evoked by vital activities: breathing and the heartbeat [1]. The body temperature as well as the respiratory and heart rates were obtained by a spectral evaluation of the measured signals. The mechanical strain is transferred to the fiber Bragg grating (FBG) by a thoracic elastic strap. This monitoring method is fully isolated, and it ensures absolute electrical safety for the patient.
Recent years have witnessed a growing utilization of Optical Sensors (OS) for a variety of emerging biomedical applications [2,3]. A number of articles presented results only for the measurement of respiration rate or heart rate. For example, the article [7] reports results obtained from monitoring the respiration and cardiac activity of a patient during a magnetic resonance imaging (MRI) examination using an optical strain sensor based on an FBG. Several research groups reported vital sign monitoring based on the fiber interferometry method [14,[26][27][28][29]. FBG sensors were used to monitor cardiac activity and respiration [4,6,8,10,13,17,19,28,30].
Textiles can be used to encapsulate sensors and measure heartbeat and respiration rate simultaneously. Several techniques, such as warp and weft knitting, weaving and stitching, can be used to embed the fiber sensing elements into textile fabrics [8,9,16,21,31,32]. For example, the textile-based respiratory rate sensor demonstrated in [21] is very simple, cost-effective and comfortable to wear. This sensor can be placed on the thoracic or abdominal areas, but the heartbeat signal cannot be measured due to the lower sensitivity of fiber macro-bend effects. However, the authors in [9] report a textile fiber-optic microbend sensor for monitoring heartbeat and respiration simultaneously.
Our novel measurement probe (described below) allows the measurement of body temperature in addition to offering the advantages of the sensors described in the above articles. Based on our comprehensive literature and patent search, we can safely claim that our presented solution is innovative.
In general, there are three types of Optical Sensors: (1) non-invasive sensors, which come into direct contact with the human skin; (2) minimally-invasive sensors, which are used for measurements carried out in body cavities; and (3) invasive sensors, which are used for measurements made inside organs or in the bloodstream (intravascular). Desirable characteristic features of Optical Sensors include their independence from an active power supply and a high immunity to electromagnetic interference. Thanks to these attributes, Optical Sensors can be used with other electronic equipment without generating electric noise that may compromise the quality of vital sign monitoring and potentially lead to patient safety concerns. Optical Sensors are gaining more popularity due to their flexibility, improved functionality, and reliability. The very small dimensions of optical fibers allow them to be encapsulated inside very thin catheters and injection needles, thereby enabling localized and minimally-invasive monitoring. Biocompatibility is a very important consideration in the acquisition and evaluation of high-quality data from sensors coming into contact with living tissue. As Optical Sensors are biocompatible and do not influence the patient's body in any major way, they offer a great level of patient comfort during vital sign monitoring.
The major aim of this article is to introduce our novel vital sign measurement probe (which belongs to the group of non-invasive optical sensors) and its associated sensor system. In this limited space, we do not intend to compare our system with other existing sensors. The comprehensive characterization of our novel sensor system and its comparison with other existing sensors will be the focus of our future articles with detailed signal processing.
Fiber Bragg Grating
The FBG is characterized by a periodic change of the refractive index in the fiber's core (Figure 1). When we project a beam of light from a Super-luminescent Light Emitting Diode (SLED) into an optical fiber, a narrow spectral part of the beam is reflected while other wavelengths are transmitted without any attenuation by this structure. The reflected wavelength is called the Bragg wavelength (λ_B) and is given by Equation (1), where n_eff is the effective refractive index of the fiber's core within the Bragg grating and Λ is the period of its changes. External effects like deformation or temperature influence the Bragg wavelength and the period of changes of the refractive index in a linear fashion, which translates into a linear shift in the Bragg wavelength. To describe the Bragg grating sensitivity in a quartz optical fiber, we use the normalized deformation coefficient (at constant temperature), Equation (2), and the normalized temperature coefficient (at constant strain), Equation (3), where λ_B is the Bragg wavelength, ∆λ_B is the shift of the Bragg wavelength, ∆ε is the change of deformation and ∆T represents a change in temperature. An FBG with the Bragg wavelength at 1500 nm shows a deformation sensitivity of 1.2 pm/µstrain and a temperature sensitivity of 10.3 pm/°C [33]. As FBGs are single-point sensors, with multiplexing techniques we can easily connect them together and obtain a multi-point measurement probe. The most common methods are wavelength-division multiplexing (WDM) and time-division multiplexing (TDM). The analysis of the capacity of these multiplexing methods is described in [34]. The most widely used multiplexing method in sensor applications is wavelength-division multiplexing, which is based on the spectral division of the individual gratings. In the WDM method, light from an LED passes through a circulator to the FBG array, and the reflected light is detected in the OSA (Optical Spectrum Analyser), where each FBG is tuned to a different Bragg wavelength (Figure 2). In the OSA, each peak represents one FBG, and the respective wavelength shifts are related to the applied deformation or temperature change. The measured value is expressed by a shift of the Bragg wavelength of the FBG that is being affected. To apply the wavelength-division multiplexing method, it is necessary to avoid the overlapping of neighboring spectra. For this reason, each FBG probe is assigned a specific measurement window, whose size is given by a sensitivity coefficient as well as the expected maximal influence of the measured value.
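For reference, the Bragg condition and the two normalized coefficients referred to above as Equations (1)-(3) presumably take the standard textbook form below, written with the symbols defined in this section.

```latex
\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda, \qquad
k_{\varepsilon} = \frac{1}{\lambda_B}\,\frac{\Delta\lambda_B}{\Delta\varepsilon}\;\;(\Delta T = 0), \qquad
k_{T} = \frac{1}{\lambda_B}\,\frac{\Delta\lambda_B}{\Delta T}\;\;(\Delta\varepsilon = 0).
```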
Novel Design of Measurement Probe
Our novel measurement probe is based on two FBGs encapsulated inside a polydimethylsiloxane (PDMS) polymer (Figure 3). As FBGs are sensitive to strain and temperature changes, they are suitable for many biomedical measurements. For example, they could be used in thermodilution-based cardiac output monitoring instruments or temperature measurement systems. We chose the PDMS polymer to increase the temperature sensitivity of our probe and ensure its non-reactivity to the human skin. Figure 4 shows, as an example, temperature sensitivity measurements of a bare FBG at a Bragg wavelength of 1554.1203 nm and of its version encapsulated inside the PDMS. It is evident that the temperature sensitivity significantly increases (almost four times), from 10.378 pm/°C to 39.44 pm/°C, due to the encapsulation in the PDMS polymer. The results presented in [35] indicate that this type of encapsulation does not affect the structure of the FBG.
The encapsulation of the measurement probe was realized with PDMS of the Sylgard 184 type. Sylgard 184 is a two-component casting compound: the A component is the pre-polymer itself and the B component is a curing agent. The two components are mixed together, according to the datasheet, in a weight ratio of 10:1 (A:B). Bubbles and microbubbles that result from combining the pre-polymer and the curing agent can be removed using an ultrasonic bath, and homogeneity of the mixture is achieved using a laboratory shaker. The measurement probe contains two connectors of the EURO 2000 type. The first connector is plugged into the measurement system or a preceding probe, and the second one is plugged into a succeeding probe.
The Multichannel Concept
The multichannel hybrid fiber-optic sensor system (Figure 5) is based upon a wide-spectrum SLED as the optical source, with a bandwidth from 1512.5 nm to 1587.5 nm and an output power of 1 mW. The emitted light is projected by means of a circulator into a four-channel optical switch with a switching time of about 0.5 ms. This switch routes the incoming light (from the SLED) into four measurement channels. This arrangement increases the sensor capacity up to four times. One measurement channel can accommodate up to 32 vital sign measurement probes, where each probe is assigned a measurement window with a spectral width of around 1.9 nm (Table 1).
Reflected signals from each measurement channel are projected through the circulator into an optical spectrum analyzer. The signal is then processed in the Digital Signal Processing (DSP) Unit. The Electronic Control Unit controls the parameters of each individual optical unit, such as the power of the light source, the switching speed of the optical switch and the sensitivity of the optical spectrum analyzer. The experimental solution proposed in this article uses fiber Bragg gratings encapsulated in PDMS polymer to monitor the patient's vital signs. The individual sensors are interrogated through one optical fiber, and the wavelength-division multiplexing method is used to distinguish the individual FBG readings. Each FBG sensor is assigned a specific measurement window. The Bragg wavelengths of the individual sensors follow the spectral model of [36] and are given by the relationship in Equation (4), where λ_Bn is the Bragg wavelength of the n-th sensor, λ_B0 is the wavelength of the left boundary of the radiation source's spectral region, k_Nε is the normalized deformation coefficient, MR_P is the positive measurement scope, MR_N is the negative measurement scope and GB is a Guard Band. The maximum number of sensors that can be evaluated from one optical fiber is limited by the right boundary of the radiation source (the right boundary of the last measurement window must be smaller than the right boundary of the radiation source).
Movements of the thoracic cavity (its expansion or contraction) depend on the way of breathing (deep or shallow) and particularly on its capacity. For this reason, a series of measurements was carried out to set the input parameters (MR_P, MR_N, GB) of the sensor array design. The aim of these measurements was to identify the extreme case, i.e. the largest expansion of the thoracic cavity during breathing, which determines the maximum deformation of the sensor. These measurements were carried out on 10 volunteer test subjects of different age, sex, weight and height. Based on the realized measurements, the input parameters were defined as MR_P = 800 µstrain and MR_N = −800 µstrain. The Guard Band provides a level of immunity against unpredictable effects; however, it decreases as the number of monitored subjects grows. In our computations we selected GB = 0.4 nm based on a number of experimental observations. Table 1 shows the parameters of the fiber Bragg gratings for each sensor. These parameters were calculated by using Equation (4) with the input parameters MR_P = 800 µstrain, MR_N = −800 µstrain and GB = 0.4 nm. Figure 6 shows the power spectrum of the LED source radiation with the reflected spectral parts of the 32 FBG sensors in series.
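A rough sanity check of the wavelength budget described above can be scripted as in the sketch below. It does not reproduce Equation (4) itself; it simply assumes that each sensor occupies a window equal to the full strain range (MR_N to MR_P) converted with the 1.2 pm/µstrain coefficient plus one guard band, and verifies that 32 such windows fit inside the 1512.5-1587.5 nm source spectrum.

```python
# Hypothetical window-budget check for the WDM sensor array (not Equation (4) itself).
K_EPS_NM_PER_USTRAIN = 1.2e-3    # deformation sensitivity, 1.2 pm/µstrain expressed in nm
MR_P, MR_N = 800.0, -800.0       # positive / negative measurement scope in µstrain
GB = 0.4                         # guard band in nm
SRC_LO, SRC_HI = 1512.5, 1587.5  # SLED spectral range in nm

window = (MR_P - MR_N) * K_EPS_NM_PER_USTRAIN + GB   # width of one measurement window
n_sensors = 32

centers = [SRC_LO + (i + 0.5) * window for i in range(n_sensors)]
print(f"window width = {window:.2f} nm")                      # 2.32 nm
print(f"total span   = {n_sensors * window:.2f} nm "
      f"(available {SRC_HI - SRC_LO:.1f} nm)")                # 74.24 nm vs. 75.0 nm
print(f"first/last window centre: {centers[0]:.2f} / {centers[-1]:.2f} nm")
```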
Results
The experimental tests were carried out on ten persons of both sexes (six men and four women) in a room with a temperature of 24 °C. The tested persons were between 20 and 45 years of age, their weight was between 55 and 115 kg and their height was between 155 and 192 cm. No significant differences were found in the quality of the received signal depending on age, weight, or height. The experiments were carried out in a lab environment and were discussed with the senior doctor of the long-term health care department of the University Hospital in Brno, Czech Republic. The vital sign monitoring probe was placed around the pulmonic area on the chest and fixed in position by a contact elastic strap (Figure 7). The subjects were tested in both standing and supine positions. The measurements showed that the testing positions did not influence the sensitivity of the measurement probe.
Two approaches were used to evaluate the heart and respiratory rates. The first approach determines these rates in the time domain by identifying the periodic cycles in the form of local maxima, Equation (5), whereas in the second approach the Fourier transform is applied to the obtained measurements to calculate the dominant frequency f and produce the rates in cycles or respirations per minute (rpm) using Equation (6).
Here, t_i is the time position of the i-th maximum (of the breathing or pulse waveform), t_{i−1} is the time position of the previous maximum, and f is the dominant frequency obtained using the Fourier transform. The measurements are based on monitoring the movements of the thoracic cavity of a subject while breathing. The mechanical strain is transferred to the FBG by a thoracic elastic strap. The body temperature as well as the respiratory and heart rates were obtained by the spectral evaluation of the measured signals.
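Given the variable definitions above, Equations (5) and (6) presumably correspond to the usual peak-interval and dominant-frequency rate estimates (in events per minute):

```latex
\mathrm{rate}_{(5)} = \frac{60}{t_i - t_{i-1}}\ \ [\mathrm{min}^{-1}],
\qquad
\mathrm{rate}_{(6)} = 60\, f\ \ [\mathrm{min}^{-1}].
```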
Measurement of Respiratory Rate
Application software was developed in Matlab (R2015a, MathWorks, Natick, MA, USA) to perform a variety of tasks, including: (1) calculation of the breathing rate based upon the acquired breathing data; (2) controlling the measurement process and setting up the constant parameters necessary for breathing rate calculations; (3) carrying out the final analysis and evaluation of the breathing rate measurement error. The software also produced a bar chart which was updated based upon previously set parameters such as the duration of the measurement and the durations of breathing in (inspiration) and breathing out (expiration). The bar charts clearly validated that the subject under test was breathing and ensured the determination of a relatively constant breathing rate. The application also produced the final value of the respiratory rate, which was used for data processing and determination of the system error rate. The multichannel measurement system (including our novel probe) was used to monitor the subject's breathing activity (measured as cycles or respirations per minute) with a sampling frequency of 300 Hz. For example, the Bragg wavelength shift for subject M1 is shown in Figure 8a. The key experimental results of the breathing measurements are summarized in Table 2. The maximum relative error of 5.41% was observed in subject F3. The total relative system error rate for the ten test subjects and a total recording time of 159 min and 40 s was 3.9%. It should be noted that in the monitoring of mechanical vibrations of the human body, as in any physiological measurement, the signal quality can be corrupted by external motion artifacts. These errors can be caused by the high sensitivity of the sensor. This sensitivity becomes problematic when the monitored person, for example, changes his/her position (minor artifacts), performs additional movements, coughs, etc. (major artifacts). The amplitude of the stresses produced by such movements can then be higher than the amplitude of the vibrations due to breathing (Figure 9). Figure 9. A recording from a subject in this study (M2). Subject M2 was asked to change his position and to simulate strong coughing. For example, the influence of a position change on the respiratory rate is shown in red on the left (minor artifacts) and the influence of strong coughing on the breathing rate is shown in red on the right (major artifacts).
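A minimal reimplementation of the time-domain evaluation could look like the sketch below (the original application was written in MATLAB). The sampling frequency follows the text; the minimum breath period and the peak-detection settings are placeholder choices, not the authors' parameters.

```python
# Sketch: respiratory rate from the Bragg-wavelength shift signal, sampled at 300 Hz.
import numpy as np
from scipy.signal import find_peaks

FS = 300.0  # sampling frequency in Hz

def respiratory_rate_rpm(wavelength_shift, min_breath_period_s=1.5):
    """Estimate breaths per minute from local maxima of the wavelength-shift trace."""
    peaks, _ = find_peaks(wavelength_shift, distance=int(min_breath_period_s * FS))
    if len(peaks) < 2:
        return float("nan")
    intervals_s = np.diff(peaks) / FS          # peak-to-peak periods, Eq. (5) style
    return 60.0 / float(np.mean(intervals_s))  # respirations per minute
```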
Measurement of Heart Rate
The acquired data for the heart rate measurement were processed using a Butterworth second-order band-pass filter with corner frequencies f_L = 0.75 Hz and f_H = 5 Hz. The respiratory component, which lies below 0.75 Hz, and the unwanted signals (noise) with characteristic frequencies higher than 5 Hz were filtered out. The filtered signal of subject M1, which represents the superposition of breathing and pulse pressure, is shown in Figure 10a (details are shown in Figure 10b). The filtering procedure does not cause a loss of physiological information. Using Fourier analysis (Figure 11), we determined the dominant frequency (1.145 Hz) of the filtered (blood pressure) pulse waveform and calculated a heart rate of 68.7 beats/min, which closely matched the reference measured value of 68.1 beats/min. The reference ECG signal was acquired with the use of standard gel electrodes fixed to the monitored person's chest, using a real-time monitoring system for an ECG signal with virtual instrumentation. The low-noise ECG signal was acquired by the National Instruments Educational Laboratory Virtual Instrumentation Suite (NI ELVIS, II Series, National Instruments, Austin, TX, USA) using a three-lead system. Appropriate signal processing was applied to the pulse pressure data acquired from our novel sensor. These processing steps produced a noise-free signal. Fast and reliable detection of the pulse peaks was then accurately achieved, and the estimation of clinically important parameters such as pulse peak-to-peak intervals corresponding to ECG R-R intervals (R-R is the time elapsing between two consecutive R waves in the electrocardiogram) became possible. Figure 11. The frequency spectrum of the filtered pulse pressure signal (for heart rate calculation) in subject M1.
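The pulse-wave processing chain described above, a second-order Butterworth band-pass between 0.75 Hz and 5 Hz followed by a search for the dominant spectral peak, can be sketched as follows. The signal array and the 300 Hz sampling rate are assumed; this is not the authors' MATLAB code.

```python
# Sketch: heart rate from the band-pass-filtered FBG signal via its dominant frequency.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 300.0            # assumed sampling frequency in Hz
F_LO, F_HI = 0.75, 5  # band-pass corner frequencies in Hz (as in the text)

def heart_rate_bpm(signal):
    b, a = butter(2, [F_LO / (FS / 2), F_HI / (FS / 2)], btype="bandpass")
    filtered = filtfilt(b, a, signal)          # zero-phase filtering
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / FS)
    band = (freqs >= F_LO) & (freqs <= F_HI)   # search only inside the pass band
    f_dominant = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * f_dominant                   # beats per minute, Eq. (6) style
```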
In order to compare the differences between the reference heart rate (HR) and the heart rate estimated using the signal from the sensor, the Bland-Altman plot was utilized [37]. The differences between the sensor and the reference traces, x_1 − x_2, are plotted against the average, (x_1 + x_2)/2. The reproducibility is considered to be good if 95% of the results lie within the ±1.96 SD (Standard Deviation) range. Figure 12 shows the Bland-Altman statistics for the tested subjects. The heart rate (HR) is expressed in beats per minute (bpm). Figure 12. Reproducibility of the HR determination capabilities of our novel sensor probe (based on comparison with ECG-based HR calculations) using the Bland-Altman method, with data acquired from ten subjects.
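The Bland-Altman statistics used here reduce to the bias of the differences and the ±1.96 SD limits of agreement. A small helper such as the one below, with hypothetical input arrays, reproduces the quantities plotted in Figure 12.

```python
# Sketch: Bland-Altman limits of agreement between sensor-derived and reference ECG HR.
import numpy as np

def bland_altman(hr_sensor, hr_reference):
    x1, x2 = np.asarray(hr_sensor, float), np.asarray(hr_reference, float)
    diff = x1 - x2                  # differences plotted on the y-axis
    mean = (x1 + x2) / 2.0          # averages plotted on the x-axis
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    inside = np.mean((diff >= limits[0]) & (diff <= limits[1])) * 100.0
    return {"bias": bias, "limits": limits, "percent_within": inside, "mean": mean}
```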
The key experimental results of the cardiac activity measurements are summarized in Table 3. For the entire data set, 96.54% of the values lie within the ±1.96 SD range in the HR determination, and no significant differences between individuals were observed. The results show no systematic errors, and the error has no proportional character and does not depend on the HR value. The Bland-Altman statistical analysis demonstrates HR detection with a satisfactory accuracy for multiple subjects. The heart rate (HR) is expressed in beats per minute (bpm). As with the respiratory measurements, motion artifacts remain a concern: the amplitude of the stresses produced by such movements can be several dozen times higher than the amplitude produced by the heartbeat (Figure 13).
Figure 13. A recording from a female subject in this study. Subject F3 was asked to simulate strong coughing and to change her position. For example, the influence of strong coughing on the heart rate is shown in red on the left (major artifacts) and the influence of a position change on the heart rate is shown in red on the right (minor artifacts).
Measurement of Body Temperature
As the FBG is sensitive to both temperature and deformation, one possible way to detect both simultaneously is to use two FBGs with different thermal and deformation sensitivities [38]. Our novel measurement probe, with its encapsulation and specific shape, enables us to achieve different sensitivities. When the measurement probe is influenced by both temperature and deformation simultaneously, the amplitude of both effects is calculated by the relationship in Equation (7), where ∆T is the temperature change, ∆ε is the deformation, K_nε is the deformation coefficient and K_nT the temperature coefficient belonging to the first or second FBG, and D is the discriminant, given in Equation (8), which must be non-zero. Using Equations (7) and (8), it is possible to determine the extent of the deformation and the temperature from the two FBG wavelengths. Figure 14 shows the results of the body temperature measurement using our system on a test subject for 180 s. The calculation of the body temperature using Equations (7) and (8) produced 35.8 °C, which closely matched the reference value of 35.7 °C measured by a digital thermometer (Greisinger, Prague, Czech Republic) and its temperature recordings. The measurement accuracy is determined by the spectral resolution of the optical spectrum analyser of 1 pm. Due to the temperature sensitivity of the fiber Bragg gratings of around 10 pm/°C, the final temperature value can be determined to one-tenth of a °C. When the measurements were taken, there was no record of any significant influence (the maximum relative error was 0.36%) of body movement on the temperature measurement. This is because both gratings were under the influence of the same deformation, in which case the temperature is calculated according to Equation (7). Figure 14. Body temperature measurement result for test subject M1 over 180 s using our novel sensor system and the reference digital thermometer. Table 4 shows the temperature values obtained over a time interval of 180 s with a step of 30 s. The maximum relative error of 0.36% was observed in subject F3.
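Written out explicitly, the two-grating discrimination described by Equations (7) and (8) is presumably the usual inversion of the combined temperature-strain response matrix:

```latex
\begin{pmatrix}\Delta\lambda_{B1}\\ \Delta\lambda_{B2}\end{pmatrix}
=
\begin{pmatrix} K_{1T} & K_{1\varepsilon}\\ K_{2T} & K_{2\varepsilon}\end{pmatrix}
\begin{pmatrix}\Delta T\\ \Delta\varepsilon\end{pmatrix},
\qquad
\Delta T=\frac{K_{2\varepsilon}\,\Delta\lambda_{B1}-K_{1\varepsilon}\,\Delta\lambda_{B2}}{D},\quad
\Delta\varepsilon=\frac{K_{1T}\,\Delta\lambda_{B2}-K_{2T}\,\Delta\lambda_{B1}}{D},
\qquad
D=K_{1T}K_{2\varepsilon}-K_{2T}K_{1\varepsilon}.
```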
Discussion
Our novel probe and its associated multichannel system offer the possibility of continuous monitoring of the basic vital signs (body temperature, heart rate and respiratory rate) in up to 128 patients altogether. The creation of a totally non-invasive, electrically safe (usable in the magnetic resonance imaging environment), cost-effective and patient-friendly technology that enables large-scale vital sign monitoring, data collection and analysis is indeed exciting. Such technology could prove very useful for long-term health care facilities, hospitals and clinics. Our novel non-invasive biocompatible measurement probe ensures maximum electrical safety and patient comfort. The functionality of the proposed system was verified by a series of real experimental measurements of basic vital signs. Due to space limitations, we do not intend to consider all aspects of the design and implementation of our novel sensor system here. This short article should serve as a gateway into, and an initial step towards, a new field of non-invasive and cost-effective patient monitoring which is in its infancy and has not been comprehensively explored. For any new biomedical technology like our novel sensor system and its underlying methodologies to qualify as an effective scientific instrument and, more importantly, gain clinical acceptance and find daily utilization in common clinical practice, it is essential to carry out extensive clinical research to validate its safety and efficacy. These important requirements compel us to set comprehensive research goals for clinical trials in the near future.
It is important to emphasize that PDMS offers a unique set of desirable characteristics suitable for biomedical applications. Its non-reactivity to human skin, its mechanical and thermal resistance, its immunity to electromagnetic noise, and its ability to increase the temperature sensitivity make PDMS a very attractive material of choice for our novel system. Alternative current approaches to non-invasive monitoring of vital signs include patch monitors [39,40] and sensors embedded into a bed or seat, which do not require additional actions to prepare the patient for monitoring [3].
Our solution is primarily focused on the monitoring of long-term ill patients with a minimal physical movement load. During the experiments, all test subjects were asked to simulate their natural behavior as accurately as possible (the focus was on fine motor skills: movements of the arms and hands, legs and feet, changes of body position, coughing, and also walking). These aspects are reflected in the results described above on the efficiency of the probe, or, more precisely, of the whole measurement system. We are prepared to carry out a detailed analysis of the influence of these motion artifacts in follow-up research.
Conclusions
In this article, we briefly described the overall design and realization of the prototype of a novel non-invasive multichannel hybrid fiber-optic sensor system for basic vital sign monitoring.
Once clinically tested and validated, it has a great potential to establish itself on the market in the field of modern non-invasive patient monitoring. The main advantage of our solution is the design of a novel patient-friendly measurement probe, which constitutes the core of our system. The functionality of the system was verified by a series of experimental measurements of vital signs (Body Temperature, Heart Rate and Respiratory Rate). The integration of these three vital parameters within one all-purpose probe represents a unique patented solution by the authors. Our experimental results, acquired by testing our novel prototype in research laboratory conditions, have unambiguously proven the functionality of the system. To gain clinical acceptance and find daily utilization in common clinical practice, we are now positioned to carry out extensive clinical research to validate the safety and efficacy of our system.
None of the patients participating in the study complained of any discomfort associated with the presence of the sensor in a thoracic strap placed on her/his chest when asked about it after the examination. The Bland-Altman statistical analysis demonstrates heart rate detection with satisfactory accuracy in multiple subjects. For the entire data set, 96.54% of the values lie within the ±1.96 SD range for the HR determination. This could be acceptable for clinicians, as the sensor is designed for monitoring rather than diagnosis. The results of the respiratory measurement are characterized by a maximum relative error of 5.41%, and the maximum relative error of the temperature measurement is 0.36%. The clinicians at the University Hospital (Brno, Czech Republic) intend to use this novel sensor in the near future to perform clinical studies on a variety of health care issues.
"Computer Science"
] |
ECOLOGICAL RISKS IN THE ECONOMIC TRANSITION CONDITIONS: CASE STUDY OF REINDUSTRIALIZATION OF VALJEVO
Abstract: In this paper we present a case study of the reindustrialization of Valjevo and the ecological risks in the conditions of economic transition. The crisis in Serbia during the 1990s had, among others, consequences visible in manufacturing activities, especially industry. Nevertheless, Valjevo, as an evident example of a favorable business environment, has made significant efforts to build an environment for successful entrepreneurship. The aim of the paper is a critical presentation of the state of: the economy and employment, the processes of reindustrialization, and the environment and ecological risks in Valjevo. Special attention was given to the presentation of investment activities (direct investments recorded from companies from Austria, Italy, Slovenia, Croatia and other countries). The state of the quality of environmental elements has been estimated, and on this basis we can conclude that in Valjevo it is mainly satisfactory, although in some areas more or less altered. We also conclude that a higher concentration of population and various economic activities, along with inherited industrial production (outdated technological processes, large quantities of industrial waste, low energy efficiency, lack of facilities and equipment for pollution reduction, etc.), are in conjunction with the possible occurrence of ecological risks. It was found that the current state of data and planning documents related to the assessment of ecological risks on the territory of Valjevo is characterized by the basic availability of information on the dangers of possible natural disasters and technological accidents, as well as on the severity of the consequences that they can cause (endangering the health, security and life of people, damage of a smaller or larger volume, changes of the environmental status, etc.).
Introduction
Reindustrialization is a process of structural change in industry that proceeds from the need to reinvigorate national economies. Therefore, economic growth is stimulated through government aid and tax incentives, modernization of factories and machinery, (foreign) direct investments, etc. Its main feature is an increase in relative participation in GDP creation and employment, particularly in those industrial branches in which technical and technological progress is the leading development factor. According to Dimitrijević et al. (2013), reindustrialization does not mean the recovery of failed enterprises, but rather: the expansion of financially "healthy" (private and public) enterprises, the revitalization of public enterprises and the restructuring of companies that can contribute to eliminating the production gap, and the emergence and development of new enterprises based on modern technology platforms (in the private and public sector). The issues of the processes of transition and reindustrialization have been dealt with by numerous foreign (Dubenetskii, 2014; Gryczka, 2015; Göler & Lehmeier, 2012; Młody, 2016; Russu, 2010) and domestic authors (Adžić, 2007; Milivojević, 2015; Mićić, 2015; Mićić & Zeremski, 2011; Stevanović i dr., 2013), among many others.
Valjevo is the economic, cultural and administrative center of the Valjevo region and the seat of the Kolubara Administrative District. According to the 2011 census data, the city has 59,073 inhabitants and the municipality 90,312 (РЗС, 2014). In administrative terms, the city of Valjevo borders the municipalities of Ub and Koceljeva in the north, Osečina and Ljubovija in the west, Kosjerić and Požega in the south, and Mionica and Lajkovac in the east.
The industry of Valjevo began to develop after the end of the Second World War. However, the process of industrialization reached its peak in the late 1970s and early 1980s, which "caused" the urbanization of the city. The main industrial branch in Valjevo was metal processing, which in 1975 employed about 70% of the total number of employees in industrial activity (Скупштина Општине Ваљево, 1976). Investments are an important indicator of the development of certain (economic) branches. This branch was actively invested in, both by expanding existing capacities and by building new plants (until the 1990s). Investments in the industry of Valjevo amounted to only 18.80% of the total investments in the municipalities of the Kolubara District in 1991. Nevertheless, it has managed to remain the economic center of the District thanks to the (already) developed metal processing, food production and processing, and machine and electrical industries.
During the mentioned period, and afterwards, the number of workers employed in industry and mining increased. Based on data from the 1971 census, the share of employees in these sectors in the total (active) population of the municipality of Valjevo amounted to 16.73%, and in 2011 to 21.72% (calculated on the basis of data: РЗС, 2014b; СЗС, 1974). Contrary to this increase, in the period from 1971 to 2011, a decrease of 30% in the share of the population working in agriculture was recorded (at the municipal level). That resulted in a smaller volume of investments in agriculture. This was the "consequence" of the development of Valjevo as the district and administrative center, and of its acquisition of new (tertiary and quaternary) functions. Over time, Valjevo has specialized in tertiary-quaternary activities that took over primacy in relation to secondary ones, and according to Матијевић (2005) it gained the role of a regional pole of concentration. Regional aspects of reindustrialization have been the subject of research by many authors (Jucu, 2015; Kuleshov & Seliverstov, 2016; Lončar & Braičić, 2016; Ждан et al., 2016; Seliverstov, 2017; Wink et al., 2016), among others.
Methodology and data
This paper is structured for the purpose of examining and presenting the case study of the reindustrialization of Valjevo and the ecological risks in the conditions of economic transition, namely after the change of the political regime in Serbia in 2000. We thus tried to determine and critically examine: the process of reindustrialization and foreign direct investment inflows, the recent state of the economy (employment rates and types of industries), and the state of the environment and ecological risks in Valjevo during the given period. In terms of methodology, the paper is based on an analysis of data obtained from the Statistical Office of Serbia (РЗС). For a more comprehensive consideration of the given issues, various local and regional data (from the Department for Local Development and Economy of the city of Valjevo (Odsek za lokalni razvoj i privredu)) and publications (Local Environmental Action Plan of Valjevo (ЛЕАП), Spatial Plan of the Municipality of Valjevo (ППО Ваљево), General Urban Plan for the city of Valjevo (ГУП), General Regulation Plan for the Tourist Center Divčibare (План генералне регулације за туристички центар Дивчибаре), and Regional Spatial Plan for the Kolubara and Mačva District (РППП Колубарског и Мачванског УО)), as well as foreign sources (scientific papers and research), were used.
Recent state of the economy and employment
The transformation from a socialist country with a centrally planned economy towards a western-style democracy and market-based economy has caused dramatic changes in economic, social, ecological and spatial development in many post-socialist countries (Miljanović et al., 2010), as well as in Serbia. Like many other cities in Serbia, Valjevo faced the difficulties of a long-lasting process of economic restructuring. The transition period caused immeasurable damage to the economy of Valjevo. The consequences of the privatization process (of the textile and furniture industries, cardboard packaging, and large companies in the field of metal processing) are: • the restructuring and privatization of almost all large state-owned and socially owned enterprises (which employed most of the labor force), • an increase in unemployment (7,800 persons, or 89 per 1,000 inhabitants, in 2015) (РЗС, 2017). The present economy of Valjevo is characterized by the metal processing industry with weapons production, the production of electrical household appliances, ready-made clothing, primary agricultural production, construction and the graphic industry. The total number of employees in Valjevo in 2015 was 25,999 (44.01% of the total population), i.e. 296 per 1,000 inhabitants. Data on the registered number of employees indicate a high share of employees in the social sector (companies, enterprises, institutions, cooperatives and other organizations) - 19,400 or 74.62%; private entrepreneurs (self-employed persons) and their employees account for 4,719 or 18.15% of the total number of employees, and 1,880 (7.23%) were registered as individual farmers (based on data: РЗС, 2017). By sectors of activity, the largest number of employees is recorded in: the processing industry - 8,253, wholesale and retail trade (and repair of motor vehicles) - 3,521, the health care and social protection sector - 1,882, education - 1,804, public administration (and mandatory social insurance) - 1,627, construction - 1,314 and transport (and storage) - 1,006 persons (РЗС, 2017). Adaptability to the conditions of a market economy also recommends maximizing the use of existing capacities and organizing them in the conditions required by modern society. The degree of development success depends on the achieved level of dynamics and flexibility of all involved (state and local) stakeholders (Јеремић, 2013).
The base of the local economy consists of 5,675 enterprises (of which 96.73% are small, 2.74% medium and 0.53% large sized). The largest domestic companies are: Vujić doo, Agranela, Klanica Divci, INGRAP-OMNI, BOSIS and others (http://www.valjevo.rs/). Although Valjevo began to favor and stimulate the development of the private sector (as well as small and medium-sized enterprises) a few decades ago, its overall effects on income and employment are still insufficient. Nevertheless, Valjevo also makes significant efforts to build an environment for successful operations and to attract investors, and is recognizable by a large number of investment locations. The General Urban Plan for the city of Valjevo (2013) provides about 340 ha for two industrial zones, "Valjevo" and "Krušik", and the economic zone "Beloševac".
Processes of reindustrialization
Although the crisis in Serbia during the 1990s had consequences visible in the sphere of manufacturing activities, especially in industry, Valjevo, as an evident example of a favorable business environment, has made significant efforts to build an environment for successful entrepreneurship. Also, it is worth mentioning that investments (domestic and foreign) have a great impact on the whole industry as well as on the degree of industrialization. The accumulation (or lack) of investment activities in selected environments is the consequence of a number of factors, where socio-economic differentiation in a region is reflected in changed location factors and where its advantages or disadvantages also contribute to the occurrence of new social and regional inequalities (Аничић и др., 2011; Ravbar, 2009). Some of the reasons for investing in Valjevo are: a favorable geographical position, the good state of (communal, transportation, etc.) infrastructure, already built manufacturing and infrastructural capacities, an adequate age and educational structure of the labour force, a large number of investment sites, benefits for the development of agricultural production (availability of raw materials for the food processing industry), etc. Sectors and activities considered suitable for investments are: metal processing, the electrical, food production and processing, IT, chemical and textile industries, trade with financial services, and wood processing and construction. Therefore, the interest of foreign investors (for example, the Austrian "Austrotherm", the Italian "Golden Lady" and the Slovenian "Gorenje"), who have built production facilities in Valjevo in the past few years, is not surprising. Since 2012, there have been two megamarkets in the city: "Roda market" of the Croatian concern "Agrokor" and "InterEX" owned by the French "Intermarché", as well as the company "IDEA" owned by the Croatian retailer "Konzum". In April 2016, the retail park of the Austrian company "Immofinanz" began operating in Valjevo. The total value of the above investments is about 74 million €, with about 3,000 employees (Odsek za lokalni razvoj i privredu, 2016).
State of the environment
The basic factors of the state of environmental quality are: pollution of water, air and soil, devastation of areas and reduction of the quality of life caused by economic activities, as well as pollution caused by the absence of organized collection of solid municipal and hazardous waste, the irrational and uncontrolled exploitation of mineral raw materials, the uncontrolled and irregular use of agrochemicals in agriculture, and others. According to current data, the state of the environment of Valjevo is altered - more or less polluted and degraded in some areas.
Valjevo conducts continuous monitoring of environmental quality parameters (e.g. air and water). In the other settlements of the Municipality, permanent monitoring of the state of environmental media has not been established. Locations of the most important pollutants - industrial plants and exploitation sites of clay, Zn and Mn - can be classified as areas of polluted and degraded environment, while the industrial zone in Valjevo has been recognized as one of the most polluted sites ("hot spots") in the Kolubara District (РППП Колубарског и Мачванског УО, 2013).
Pollution of air, water and soil, ionizing radiation and noise are significantly caused by: activities in mining and metallurgy in the Valjevo basin (along with Lazarevac and Tamnava), the exploitation of clay, industrial activities, etc. A big source of (air and soil) pollution is road traffic in the city center and in the corridors of the state roads in Valjevo's surroundings, as well as heating and household combustion. A further problem is the inadequate state of communal infrastructure, which is reflected in: inadequate disposal of waste (a large number of non-sanitary landfills) and large quantities of wastewater from industry (and rural settlements), which are discharged without previous treatment into the Kolubara, Gradac and other rivers. However, a positive example is the water supply system of the city and the other settlements of the Municipality. The Valjevo water supply system has become one of the best waterworks in Serbia after the construction of the new water treatment plant "Pećina". After putting into operation a new water purification plant at Divčibare and introducing reverse osmosis, the problem of water quality in this waterworks has been solved (ППО Ваљево, 2005).
The quality of water resources in the research area is diverse. Watercourses of the Valjevo mountain range are without greater effluent loading and have excellent and very good status (class I and I/II). All watercourses in the city area, as well as the downstream Kolubara, have, according to the data from the Water Management Bases of the Republic of Serbia, a high quality class, although the real quality of the Kolubara is class III/IV. The quality of groundwater varies (particles of Fe, Mn and Zn are present) (Odsek za lokalni razvoj i privredu, 2016; РППП Колубарског и Мачванског УО, 2013).
Air quality depends on the level of pollutants from various sources of pollution. They can cause harmful effects on natural ecosystems and human health when they exceed the permissible limit values. However, air quality is not significantly threatened, with the exception of the city area of Valjevo in the winter months (heating, household combustion). Also, Joksić et al. (2010) point out that, as in other urban settlements in Serbia, the influence of PM10 (a byproduct of industrial activities and emissions from road traffic, heating and resuspended dust) on the state of air quality in Valjevo is high.
The area of Valjevo consists for the largest part (about 70%) of soils of quality classes I-IV, which means that arable land is predominant. Quality classes V-VIII include around 30% and cover hilly and mountainous terrains (Odsek za lokalni razvoj i privredu, 2016). The soil is endangered by exploitation, industry and traffic. The Valjevo region is extremely endangered by erosion, with occurrences of heavy erosion (in the Kolubara basin), medium erosion (in the municipality of Valjevo) and weak erosion over larger areas (РППП Колубарског и Мачванског УО, 2013).
The communal waste collection system covers 95% of the city population, and only a few rural settlements. From the aspect of citizens' security and risk management, it is necessary to emphasize that Valjevo has solved the problems of managing hazardous chemical and medical waste, but it has not solved the problem of disposal of slaughtering industry waste (Odsek za lokalni razvoj i privredu, 2016). In the waste management sector, unofficial and old landfills and dumps are also a major problem, mostly in rural settlements that are not part of an organized waste collection network.
The area of the tourist and mountain resort Divčibare, due to negative anthropogenic activities, is threatened with serious environmental damage. The expansion of construction during recent years requires an urgent solution of one of the key problems - the sewerage network. If nothing is done in the near future, there is a danger of deterioration of ground and surface water resources and soil quality, among other consequences (План генералне регулације за туристички центар Дивчибаре, 2008).
Ecological risks
Risk represents a measure of a certain level of probability that some (natural or anthropogenic) activity or process, directly or indirectly, might cause a danger to the environment, human life and health, and other values. Risk assessment/analysis is a methodology used for determining the nature and extent of risk by analyzing potential hazards and evaluating existing conditions of vulnerability that could pose a potential threat or harm to people, property, livelihoods and the environment on which they depend (Rodier & Norton, 1992; UNISDR, 2009).
Ecological risk can be defined as the evaluation of the likelihood of environmental hazards that can lead to endangering humans, the living world or ecosystems (US EPA, 1998). In this regard, the following activities should be pointed out: the management of ecological risks - a complex process of collecting, organizing, analyzing and presenting scientific and other data for the purpose of enacting and implementing decisions related to the protection and improvement of the state of the environment - and the assessment of ecological risk - establishing the existence of a relationship between risk factors (stressors) and ecological effects (Bakrač i dr., 2012). Brigagao (1990) emphasizes the importance of the influence of social (in)equity and local sustainable development on the state of (the elements of) the environment and the occurrence of (ecological) risks. The same author highlights desirable activities such as: equality of rights over natural resources, prohibition of environmental aggression, exchange of information on (existing and possible) national and regional ecological risks, cooperation in emergency situations, establishing responsibility for environmental problems at the international level, as well as self-sufficient development.
In Valjevo, the following potential ecological risks (from the occurrence of natural disasters) are recognized: hydrological (floods), lithospheric (earthquakes, landslides), atmospheric (hailstorms, droughts) and biological (forest fires). The degree of vulnerability differs and depends on the type of disaster and the expected potential damage (endangering the health, security and life of people and material damage of a smaller or larger volume). These risks, according to Dragićević & Filipović (2009), might have a significant and tragic impact on society, impair normal ways of life, hinder the economic, cultural, and sometimes political conditions of life, and delay the development of local communities. In terms of preventive actions and measures to reduce and limit the impacts of this type of ecological risk, the following problems have been identified: an insufficiently developed civil protection system for assistance during natural disasters and catastrophic events, the lack of an early warning system, an unsatisfactory level of awareness and the lack of a citizens' culture of security in terms of protection, rescue and disaster risk reduction, the lack of a state insurance system against emergencies, and others. Also, industrial production (at the national as well as the regional (and local) level) contributes to environmental pollution and the occurrence of ecological risks due to: obsolete technological processes, a low volume of use of secondary raw materials, low energy efficiency, the existence of large quantities of industrial waste, the lack of stimulative measures, as well as of pollution abatement technologies and equipment (especially waste water treatment plants, and facilities for exhaust gases and hazardous waste). In terms of ecological risk aspects, due to the specificity of industrial production in Valjevo (e.g. the potential ecological risks of the integration of explosive substances into ammunition), the company "Krušik", a manufacturer of artillery and rocket ammunition, stands out. The greatest risks are the processes of casting and pressing explosive substances and the emissions of mixtures of amorphous explosives into the air in the process of ventilation of production facilities. Although rigorous military-control regulations are respected, the ventilation and control equipment is outdated and requires more frequent verification procedures. Also, a particular problem in "Krušik" are dispersed powders of Al, Mg and other metals, which are additives to pyrotechnical mixtures during the final integration of ammunition assemblies (Milinović i dr., 2008).
Conclusion
It can be concluded that, during and after the transitional period, continuous work on building a constructive framework for national security and environmental risk management is necessary, both at the national and at the regional and local levels. Numerous contemporary risks and dangers (ecological, technological, technical, health and other) that local communities face, as well as the particularity and sensitivity of the consequences they cause (environmental, social, material, health), impose the need for protection and security to become a priority and interest of the civil society sector.
The current state of data and planning documents regarding the occurrence of ecological risks on the territory of Valjevo is characterized by the basic availability of information, both on the dangers of possible natural disasters and anthropogenic accidents, and on the severity of the consequences they can cause. In order to evaluate the degree of spatial vulnerability (and the possibility of occurrence of ecological risks), it is necessary to create a vulnerability cadastre in the function of spatial and urban planning, in the form of a list of points and zones of possible risks, the probability of occurrence, the extent of consequences, and the protection priorities to be set. Unfortunately, the data suggest that there are insufficient capacities of local authorities, expert services and professional personnel trained for a modern approach to managing ecological risks, and inadequate (and insufficient in volume) monitoring of the natural and anthropogenic processes that determine the occurrence of ecological risks in Valjevo.
In order to minimize the consequences of possible ecological risks and to develop a sustainable and effective way of managing them, it is required to undertake measures of prevention and protection, both of the population and of natural and material goods. In this regard, at the local level in Valjevo, the existence and operation (preventive and curative) of the Caritas office, a trained facilitator for assisting and supporting local communities in designing ways to reduce risks and face possible natural disasters, should also be emphasized. Also, it is essential that investors, as well as the local government in Valjevo, follow legal norms and standards and realize the importance of using new technologies that can provide a better quality of the environment in the city and its surroundings and the prevention of ecological risks. This refers to: the affirmation of the use of renewable energy sources, the preservation of water and air quality, the modernization of the waste management system (recycling, purification, etc.), the development of databases and information systems, as well as the monitoring of the state of the environment, and the introduction of environmentally friendly and best available technologies in production, transport, energy, etc. In order to achieve these priorities in the forthcoming period, the activities should focus on socially responsible business, establishing better links between science, technology and entrepreneurship, and raising the level of awareness of all stakeholders of the importance of the management and control of ecological risks.
"Environmental Science",
"Economics"
] |
Rare-Earth-Doped Low Phonon Energy Halide Crystals for Mid-Infrared Laser Sources
For the past ~15 years, solid-state lasers emitting in bands II (2.7-4.3, 4.5-5.2 µm) and III (8-14 µm) of the atmospheric transparency spectral range have been developed for imaging, polluting-species detection, as well as military NRBC detection and optronic countermeasures. Because most of these applications require highly brilliant and/or high-peak-power laser sources, several RE³⁺-doped (RE = rare earth) low phonon energy (ħω < 400 cm⁻¹) chloride and bromide crystals, such as APb2X5 (A=K,Rb; X=Cl,Br) or CsCdBr3, stand out as promising laser gain media in the mid-infrared (MIR) spectral range [Doualan & Moncorgé, 2003; Isaenko et al., 2008]. Indeed, these bulk crystals, transparent up to more than 18 µm, can be used at room temperature and their emission lifetime-emission cross section product (σEM·τR) at the laser wavelength is high enough to allow for energy storage and subsequent pulsed-regime laser operation. Moreover, in these systems, the laser beam quality can remain high even at high output powers.
Introduction
For the past ~15 years, solid-state lasers emitting in bands II (2.7-4.3, 4.5-5.2 µm) and III (8-14 µm) of the atmospheric transparency spectral range have been developed for imaging, polluting-species detection, as well as military NRBC detection and optronic countermeasures. Because most of these applications require highly brilliant and/or high-peak-power laser sources, several RE³⁺-doped (RE = rare earth) low phonon energy (ħω < 400 cm⁻¹) chloride and bromide crystals, such as APb2X5 (A=K,Rb; X=Cl,Br) or CsCdBr3, stand out as promising laser gain media in the mid-infrared (MIR) spectral range [Doualan & Moncorgé, 2003; Isaenko et al., 2008]. Indeed, these bulk crystals, transparent up to more than 18 µm, can be used at room temperature and their emission lifetime-emission cross section product (σEM·τR) at the laser wavelength is high enough to allow for energy storage and subsequent pulsed-regime laser operation. Moreover, in these systems, the laser beam quality can remain high even at high output powers. This chapter is composed of three parts. The first one reviews all the successful laser operations ever demonstrated. The basic thermal, mechanical, optical and spectroscopic characterizations are presented for a series of halide crystals: single-crystal Raman spectroscopy, Fourier-transform infrared (FTIR) spectroscopy, X-ray diffraction (XRD) and thermal conductivity data, showing at a glance the transparency range, the highest phonon energies, the site symmetry of the RE³⁺ ions as well as the mechanical hardness, among other laser-related characteristics. The second part deals with the spectroscopy of RE³⁺ ions in relation to the pumping strategy, for a better description of the population-inversion kinetics at play during laser operation. Mechanisms such as upconversion energy transfer (ETU) and excited-state absorption (ESA) are detailed in the case of Er³⁺ and Pr³⁺ ions. Spectroscopic data on the more exotic Tl3PbX5 (X=Cl,Br) crystals, which are optically nonlinear in addition to sharing the general propensity for MIR laser operation, are presented only to illustrate crystal-field strength trends affecting the absorption and emission bands, as well as energy-transfer mechanisms between Er³⁺ ions and, ultimately, gain cross sections. The third part of this chapter addresses the synthesis and crystal growth of pure and RE-doped chlorides and bromides in relation to their spectroscopic and laser operation properties. The choice of such growth parameters as the nature and shape of the crucible, the nature of the gas and its pressure, and the growth rate is fundamental to avoid bubble formation, stabilize the RE³⁺ oxidation state, minimize the complications arising from crystallographic phase transitions and the mechanical stresses upon cooling, control as much as possible RE³⁺-ion segregation, and so on. All these aspects ultimately affect laser operation, and the relationships between growth conditions, growth defects and laser performances have scarcely been discussed in the literature on these laser materials, which is surprising if these crystals are to be produced on a large industrial scale by the widespread Bridgman-Stockbarger method.
MIR solid state laser operation
To date, the longest laser wavelength ever achieved, 7.15 µm [Bowman et al., 1994, 1996], has been obtained with a 4-mm-long LaCl3:Pr³⁺ (0.7 at.%) single crystal operated in the pulsed regime. This pioneering work by Bowman and his collaborators has not been reproduced since 1996, probably because the crystals were so hygroscopic that they had to be kept in a cryostat during all the measurements to avoid their deliquescence. In these crystals, the most energetic phonon vibrates at 210 cm⁻¹ and the transparency range extends up to 15 µm. The pumping scheme of this laser emission involves an ETU mechanism by photon addition from the thermalized 3H6 and 3F2 levels according to 2×(3H6) → 3H5 + 3F3 (figure 1). The laser crystal is pumped at 2.02 µm into the 3F2 level by means of a Tm:YAG laser (itself diode- or flashlamp-pumped) delivering free-running pulse trains at frequencies from 2 to 100 kHz with an average power of 70 W. In spite of this "awkward and expensive 2 µm pumping scheme", as Howse et al. recently put it [Howse et al., 2010], the crystal exhibits a high absorption cross section around 2 µm (10 times higher, for instance, than the absorption cross section around 800 nm, which corresponds to the 4I9/2 pump level of Er³⁺:KPb2Cl5, figures 2 and 3). The laser emission centered at 7.15 µm occurs between levels 3F3 (the experimental lifetime of which, τ2, measured by direct pumping with an Er³⁺ laser at 1.6 µm, equals 58 µs) and 3F2. At 20 °C, the laser slope efficiency reaches 3.9 %, the absorbed power conversion yield 2.3 % and the pump threshold ~4 mJ. Bowman et al. suggested that thermalization of levels 3F4 and 3F3 would make direct pumping at 1.5 µm on the 3F4 level efficient. The main reason why the APb2X5 (A=K,Rb; X=Cl,Br) family of laser hosts has triggered a continuous breed of publications since 2001 lies in the non-hygroscopicity of the crystals, which turns out to be unusual among chloride and bromide host crystals with luminescent properties [Egger et al., 1999; Kaminska et al., 2011; Nitsch et al., 1993, 1995a, 1995b, 2004; Nitsch & Rodová, 1999; Riedener et al., 1997; Rodová et al., 1995; Vinogradova et al., 2005; Zhou et al., 2000]. The interest of diode pumping lies in the simple and compact cavity design (cavity length of 8 mm with KPb2Cl5:Er³⁺ crystals), and that of KPb2Cl5 crystals, as previously stated, in the possibility of manipulating a non-hygroscopic and air-stable gain medium. Even if a 19 W power diode had to be used (absorption rate ~4 %) with a beam waist of ~150 µm, it should be noted that this crystal was efficiently operated at ~50 Hz, without any cooling system, clearly establishing its satisfactory thermomechanical properties (table 1). Laser operation was demonstrated at 5.5 µm in RbPb2Cl5 crystals doped with Dy³⁺ ions (2×10¹⁹ cm⁻³, 6H9/2 + 6F11/2 → 6H11/2), with a flash-lamp-pumped YAG:Nd³⁺ laser operating in free multimode simultaneously at 1.32 and 1.34 µm, with a repetition rate of 2.5 Hz and a beam waist in the crystal of ~300 µm [Okhrimchuk et al., 2007]. Preliminary laser tests on unoriented crystals obviously full of scattering defects gave a laser slope efficiency of ~0.1 % and a threshold of 25 mJ. Although Dy³⁺ ion spectroscopy was investigated in CaGa2S4 and KPb2Cl5 crystals because of the seemingly promising laser transition 6H11/2 → 6H13/2 at 4.31 µm, we shall not insist on that since these systems cannot currently be diode pumped at 1.32 µm [Nostrand et al., 1998, 1999; Okhrimchuk, 2008].
Table 1. Laser-related crystallographic, thermal (including thermal expansion coefficients, 10⁻⁶ K⁻¹), mechanical, optical and spectroscopic properties of selected low phonon energy halide crystals [Aleksandrov et al., 2005; Atuchin et al., 2011; Cockroft et al., 1992; Doualan & Moncorgé, 2003; Ferrier et al., 2006a, 2007, 2008a, 2008b, 2009a, 2009b; Heber et al., 2001; Isaenko et al., 2008, 2009a, 2009b; Malkin et al., 2001; Mel'nikova et al., 2005; Merkulov et al., 2005; Neukum et al., 1994; Nostrand et al., 2001; Okhrimchuk et al., 2006; Popova et al., 2001; Quagliano et al., 1996; Ren et al., 2003; Singh et al., 2005; Velázquez et al., 2006a, 2006b; Virey et al., 1998; Vtyurin et al., 2004, 2006]. (*) per lead-type crystallographic site.
Er³⁺-doped single crystals
The great number of closely spaced energy levels within the 4f configuration explains the interest in RE ions as optically active species for MIR applications. When dissolved in low phonon energy crystals such as chlorides and bromides, the nonradiative multiphonon emission probabilities between the two levels of the laser transition are significantly reduced. Hence, they allow for reaching long emitting-level lifetimes, on the order of a few tens of µs to tens of ms. Consequently, RE-doped APb2X5 crystals are likely to ensure sufficient energy storage for amplification. The shape and magnitude of the absorption bands around 800 and 980 nm (figure 3), where efficient, compact, rugged, high-powered and cheap laser diodes are readily available as pumping sources, have been widely characterized. The forced electric dipole emission bands from the 4I9/2 multiplet of Er³⁺ ions, which are never obtained in the MIR range (1.7 µm, 4.5 µm) in oxides and fluorides, were also exhaustively discussed (figure 4). On the other hand, energy transfers between Pr³⁺ ions (figure 1), on which the world record in terms of laser wavelength is based, have been investigated with an emphasis on ion-pairing effects, by comparing the efficiency of energy transfers in KPb2Cl5, Tl3PbBr5 and CsCdBr3, all of which are non-hygroscopic. Exactly as in the case of Er³⁺ ions, the shape, magnitude and possible polarization effects of the absorption and emission bands involved in laser operation around 5 µm have been commented on and compared to well-established laser systems in the near infrared such as Nd³⁺:YAG. Absorption (or transmission), emission and excitation spectra recorded as a function of temperature at low temperature (10-50 K) over a broad spectral range allow for accurately determining (±3 cm⁻¹) the crystal-field energy sublevels of Er³⁺ ions dissolved in KPb2Cl5 and Tl3PbBr5 and fitting the energy level structure with parameterized crystal-field model Hamiltonians. This makes it possible, among other things:
to calculate the electronic sum-over-states function of the first 11 multiplets (from 4I15/2 to 2H9/2), and to characterize the possible existence of several incorporation sites for RE ions. The knowledge of these data is mandatory not only to estimate the absorption and emission cross sections of the optical lines exploited for laser operation, but also to assign the bands observed in the excitation and anti-Stokes emission spectra. This has given a strong impetus to spectroscopic investigations in Russia, in Europe and in the USA [Balda et al., 2004; Gruber et al., 2006; Jenkins et al., 2003; Quimby et al., 2008; Tkachuk et al., 2005, 2007]. The peak-by-peak assignment of crystal-field sublevels also permits checking the number of peaks expected for a complete degeneracy lift of each multiplet (J+1/2). This demonstrates the occurrence of only one symmetry type of site for Er³⁺ ions in these two host compounds, a priori C1 (table 1). However, as a great number of non-equivalent defects is likely to form (for instance, more than one hundred in KPb2Cl5 [Velázquez et al., 2006b]) and the point-group symmetry is very low for all the atomic positions, the precise characterization of the Er³⁺ incorporation sites in both compounds remains difficult. Crystal-field calculations with Cs/C2 point groups lead to a satisfactory agreement between experimental and calculated energy levels [Ferrier et al., 2007, 2008a; Gruber et al., 2006].
Absorption cross sections around 800 and 980 nm, the spectral range in which diode pumping is commonly available, are shown in figure 3. The absorption bands are large (for an RE³⁺-doped system) and therefore suitable for diode pumping. Judd-Ofelt analysis performed with absorption spectra over the first 11 excited states showed that the branching ratio of the 4I9/2 → 4I11/2 transition around 4.5 µm amounts to only 1.2 %. But even if this is low compared to other laser systems, these absorption cross sections are typical of forced electric dipole transitions and sufficient to be exploited under diode pumping [Bowman et al., 1999, 2001; Condon et al., 2006a], provided that a high power is used. Cross-section-calibrated emission spectra obtained under excitation at 800 nm display a poorly structured, virtually polarization-independent broad band, which exemplifies the interest in chlorides and bromides for MIR laser applications (figure 4). Indeed, emissions from multiplet 4I9/2, to multiplet 4I13/2 at 1.7 µm and to multiplet 4I11/2 at 4.5 µm, are observed neither in oxides nor in fluorides. In particular, in Tl3PbBr5:Er³⁺, 15 "Raman" phonons of the highest energy are required to match the energy difference between the two multiplets involved in the laser transition 4I9/2 → 4I11/2, which entails multiphonon emission kinetics much slower than the radiative de-excitation ones (figure 2).
Emission cross sections, ~2×10⁻²¹ cm² (σEM·τR ~(3-5)×10⁻²⁴ cm²·s at λmax ~4.5 µm), are as high in Tl3PbBr5 as in KPb2Cl5, and the experimental lifetimes of the first three excited levels are 2 to 4 ms in both compounds, which is favourable to diode-pumped pulsed-regime laser operation. As the experimental lifetime of the terminal level (more than 3 ms) is longer than that of the emitting one (around 2 ms), we investigated the possibility of observing anti-Stokes emissions at room temperature and of recording their excitation spectra, likely to unveil parasitic mechanisms depleting the 4I9/2 and 4I11/2 energy levels. Figure 5 shows the anti-Stokes luminescence issued from 4G11/2, 2H9/2 (violet), 4F3/2/4F5/2 (blue), 2H11/2, 4S3/2 (green), 4F9/2 and 2H11/2 (red) under cw Ti:Sa laser excitation at 804 nm, in Er³⁺-doped KPb2Cl5 and Tl3PbBr5 crystals. In order to understand by which mechanism(s), likely to affect the population-inversion kinetics during laser operation, all these levels get populated despite the fact that the excitation around 800 nm is non-resonant, it proved useful to record excitation spectra of these emissions as well as fluorescence decay measurements under pulsed non-resonant excitation. References [Balda et al., 2004; Ferrier et al., 2007, 2008a; Quimby et al., 2008; Tkachuk et al., 2007] address these mechanisms and the excited-state absorption cross sections around 800 nm (figure 6), the laser pumping wavelength foreseen in this system.
In Tl3PbBr5 crystals, the excited-state absorption cross sections are virtually the same in magnitude but with a systematic red shift of ~5-6 nm. Time-resolved emission spectroscopy is mandatory to quantify the different relaxation rates, since the population mechanisms of the 2H9/2 and (4F3/2, 4F5/2) levels coexist more or less as a function of the excitation wavelength. Calibration of the spectra in cross-section units by successive application of the Füchtbauer-Ladenburg and McCumber relationships permits estimation of the energy-transfer microparameters which appear in the rate equations driving the population-inversion kinetics during laser operation, and consequently optimization of the wavelength and temporal width of the pumping pulses. This investigation demonstrated that the effect of ETU (see Fig. 6) is completely negligible until a concentration of 3.3 mol % is reached in KPb2Cl5 crystals [Ferrier et al., 2007] (which corresponds to an average Er³⁺-Er³⁺ distance of ~12 Å), and that optical pumping at 800 nm, that is, at the absorption peak in KPb2Cl5:Er³⁺, does not lead to substantial excited-state absorption losses towards the 2H9/2 level (see Fig. 5). As a matter of fact, when the Er³⁺ ion concentration increases, losses by ETU also increase, but less rapidly than the population rate of the emitting 4I9/2 level, so that the resulting population inversion also increases. A striking structure-property relationship can be seen in the fact that the 4F7/2 experimental lifetime is seven to eight times longer in Tl3PbBr5 (70 µs) and in KPb2Br5 (85 µs) than in KPb2Cl5 (10 µs), which demonstrates that non-radiative multiphonon emission kinetics are much slower in bromides than in chlorides [Hömmerich et al., 2005]. Indeed, the 2H11/2 level lies just 1321 (resp. 1296) cm⁻¹ below the 4F7/2 one in KPb2Cl5 (resp. Tl3PbBr5), requiring 6.4 (resp. 9.4) phonons of the highest frequency to match this energy difference. Riedener and Güdel [Riedener & Güdel, 1997] had already observed blue and very weak anti-Stokes luminescence in RbGd2Br7:Er³⁺ crystals, spaced by a few hundreds of cm⁻¹ from the intense anti-Stokes green luminescence in RbGd2Cl7:Er³⁺ crystals. It was obtained under excitation at 977.6 nm in the bromide and at 975.1 nm in the chloride on the 4I11/2 level, followed by ESA to the 4F7/2 level. In addition, the bottleneck effect on the laser action likely to arise from the long emission lifetime of the 4I13/2 level (4.6 ms) is insignificant in pulsed regime because the latter is virtually empty. The gain cross section is displayed in figure 7 and permits the identification of the most probable laser oscillation wavelengths at its maxima.
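For reference, the calibration relations invoked here have the following standard forms (textbook expressions, not transcribed from this chapter), where I(λ) is the measured emission intensity, β the branching ratio of the transition, n the refractive index, τR the radiative lifetime and εZL the zero-line energy:

σEM(λ) = β·λ⁵·I(λ) / [8π·n²·c·τR·∫ λ′ I(λ′) dλ′]   (Füchtbauer-Ladenburg)

σEM(λ) = σabs(λ)·exp[(εZL − hc/λ) / (kB·T)]   (McCumber reciprocity)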
Pr³⁺-doped single crystals
By fitting their fluorescence data at 1.6 µm, issued from the 3F3 level, with a system of population rate equations, and by matching the decay rate to an exponential law, they determined an apparent lifetime τ2'' ≈ 900 µs (that is, 15.5 times longer than the real τ2), which accounts for the electronic feeding of the 3F3 level by the cross-relaxation mechanism (figure 1). It is worth noting the laser emission centered at 5.2 µm (figure 1), which is obtained at 130 K with the same pumping scheme, a power conversion yield of 23 % and a threshold of 2 mJ that increases strongly with temperature. The homovalent substitution of Pr³⁺ for La³⁺ cations modifies the local vibrational frequencies only weakly because neither vacancies nor interstitials form in the vicinity of the Pr³⁺ ions and the molar masses are very close (only 1.4 % relative difference). It seems that Pr³⁺ ions form pairs, explaining the efficiency of the energy transfer [Guillot-Noël et al., 2004]. This pioneering work of Bowman and his coworkers triggered interest in a systematic study of the Pr³⁺ ion fluorescence kinetics around 7.2 µm in host compounds such as KPb2Cl5, Tl3PbBr5 and CsCdBr3, all non-hygroscopic, by means of two pumping schemes: at 2 µm, with a diode-pumped Tm³⁺ laser, on the 3F2 level followed by thermalization and ETU; and at 1.5 µm, with an Er³⁺ laser, directly on the 3F4 level followed by thermalization.
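The population rate equations referred to above are not reproduced in this extract; a minimal generic sketch of the feeding effect they capture, assuming a single cross-relaxation feeding channel with rate Rf(t) into the emitting level, is:

dn2/dt = −n2/τ2 + Rf(t).

If the feeding term decays much more slowly than τ2 (here because the 3F3 level is continuously refilled by cross relaxation), the long-time fluorescence decay follows the feeding channel rather than the intrinsic lifetime, which is why an exponential fit yields an apparent lifetime τ2'' much longer than τ2.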
In the near future, such systems will also be pumped at 4.6 µm by means of high-power quantum cascade laser (QCL) diodes for the emission at 5 µm from the 3H5 level, particularly in KPb2Cl5, where the pumping strategy involving ETU seems more favourable than in CsCdBr3, not only because of the phonon energies but also because the one-dimensional crystal structure of the latter compound favours ion pairing of Pr³⁺ [Amedzake et al., 2008; An & May, 2006; Balda et al., 2002, 2003; Bluiett et al., 2008; Chukalina, 2004; Ferrier et al., 2009a, 2009b; Gafurov et al., 2002; Guillot-Noël et al., 2004; Neukum et al., 1994; Rana & Kaseta, 1983]. Absorption and emission spectra recorded as a function of temperature at low temperature (10-50 K) over a broad spectral range allow for the determination of the crystal-field sublevels of Pr³⁺ ions dissolved in KPb2Cl5, and for matching them with the eigenvalues of a parameterized crystal-field model Hamiltonian taking into account 74 sublevels of the first 12 multiplets. The 3P1 and 1I6 levels were not included in the refinement procedure because their crystal-field sublevels could not be successfully deconvoluted, even at low temperatures. Absorption cross-section peaks around 2 µm are virtually twice as high in CsCdBr3 as in Tl3PbBr5 crystals, and the absorption line profiles in CsCdBr3:Pr³⁺ differ from those observed in the two other compounds (figure 8). In the former crystal, the absorption lines are narrow and structured, whereas in the latter ones they are broad, consistently with the point-defect disorder expected for the three crystal structures [Ferrier et al., 2006a; Guillot-Noël et al., 2004; Velázquez et al., 2006b]. The absorption cross section at 1.55 µm is of the same magnitude as that at 2 µm in KPb2Cl5 and Tl3PbBr5, suggesting the possibility of direct pumping on the (3F4, 3F3) levels with an Er³⁺ Kigre laser (or any other means: fibre, IR laser diode, etc.). Crystal-field calculations with point symmetry Cs/C2 lead to a good agreement between experimental and simulated energy levels [Ferrier et al., 2008b]. Figure 1 allows for checking the many possible resonances around 2.1 µm between the 3H4, 3H5, (3F2, 3H6) and (3F4, 3F3) levels, propitious to efficient energy transfers (even at low Pr³⁺ ion concentration) which make the resolution of the emission spectra and the interpretation of the relaxation kinetics of these complex excitations difficult. Fluorescence decay measurements from the 3F3 level after excitation at 1.54 µm, carried out on weakly (~5×10¹⁸ ions·cm⁻³) and "strongly" (~5.31×10¹⁹ ions·cm⁻³) Pr³⁺-doped KPb2Cl5 crystals, revealed an exponential decay in the first case and a non-exponential one in the second, firmly establishing concentration-dependent energy transfers such as: 3H4 + 3F3 → 3H5 + 3H6 and 3H5 + 3F3 → 2×(3H6).
Fig. 8. Absorption cross sections around 1.55 and 2 µm at room temperature. The absorption cross sections around 4.6 µm are shown in figure 10.
Broad-band emission spectra are poorly structured and exhibit important emission cross sections (σEM·τR ~(2-4)×10⁻²³ cm²·s at the emission peak, virtually as much as in Nd³⁺:YAG at 1.064 µm) in the two compounds CsCdBr3 and KPb2Cl5. Several levels emit around 4.5-5 µm [Rana & Kaseta, 1983]: the (3F4, 3F3) levels around 5 µm, the (3F2, 3H6) levels around 4.5 µm and the 3H6 level around 4.9 µm. The former already gave rise to the laser operation described in section 1, but in comparison with the other levels, the branching ratio of the laser transition proves rather weak (~2.3 % (resp. ~1.6 %) versus ~14.7 % (resp. ~8.4 %) and ~48.4 % (resp. ~54.4 %) in KPb2Cl5 (resp. CsCdBr3)). The 3H6 → 3H5 transition seems interesting not only because of its branching ratio, but also because its experimental lifetime is quite long, 1 ms (3.6 ms in CsCdBr3), and thus favourable to energy storage. Nevertheless, this transition competes with ground-state absorption and with the energy-transfer processes induced by the excitation at 2 µm, likely to provoke important losses. Finally, the simplest solution remains the 3H5 → 3H4 transition which, even if it ends on the ground state, has a long lifetime, a quantum yield close to 1 (≈100 %) and broad and high absorption and emission cross sections. Such an emission could be efficiently pumped by means of QCL diodes around 4.6 µm. Although several energy levels ((3F4, 3F3), (3F2, 3H6) and 3H6) can emit around 4-5 µm, by means of the same excitation scheme as described above and with time-resolved detection it was possible to discriminate the IR luminescence associated with the 3H5 → 3H4 transition, and to calibrate the spectra in cross-section units by means of the Füchtbauer-Ladenburg formula (for Tl3PbBr5, we took the experimental lifetime, τf(3H5) = 30 ms). Absorption and emission spectra around 4.6 µm turn out to be very broad in the three compounds (figure 10). The emission cross section is about twice that found in KPb2Cl5:Er³⁺ around 4.5 µm and the radiative lifetimes are several tens of ms (σEM·τR ~(2-3)×10⁻²² cm²·s at the peak). Judd-Ofelt analysis gives τR(3H5) = 38 ms (versus τf = 5.65 ms) in KPb2Cl5 and τR(3H5) = 82 ms (versus τf = 29 ms) in CsCdBr3, once again favourable to energy storage with a view to short-pulse MIR laser operation. For a total Pr³⁺ ion concentration of typically ~10²⁰ (resp. 5×10¹⁹) cm⁻³ and a crystal length of 5 mm, the absorption rate of a pump beam at 4.6 µm would reach 21.3 (resp. 11.3) %. The emission line profiles, shown in figure 10 for the two compounds KPb2Cl5 and CsCdBr3, were checked by calibrating them with the reciprocity method, and the agreement with the previous spectra is satisfactory [Ferrier et al., 2008b, 2009b]. Gain cross sections calculated with these spectra are displayed in figure 11. In the case of KPb2Cl5 and Tl3PbBr5, the gain cross section is very broad and devoid of a maximum, which could give rise to a broad tunability over a large spectral range. In the case of CsCdBr3, the gain cross section, weaker, exhibits a peak at 4.75 µm and a rounded shape around ~5 µm, where laser operation is expected. In all cases, we observe that the gain cross section is positive as soon as β = 0.4, from 4.96 to 5.6 µm for CsCdBr3 and from 4.87 to 5.48 µm for KPb2Cl5. As this type of three-level laser requires a high population inversion ratio (β = 0.4), an efficient pumping source must be designed, which can excite the strongest absorption bands around 1.6 µm (3H4 → (3F4, 3F3)) and 2 µm (3H4 → (3F2, 3H6)) with solid-state lasers or with the now widely available Er³⁺- or Tm³⁺-doped fiber lasers.
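The population-inversion ratio β quoted above enters the gain spectra through the standard quasi-three-level expression (a general convention, not a formula transcribed from this chapter):

σgain(λ) = β·σEM(λ) − (1 − β)·σabs(λ),  with β = Nexcited/Ntotal,

so that the gain becomes positive wherever β·σEM(λ) exceeds (1 − β)·σabs(λ), consistent with the β ≈ 0.4 threshold mentioned here.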
Fig. 10. Absorption and emission cross sections between the 3H5 and 3H4 levels at room temperature. The arrow indicates an artefact peak due to the response function of a 3.5 µm filter incorporated in the setup in order to eliminate fluorescence harmonics.
Another way would consist in codoping the crystal with Yb³⁺ ions (resp. Tm³⁺ ions) and exploiting the Yb³⁺→Pr³⁺ energy transfer [Balda et al., 2003; Bluiett et al., 2008; Howse et al., 2010] thanks to high-powered, low-cost 980 nm (resp. 800 nm) diode pumping; but the high concentrations required remain hardly achievable in these compounds, on the one hand, and the energy difference between the laser and pumping wavelengths would entail an important thermal load, on the other. This leads us to consider another promising alternative: QCL diode pumping around 4.6 µm (delivering 1.8 W at room temperature [Bai et al., 2008]), that is, pumping directly into the emitting level. Preliminary calculations, which take into account neither excited-state absorption nor energy transfers, suggest that these sources should allow for reaching a sufficient population inversion ratio to obtain laser operation at reasonable pumping powers, with power conversion yields on the order of ~10 % [Ferrier et al., 2008b, 2009b]. Let us stress once again the necessity of both obtaining crystals free of microstructure-related losses and designing an efficient pumping scheme. Indeed, even if the gain cross section becomes positive at 40 % population inversion, the latter quickly saturates at 53.5 % in KPb2Cl5:Pr³⁺ for a pump beam waist at 4.6 µm of 100 µm, and one expects laser emission centered at 5.05 µm, where the gain cross section reaches a low ~1.1×10⁻²¹ cm² (figure 11).
For a crystal length of 5 mm and an excited-ion concentration of 10¹⁹ cm⁻³, the cavity round-trip gain amounts to 1.1 %, that is, virtually the same as that found in KPb2Cl5:Er³⁺. In order to explain the substantial homogeneous and inhomogeneous broadening of the absorption and emission bands in Pr³⁺:KPb2Cl5, a careful examination of the absorption line profile of the 3P0 level at 487.5 nm is mandatory [Ferrier et al., 2008b]. As a matter of fact, for this non-degenerate level one expects a single line per substitution site, provided that the second crystal-field sublevel of the ground state 3H4 does not contribute to the absorption signal shape: as it lies at 15 cm⁻¹, it should not be significantly populated at temperatures lower than 21.6 K. Figure 12 clearly shows the presence of five absorption lines and consequently as many substitution sites for Pr³⁺ ions in KPb2Cl5, giving a beginning of an explanation of the origin of the inhomogeneous band broadening, which consequently dominates the homogeneous broadening at low temperature [Ferrier et al., 2008b]. This characteristic is consistent with the structural description of the point defects [Velázquez et al., 2006b]; but if it is so, why does Er³⁺ or Eu³⁺ substitution lead mainly to one type of point defect [Cascales et al., 2005; Ferrier et al., 2007; Gruber et al., 2006; Velázquez et al., 2009]? Moreover, this characteristic confirms the advantage of using KPb2Cl5 instead of LaCl3, whose lines are much narrower and force the experimentalist to find a pumping source emitting precisely, for instance, at 1.54 µm.
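The absorption and round-trip gain figures quoted in this part can be reproduced with the usual Beer-Lambert and double-pass small-signal estimates; the short Python sketch below does so, with the caveat that the 4.6 µm absorption cross section used here (~4.8×10⁻²¹ cm²) is back-calculated for illustration rather than quoted in the text.

```python
import math

def pump_absorption(sigma_abs_cm2, n_cm3, length_cm):
    """Single-pass pump absorption, 1 - exp(-sigma * N * L) (Beer-Lambert)."""
    return 1.0 - math.exp(-sigma_abs_cm2 * n_cm3 * length_cm)

def round_trip_gain(sigma_gain_cm2, n_exc_cm3, length_cm):
    """Double-pass small-signal gain, exp(2 * sigma * N_exc * L) - 1."""
    return math.exp(2.0 * sigma_gain_cm2 * n_exc_cm3 * length_cm) - 1.0

# Pr3+:KPb2Cl5, 5 mm crystal (concentrations as quoted; sigma_abs is illustrative):
print(pump_absorption(4.8e-21, 1e20, 0.5))   # ~0.21  -> ~21 % pump absorption at 4.6 µm
print(round_trip_gain(1.1e-21, 1e19, 0.5))   # ~0.011 -> ~1.1 % cavity round-trip gain

# Er3+:KPb2Cl5 figures quoted at the end of the growth section:
print(pump_absorption(2e-21, 5e19, 0.5))     # ~0.049 -> ~5 % pump absorption at 800 nm
```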
In figure 13, we see that these energy transfers occur not only in KPb₂Cl₅ and CsCdBr₃ alike, but also in the former compound at Pr³⁺ concentrations a hundred times lower. The Pr³⁺ ion pairing in CsCdBr₃, favourable to the energy transfers described above, can be related to the decrease by a factor of 4 of the relative intensity of the emission issued from the (³F₂, ³H₆) levels in favour of that issued from the (³F₄, ³F₃) levels.
Loss mechanisms and current challenges in the growth of MIR laser halide crystals
The growth conditions of pure or RE³⁺-doped APb₂X₅ (A = K, Rb; X = Cl, Br) and Tl₃PbX₅ (X = Cl, Br) single crystals for MIR laser and/or nonlinear optical applications have been exhaustively detailed in an impressive body of work [Amedzake et al., 2008; Atuchin et al., 2011; Bekenev et al., 2011; Condon et al., 2006b; Ferrier et al., 2006a, 2006b; Gang et al., 2008; Isaenko et al., 2001; Nitsch et al., 1993, 1995a, 1995b, 2004; Nitsch & Rodová, 1999; Oyebola et al., 2010; Rodová et al., 1995; Roy et al., 2003; Singh et al., 2005; Tigréat et al., 2001; Velázquez et al., 2006a, 2006b, 2009; Voda et al., 2004; Wang et al., 2007]. Special safety conditions for the manipulation of large amounts of TlCl or TlBr powders must be followed, but this does not entail insuperable difficulties [Peter & Viraraghavan, 2005]. It is now widely established that crystal growth by the Bridgman method must be carried out in sealed silica ampoules under a low pressure (~10⁻² atm) of Cl₂ or HCl gas (HBr for bromides) to avoid bubble formation, and at low growth rates (~0.1-1 mm·h⁻¹) in axial thermal gradients of typically 15-20 K·cm⁻¹, with opening angles of the ampoule bottom lower than 120° to avoid shear-stress-induced cracks. PbX₂ (X = Cl, Br) starting materials must be purified by previous solidification runs by the Bridgman method [Basiev et al., 2004]. In KPb₂Cl₅ crystals, round-trip loss mechanisms, such as laser beam depolarization and double refraction [Condon et al., 2006b], are likely to be due to a classical kind of twinning, which occurs above a finite stress threshold, consisting of twin planes perpendicular to the [100] and [001] directions and leading to a loss of the laser power transmitted within the cavity of up to 50 %. This has seriously hindered further development of highly brilliant laser systems at 4.6 µm (and presumably higher wavelengths), since with an absorption cross section σ_abs ≈ 2×10⁻²¹ cm² at 800 nm, an Er³⁺ (⁴I15/2) ion concentration n ~ 5×10¹⁹ cm⁻³ and a typical crystal length of 5 mm, the pump beam absorption rate in a classical two-mirror cavity based on a KPb₂Cl₅:Er³⁺ amplifier is about 5 %. On the other hand, the expected round-trip gain (with a gain cross section of 1.9×10⁻²¹ cm²) barely reaches 1 %. So, the two main current challenges in terms of laser crystal growth are to increase the Er³⁺ ion content and to avoid twinning of the crystals upon cooling. The twinning patterns observed under polarized light are due to the phase transition that occurs upon cooling the crystal, at T_t = 255 °C. The mechanism of this phase transition has been definitively unveiled in recent years, simultaneously by us and by a Russian team [Mel'nikova et al., 2005, 2006; Merkulov et al., 2005; Velázquez et al., 2006b, 2009; Velázquez & Pérez, 2007]. The phase transition is driven by the K⁺ and Pb(2)²⁺ cationic ordering that induces a Pmcn to P2₁/c symmetry change upon cooling at 255 °C. Similar twinning patterns were observed in KPb₂Br₅ crystals [Mel'nikova et al., 2005], and twinning patterns associated with the crossing of another phase transition were also observed in Tl₃PbBr₅ [Singh et al., 2005]. It is possible to produce, by means of slow cooling, small (~1 to 8 mm³) untwinned single crystals, but any stress of magnitude higher than a certain threshold value will result in the formation of a twinning pattern. In order to get rid of this, one must increase the diffusion activation energy, E_a,
by increasing the radius of the A⁺ cation in APb₂X₅ (X = Cl, Br) compounds, in such a way that E_a ≫ RT_m, with T_m = 432.3 °C for KPb₂Cl₅. On passing from A = K to A = Rb, the crystal does not undergo any phase transition between the melting and room temperatures, which explains the interest of several research groups in developing RE-doped RbPb₂Cl₅ crystals. However, according to our preliminary experiments, the Er³⁺ solubility seems lower in rubidium lead chloride crystals than in potassium lead chlorides. So, it is clear that some innovative codoping of Er³⁺ ions with another ion devoid of optical activity must be found, in order to increase their solubility in rubidium lead halides [Isaenko et al., 2009b; Tarasova et al., 2011; Velázquez et al., 2009].
Conclusion
This chapter has emphasized the laser potential of the currently most important rare-earth-doped chloride and bromide laser crystals. Some of them (LaCl₃:Pr³⁺, KPb₂Cl₅:Er³⁺, KPb₂Cl₅:Dy³⁺) have already led to laser systems operating in bands II and III of the atmospheric transmission window. All these crystals were grown exclusively by the Bridgman method in sealed silica ampoules. After 15 years of research on MIR solid-state lasers carried out in Europe, the USA, Russia and, more recently, China, many improvements remain to be made in the realm of synthesis and crystal growth of the laser materials, as well as in the optimization of the pumping strategy. Both kinds of advances will require new and original solid-solution crystal engineering, in order both to get rid of the phase transition inducing twinned microstructures detrimental to efficient laser operation and to increase the solubilities of the suitable RE ions in the halide host compounds. As the gain cross sections in the spectral range where laser operation was demonstrated are relatively weak (~(1-5)×10⁻²¹ cm²), an important effort must be devoted to the annihilation of the losses, which could also be achieved by a clever shaping and functionalization of the crystals. In this perspective, 3-level laser operation by QCL diode pumping on the Pr³⁺ ³H₅ multiplet in KPb₂Cl₅ crystals could be interesting because of the low thermal load, the substantial absorption cross section, and also because the refractive index difference between twin domains should be much smaller in this spectral range.
Fig. 1. Energy level diagram of Pr³⁺ ions in the LaCl₃ host crystal. Blue arrows indicate the phonon-assisted non-radiative ETU mechanism.
Fig. 12. Ground state absorption on the ³P₀ level of Pr³⁺ ions dissolved into KPb₂Cl₅, at low temperature and as a function of temperature. Five Lorentzian functions are necessary to deconvolute the whole signal.
On the practical usefulness of the Hardware Efficient Ansatz
Variational Quantum Algorithms (VQAs) and Quantum Machine Learning (QML) models train a parametrized quantum circuit to solve a given learning task. The success of these algorithms greatly hinges on appropriately choosing an ansatz for the quantum circuit. Perhaps one of the most famous ansatzes is the one-dimensional layered Hardware Efficient Ansatz (HEA), which seeks to minimize the effect of hardware noise by using native gates and connectivity. The use of the HEA has generated a certain ambivalence arising from the fact that, while it suffers from barren plateaus at long depths, it can also avoid them at shallow ones. In this work, we attempt to determine whether one should, or should not, use a HEA. We rigorously identify scenarios where shallow HEAs should likely be avoided (e.g., VQA or QML tasks with data satisfying a volume law of entanglement). More importantly, we identify a Goldilocks scenario where shallow HEAs could achieve a quantum speedup: QML tasks with data satisfying an area law of entanglement. We provide examples of such a scenario (such as Gaussian diagonal ensemble random Hamiltonian discrimination), and we show that in these cases a shallow HEA is always trainable and that there exists an anti-concentration of loss function values. Our work highlights the crucial role that input states play in the trainability of a parametrized quantum circuit, a phenomenon that is verified in our numerics.
Introduction
The advent of Noisy Intermediate-Scale Quantum (NISQ) [1] computers has generated a tremendous amount of excitement. Despite the presence of hardware noise and their limited qubit count, near-term quantum computers are already capable of outperforming the world's largest supercomputers on certain contrived mathematical tasks [2][3][4]. This has started a veritable rat race to solve real-life tasks of interest on NISQ hardware.
One of the most promising strategies to make practical use of near-term quantum computers is to train parametrized hybrid quantum-classical models. Here, a quantum device is used to estimate a classically hard-to-compute quantity, while one also leverages classical optimizers to train the parameters in the model. When the algorithm is problem-driven, we usually refer to it as a Variational Quantum Algorithm (VQA) [5,6]. VQAs can be used for a wide range of tasks such as finding the ground state of molecular Hamiltonians [7,8], solving combinatorial optimization tasks [9,10] and solving linear systems of equations [11][12][13], among others. On the other hand, when the algorithm is data-driven, we refer to it as a Quantum Machine Learning (QML) model [14,15]. QML can be used in supervised [16,17], unsupervised [18] and reinforcement [19] learning problems, where the data processed in the quantum device can either be classical data embedded in quantum states [16,20], or quantum data obtained from some physical process [21][22][23].
Both VQAs and QML models train parametrized quantum circuits U(θ) to solve their respective tasks. One of the most important aspects, if not the most important one, in determining the success of these near-term algorithms is the choice of ansatz for the parametrized quantum circuit [24]. By ansatz, we mean the specifications for the arrangement and type of quantum gates in U(θ), and how these depend on the set of trainable parameters θ. Recently, the field of ansatz design has seen a Cambrian explosion, with researchers proposing a plethora of ansatzes for VQAs and QML [5,6]. These include variable-structure ansatzes [25][26][27][28][29], problem-inspired ansatzes [30][31][32][33][34] and even the recently introduced field of geometric quantum machine learning, where one embeds information about the data symmetries into U(θ) [35][36][37][38][39][40][41].
Perhaps the most famous, and simultaneously infamous, ansatz is the so-called Hardware Efficient Ansatz (HEA). As its name implies, the main objective of the HEA is to mitigate the effect of hardware noise by using gates native to the specific device being used. This avoids the gate overhead that arises when compiling [52] a non-native gate set into a sequence of native gates. While the HEA was originally proposed within the framework of VQAs, it is now also widely used in QML tasks. The strengths of the HEA are that it can be as depth-frugal as possible and that it is problem-agnostic, meaning that one can use it in any scenario. However, its wide usability could also be its greatest weakness, as it is believed that the HEA cannot perform well on all tasks [46] (this is similar to the famous no-free-lunch theorem in classical machine learning [53]). Moreover, it was shown that deep HEA circuits suffer from barren plateaus [42] due to their high expressibility [46]. Despite these difficulties, the HEA is not completely hopeless. In Ref. [43], the HEA saw a glimmer of hope, as it was shown that shallow HEAs can be immune to barren plateaus and thus have trainability guarantees.
From the previous, the HEA was left in a sort of gray area of ansatzes, where its practical usefulness was unclear. On the one hand, there is a common practice in the field of using the HEA irrespective of the problem one is trying to solve. On the other hand, there is a significant push to move away from the problem-agnostic HEA and instead develop problem-specific ansatzes. However, the answers to questions such as "Should we use (if at all) the HEA?" or "What problems are shallow HEAs good for?" have not been rigorously tackled.
In this work, we attempt to determine for which problems in VQAs and QML HEAs should, or should not, be used. As we will see, our results indicate that HEAs should likely be avoided in VQA tasks where the input state is a product state, as the ensuing algorithm can be efficiently simulated via classical methods. Similarly, we will rigorously prove that HEAs should not be used in QML tasks where the input data satisfies a volume law of entanglement. In these cases, we connect the entanglement in the input data to the phenomenon of cost concentration, and we show that high levels of entanglement lead to barren plateaus, and hence to untrainability. Finally, we identify a scenario where shallow HEAs can be useful and potentially capable of achieving a quantum advantage: QML tasks where the input data satisfies an area law of entanglement. In these cases, we can guarantee that the optimization landscape will not exhibit barren plateaus. Taken together, our results highlight the critical importance that the input data plays in the trainability of a model.
Variational Quantum Algorithms and Quantum Machine Learning
Throughout this work, we will consider two related, but conceptually different, hybrid quantum-classical models. The first, which we will denote as a Variational Quantum Algorithm (VQA) model, can be used to solve the following tasks.
Definition 1 (Variational Quantum Algorithms). Let O be a Hermitian operator whose ground state encodes the solution to a problem of interest. In a VQA task, the goal is to minimize a cost function C(θ), parametrized through a quantum circuit U(θ), to prepare the ground state of O from a fiducial state |ψ₀⟩.
In a VQA task one usually defines a cost function of the form C(θ) = ⟨ψ₀|U†(θ) O U(θ)|ψ₀⟩ (Eq. (1)) and trains the parameters in U(θ) by solving the optimization task arg min_θ C(θ). Then, while Quantum Machine Learning (QML) models can be used for a wide range of learning tasks, here we will focus on supervised problems.
Definition 2 (Quantum Machine Learning). Let S = {y_s, |ψ_s⟩} be a dataset of interest, where the |ψ_s⟩ are n-qubit states and the y_s associated real-valued labels. In a QML task, the goal is to train a model, by minimizing a loss function L(θ) parametrized through a quantum neural network, i.e., a parametrized quantum circuit U(θ), to predict labels that closely match those in the dataset.
The exact form of L(θ), and concomitantly the nature of what we want to "learn" from the dataset, depends on the task at hand. For instance, in a binary QML classification task where the y_s are labels, one can minimize an empirical loss function such as the mean-squared error L(θ) = (1/N) Σ_s (y_s − L_s(θ))² (Eq. (2)), with each term of the form L_s(θ) = Tr[O_s U(θ)|ψ_s⟩⟨ψ_s|U†(θ)],
Fig. 1. The architecture of a HEA seeks to minimize the effect of hardware noise by following the topology, and using the native gates, of the physical hardware. Specifically, we consider the HEA as a one-dimensional alternating layered ansatz of two-qubit gates organized in a brick-like fashion. In the figure, we show how a first layer of gates is implemented at time t1 while a second layer is implemented at time t2. At the end of the computation, a local operator is measured.
with O_s being a label-dependent Hermitian operator. The parameters in the quantum neural network U(θ) are trained by solving the optimization task arg min_θ L(θ), and the ensuing parameters, along with the loss, are used to make predictions. While VQAs and QML share some similarities, they also have some differences. Let us first discuss their similarities. First, in both frameworks one trains a parametrized quantum circuit. This requires choosing an ansatz for U(θ) and using a classical optimizer to train its parameters. As for their differences, in a VQA task as described in Definition 1 and Eq. (1), the input state to the parametrized quantum circuit U(θ) is usually an easy-to-prepare state |ψ₀⟩ such as the all-zero state, or some physically motivated product state (e.g., the Hartree-Fock state in quantum chemistry [5,54]). On the other hand, in a QML task as in Definition 2 and Eq. (2), the input states to U(θ) are taken from the dataset S, and thus can be extremely complex quantum states (see Fig. 1(a)).
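For concreteness, the following short sketch (illustrative only; the function and variable names are ours and U(θ) is supplied as a dense unitary matrix) evaluates the cost of Eq. (1) and the mean-squared-error loss of Eq. (2).

# Illustrative sketch (names are ours): the VQA cost of Eq. (1) and the
# mean-squared-error QML loss of Eq. (2), with U(theta) given as a dense unitary.
import numpy as np

def expectation(U, psi, O):
    """Tr[O U |psi><psi| U^dag] for a pure input state psi."""
    phi = U @ psi
    return float(np.real(np.vdot(phi, O @ phi)))

def vqa_cost(U, psi0, O):
    # C(theta) = <psi0| U^dag(theta) O U(theta) |psi0>
    return expectation(U, psi0, O)

def qml_mse_loss(U, dataset, O):
    # dataset: iterable of (label y_s, state |psi_s>); L = (1/N) sum_s (y_s - L_s)^2
    return float(np.mean([(y - expectation(U, psi, O)) ** 2 for y, psi in dataset]))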
Hardware Efficient Ansatz
As previously mentioned, one of the most important aspects of VQAs and QML models is the choice of ansatz for U(θ). Without loss of generality, we assume that the parametrized quantum circuit is expressed as U(θ) = ∏_l e^{−iθ_l H_l} V_l, where the {V_l} are some unparametrized unitaries, the {H_l} are traceless Pauli operators, and where θ = (θ₁, θ₂, ...). While recently the field of ansatz design has seen a tremendous amount of interest, here we will focus on the HEA, one of the most widely used ansatzes in the literature. Originally introduced in Ref. [55], the term HEA is a generic name commonly reserved for ansatzes that aim at reducing the circuit depth by choosing gates {V_l} and generators {H_l} from a native gate alphabet determined by the connectivity and interactions of the specific quantum computer being used.
As shown in Fig. 1(b), throughout this work we will consider the most depth-frugal instantiation of the HEA: the one-dimensional alternating layered HEA. Here, one assumes that the physical qubits in the hardware are organized in a chain, where the i-th qubit can be coupled with the (i − 1)-th and (i + 1)-th. Then, at each layer of the circuit one connects each qubit with its nearest neighbors in an alternating, brick-like fashion. We will denote by D the depth, or number of layers, of the ansatz. This type of alternating-layered HEA exploits the native connectivity of the device to maximize the number of operations at each layer while preventing qubits from idling. For instance, alternating-layered HEAs are extremely well suited to the IBM quantum hardware topology, where only nearest-neighbor qubits are directly connected (see e.g. Ref. [56]). We note that henceforth, when we use the term HEA, we will refer to the alternating-layered ansatz of Fig. 1.
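As a rough illustration of this brick-like structure (and not of the exact parametrization used in this work), the sketch below assembles a one-dimensional alternating-layered circuit from Haar-random two-qubit gates. Combined with the expectation helper above, one can, for example, evaluate f(θ) for random parameter draws and different input states.

# Illustrative sketch (not the exact parametrization of this work): a
# one-dimensional, brick-like alternating-layered circuit built from
# Haar-random two-qubit gates, returned as a dense 2^n x 2^n unitary.
import numpy as np
from scipy.stats import unitary_group

def two_qubit_on(gate, i, n):
    """Embed a 4x4 gate acting on the neighboring qubits (i, i+1) of n qubits."""
    return np.kron(np.kron(np.eye(2 ** i), gate), np.eye(2 ** (n - i - 2)))

def brickwork_circuit(n, depth, seed=0):
    rng = np.random.default_rng(seed)
    U = np.eye(2 ** n, dtype=complex)
    for layer in range(depth):
        start = layer % 2                     # alternate the brick pattern
        for i in range(start, n - 1, 2):
            gate = unitary_group.rvs(4, random_state=rng)
            U = two_qubit_on(gate, i, n) @ U
    return U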
Trainability of the HEA
Review of the literature
In recent years, several results about the non-trainability of VQAs/QML have been pointed out [42][43][44][45][46][47][48][49][50][51]. In particular, it has been shown that quantum landscapes can exhibit the barren plateau phenomenon, which is nowadays considered to be one of the most challenging bottlenecks for the trainability of these hybrid models. We say that a cost, or loss, function exhibits a barren plateau if its optimization landscape becomes exponentially flat with the number of qubits. When this occurs, an exponential number of measurement shots is required to resolve and determine a cost-minimizing direction. In practice, the exponential scaling in the precision due to barren plateaus erases the potential quantum advantage, as the VQA or QML scheme will have a complexity comparable to the exponential scaling of classical algorithms.
Being more concrete, let f(θ) = C(θ), L_s(θ), i.e., either the cost function C(θ) of a VQA, or the s-th term L_s(θ) in the loss function of a QML setting. For simplicity of notation, we will omit the "s" sub-index of O_s and ψ_s when f(θ) = L_s(θ). In a barren plateau, two types of concentration (or flatness) notions have been explored: deterministic concentration (all the landscape is flat) and probabilistic concentration (most of the landscape is flat). Let us first define the deterministic notion of concentration:
Definition 3 (Deterministic concentration). Let the trivial value of the cost function be f_trv := Tr[O]/2ⁿ, i.e., the expectation value of O over the maximally mixed state. We say that f(θ) is (deterministically) ϵ-concentrated if |f(θ) − f_trv| ≤ ϵ for all θ.
The above definition puts forward a necessary condition for trainability.It is clear that if f (θ) is ϵ-concentrated, then f (θ) must be resolved within an error that scales as ∼ ϵ, i.e., one must use ∼ ϵ −2 measurement shots to estimate f (θ).Thus, we define a VQA/QML model to be trainable if ϵ vanishes no faster than polynomially with n (ϵ ∈ Ω(1/ poly(n))).Conversely, if ϵ ∈ O(2 −n ), one requires an exponential number of measurement shots to resolve the quantum landscape, making the model non-scalable to a higher number of qubits.Deterministic concentration was shown in Refs.[57,58], which study the performance of VQA and QML models in the presence of quantum noise and prove that |f (θ) − f trv | ∈ O(q D ), where 0 < q < 1 is a parameter that characterizes the noise.Using the results therein, it can be shown that if the depth D of the HEA is D ∈ O(poly(n)), then the noise acting through the circuit leads to an exponential concentration around the trivial value f trv .
Let us now consider the following definition of probabilistic concentration: Definition 4 (Probabilistic concentration). Let ⟨·⟩_θ be the average with respect to the parameters θ, where the average is taken over the domains {Θ}. We say that f(θ) is probabilistically ϵ-concentrated if Var_θ[f(θ)] = ⟨(f(θ) − ⟨f(θ)⟩_θ)²⟩_θ ≤ ϵ².
Here we make an important remark on the connection between probabilistic concentration and barren plateaus. The barren plateau phenomenon, as initially formulated in Ref. [42], indicates that the cost function gradients are concentrated, i.e., that ⟨∂_ν f(θ)⟩_θ = 0 and Var_θ[∂_ν f(θ)] is exponentially small in n, where ∂_ν f(θ) := ∂f(θ)/∂θ_ν. However, one can prove that probabilistic cost concentration implies probabilistic gradient concentration, and vice versa [47]. According to Definition 4, we can again see that if ϵ ∈ O(2^{-n}), one requires an exponential number of measurement shots to navigate through the optimization landscape. As shown in Ref. [42], such probabilistic concentration can occur if the depth is D ∈ O(poly(n)), as the ansatz then becomes a 2-design [42,59,60].
From the previous, we know that deep HEAs with D ∈ O(poly(n)) can exhibit both deterministic cost concentration (due to noise) and probabilistic cost concentration (due to high expressibility [46]). However, the question remains open of whether the HEA can avoid barren plateaus and cost concentration at sub-polynomial depths. This question was answered in Ref. [43], where it was shown that HEAs can avoid barren plateaus and have trainability guarantees if two necessary conditions are met: locality and shallowness. In particular, one can prove that if D ∈ O(log(n)), then measuring global operators, i.e., O being a sum of operators acting non-identically on every qubit, leads to barren plateaus, whereas measuring local operators, i.e., O being a sum of operators acting (at most) on k qubits for k ∈ O(1), leads to gradients that vanish only polynomially in n.
A new source for untrainability
The discussions in the previous section provide a sort of recipe for avoiding expressibility-induced probabilistic concentration (see Definition 4), and noise-induced deterministic concentration: Use local cost measurement operators and keep the depth of the quantum circuit shallow enough.
Unfortunately, the previous is still not enough to guarantee trainability, as there are other, usually less explored, sources of untrainability. To understand what those are, we will recall a simplified version of the main result in Theorem 2 of Ref. [43]. First, let O act non-trivially only on two adjacent qubits, one of them being the ⌊n/2⌋-th qubit, and let us study the partial derivative ∂_ν f(θ) with respect to a parameter in the last gate acting before O (see Fig. 2). The variance of ∂_ν f(θ) is lower bounded by a quantity (Eq. (6)) that remains at worst polynomially small in n when the depth of the HEA satisfies D ∈ O(log(n)), and that is proportional to the Hilbert-Schmidt distances D_HS(O) and D_HS(ψ_{k,k′}) appearing in Eq. (7) [43]. Here, D_HS(M) is the Hilbert-Schmidt distance between M and Tr[M] 1/d_M, where d_M is the dimension of the matrix M, |ψ⟩ is the input state, and we define ψ_{k,k′} as the reduced density matrix on the qubits with index i ∈ [k, k′]. As such, the ψ_{k,k′} correspond to the reduced states of all possible combinations of adjacent qubits in the light-cone generated by O (see Fig. 2). Here, by light-cone we refer to the set of qubit indexes that are causally related to O via U(θ), i.e., the set of indexes over which U†(θ)OU(θ) acts non-trivially.
Equation (7) provides the necessary conditions to guarantee trainability, i.e., to ensure that the gradients do not vanish exponentially. First, one recovers the condition on the HEA that D ∈ O(log(n)). However, a closer inspection of the above formula reveals that both the initial state |ψ⟩ and the measurement operator O also play a key role. Namely, one needs O, as well as the reduced density matrices of |ψ⟩ on any set of adjacent qubits in the light-cone, to not be close (in Hilbert-Schmidt distance) to the (normalized) identity matrix. This is due to the fact that if any of these distances becomes exponentially small, the trainability guarantees are lost (the lower bound in Eq. (6) becomes trivial).
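The role of these Hilbert-Schmidt distances can be probed numerically. The sketch below (illustrative, with helper names of our own choosing) evaluates D_HS for two-qubit reduced states of a product state and of a near-Haar-random state; the latter gives much smaller values, anticipating the discussion of highly entangled inputs below.

# Illustrative sketch (helper names are ours): the Hilbert-Schmidt distance
# D_HS(M) = || M - Tr[M] I / d_M ||_2 evaluated on two-qubit reduced states.
# Small values signal the regime where the trainability guarantees become trivial.
import numpy as np

def d_hs(M):
    d = M.shape[0]
    A = M - (np.trace(M) / d) * np.eye(d)
    return float(np.real(np.sqrt(np.trace(A.conj().T @ A))))

def reduced_state(psi, keep, n):
    """Reduced density matrix of the n-qubit pure state |psi> on the qubits in `keep`."""
    psi = psi.reshape([2] * n)
    traced = [q for q in range(n) if q not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

n = 8
rng = np.random.default_rng(0)
product = np.zeros(2 ** n, dtype=complex); product[0] = 1.0        # |0...0>
random_psi = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
random_psi /= np.linalg.norm(random_psi)                           # ~Haar random
for name, psi in [("product state", product), ("random state", random_psi)]:
    print(name, d_hs(reduced_state(psi, keep=[3, 4], n=n)))        # large vs. small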
The previous results highlight that one should pay close attention to the measurement operator and the input states.Moreover, these results make intuitive sense as they say that extracting information by measuring an operator O that is exponentially close to the identity will be exponentially hard.Similarly, training an ansatz with local gates on a state whose marginals are exponentially close to being maximally mixed will be exponentially hard.
Here we remark that in a practical scenario of interest, one does not expect O to be exponentially close to the identity. For a VQA, one is interested in finding the ground state of O (see Eq. (1)), and as such, it is reasonable to expect that O is not trivially close to the identity [5,6]. Then, for QML there is additional freedom in choosing the measurement operators O_s in Eq. (2), meaning that one simply needs to choose an operator with non-exponentially-vanishing support on non-identity Pauli operators.
In the following sections, we will take a closer look at the role that the input state can have in the trainability of shallow-depth HEA.
Entanglement and information scrambling
Here we will briefly recall two fundamental concepts: that of states satisfying an area law of entanglement, and that of states satisfying a volume law of entanglement.Then, we will relate the concept of area law of entanglement with that of scrambling.
First, let us rigorously define what we mean by area and volume laws of entanglement.
Definition 5 (Area and volume laws of entanglement). Given a bipartition of the n qubits into Λ and Λ̄, a pure state |ψ⟩ possesses a volume law for the entanglement within Λ and Λ̄ if the entropy of entanglement S(ψ_Λ) grows extensively with the subsystem size |Λ|, where S(ρ) = −Tr[ρ log(ρ)]. Conversely, the state possesses an area law for the entanglement within Λ and Λ̄ if S(ψ_Λ) remains bounded independently of |Λ|. Note that the above definition of area vs. volume law of entanglement is nonstandard. In particular, it is completely agnostic to the geometry on which the given state resides. However, as we will show, it is the relevant one to look at for the purposes of this work.
From the definition of volume law of entanglement, the concept of scrambling of quantum information can be easily defined; the information contained in |ψ⟩ is said to be scrambled throughout the system if the state |ψ⟩ follows a volume law for the entropy of entanglement according to Definition 5 across any bipartition such that |Λ| ∈ O(log(n)).
Here we further recall that an information-theoretic measure of the quantum information that can be extracted from a subsystem Λ is I_Λ(ψ) := ∥ψ_Λ − 1_Λ/d_Λ∥_1, which quantifies the maximum distinguishability between the reduced density matrix ψ_Λ and the maximally mixed state 1_Λ/d_Λ.
The definition of scrambling of quantum information easily follows from the definition of the volume law for entanglement in Definition 5. Indeed, given a subsystem Λ, if |ψ⟩ follows a volume law across the bipartition Λ ∪ Λ̄ (i.e., if the information in |ψ⟩ is scrambled), one has the bound I_Λ(ψ) ∈ O(2^{-cn}) for some c > 0.
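The two quantities used throughout this section, the entanglement entropy S(ψ_Λ) and the scrambling measure I_Λ(ψ), can be computed directly from the reduced state. The following sketch (reusing reduced_state from the sketch above; names are ours) is one way to do so; for a Haar-random n-qubit state I_Λ(ψ) is exponentially small in n, while for a product state it is of order one.

# Illustrative sketch (reusing reduced_state from the sketch above): the
# entanglement entropy S(psi_Lambda) and the scrambling measure
# I_Lambda(psi) = || psi_Lambda - I/d_Lambda ||_1 of a subsystem Lambda.
import numpy as np

def entanglement_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def scrambling_measure(rho):
    d = rho.shape[0]
    evals = np.linalg.eigvalsh(rho - np.eye(d) / d)   # Hermitian, so the trace norm
    return float(np.sum(np.abs(evals)))               # is the sum of |eigenvalues|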
HEA and volume law of entanglement
As we show in this section, a shallow HEA will be untrainable if the input state satisfies a volume law of entanglement according to Definition 5. Before going deeper into the technical details, let us sketch the idea behind our statement, with the following warm-up example.
A toy model
Let us consider for simplicity the case when O is a local operator acting non-trivially on a single qubit, and let us recall that f(θ) = Tr[O U(θ)|ψ⟩⟨ψ|U†(θ)]. In the Heisenberg picture we can interpret f(θ) as the expectation value of the backwards-in-time evolved operator O(θ) = U†(θ)OU(θ) over the initial state |ψ⟩. Thanks to the brick-like structure of the HEA, one can see from a simple geometrical argument that the operator O(θ) will act non-trivially only on a set Λ containing (at most) 2D qubits (see Fig. 2, and also see below for the rigorous proof). Thus, we can compute f(θ) as f(θ) = Tr_Λ[O(θ) ψ_Λ], where ψ_Λ = Tr_Λ̄[|ψ⟩⟨ψ|], Λ̄ is the complement set of Λ, and where we assume |Λ| ≪ |Λ̄|. Since the HEA is shallow, the cost function is evaluated by tracing out the majority of the qubits, i.e., |Λ̄| ∼ n. If the input state |ψ⟩ is highly entangled, since |Λ| ≪ |Λ̄|, and thanks to the monogamy of entanglement [61], we can assume that there is a subset of Λ̄, say Λ′, maximally entangled with Λ, i.e., |ψ⟩ = √(1 − ϵ²) |Φ_{ΛΛ′}⟩ ⊗ |χ⟩ + ϵ|ϕ⟩, where |Φ_{ΛΛ′}⟩ is maximally entangled between Λ and Λ′, ϵ ≪ 1, and |ϕ⟩ is orthogonal to the rest. Neglecting terms in ϵ², and choosing ∥O∥_∞ = 1, this results in a function 2ϵ-concentrated around its trivial value f_trv. The previous shows that a highly entangled state, such as the one presented above which satisfies a volume law of entanglement, will lead to a landscape that exhibits a deterministic exponential concentration according to Definition 3.
Formal statement
In this section, we will present a deterministic concentration result. To begin, let us introduce the following definition: Definition 8 (Support of a Pauli operator). Let P be a Pauli operator; we define the support supp(P) = {q₁, ..., q_S} as the ordered set of natural numbers q_i labeling the qubits on which P acts non-trivially, with S the number of qubits on which P acts non-trivially.
We are finally ready to state a deterministic concentration result based on the information-theoretic measure I_Λ(ψ) for HEA circuits and for f(θ) = C(θ), L_s(θ) being the VQA cost function or the QML loss function.
Theorem 1 (Concentration and measurement operator support). Let U(θ) be a HEA with depth D, and let O = Σ_i c_i P_i be a measurement operator. Then |f(θ) − f_trv| is upper bounded (Eq. (18)) by a quantity proportional to ∥O∥_∞ and to the scrambling measure I_Λ(ψ), where |Λ| = max_i |Λ_i| is the size of the largest cluster of qubits in the light-cone of O. Moreover, the following bound on the size of Λ holds (Eq. (19)): |Λ| is controlled by the depth D and by max_i |supp(P_i)|. See App. A for the proof.
Let us discuss the implications of Theorem 1. First, we find that the difference between the training function f (θ), and its trivial value f trv depends on the information-theoretic measure of information scrambling I Λ (ψ) for |Λ| = max i |Λ i | (see Definition 6).From Eq. ( 19) it is clear that the size of Λ is determined by two factors: (i) the depth of the circuit D, and (ii) the locality of the operator O.As soon as either the depth D or max i supp(P i ) starts scaling with the number of qubits n, the bound in Eq. ( 18) becomes trivial as one can obtain information by measuring a large enough subsystem with |Λ| ∈ Θ(n).However, as explained above in Sec. 3, we already know that this regime is precluded as the necessary requirements to ensure the trainability of the HEA (and thus to ensure trainability of the VQA/QML model) are: (i) the depth of the HEA circuit must not exceed O(log(n)), and (ii) the operator O must have local support on at most O(log(n)) qubits.Hence, the trainability of the model is solely determined by the scaling of I Λ (ψ).
From Theorem 1, we can derive the following corollaries. Corollary 1. Let U(θ) be a HEA with depth D ∈ O(log(n)), and let O have support on at most O(log(n)) qubits with ∥O∥_∞ ∈ Ω(1). Then, if |ψ⟩ satisfies a volume law, or alternatively if the information contained in |ψ⟩ is scrambled, i.e., if I_Λ(ψ) ∈ O(2^{-cn}) for some c > 0, then |f(θ) − f_trv| vanishes exponentially with n. Here we can see that if the information contained in |ψ⟩ is too scrambled throughout the system, one has deterministic exponential concentration of cost values according to Definition 3.
Theorem 1 puts forward another important necessary condition for trainability and to avoid deterministic concentration: the information in the input state must not be too scrambled throughout the system.When this occurs, the information in |ψ⟩ cannot be accessed by local measurements, and hence one cannot train the shallow depth HEA.
At this point, we ask the question of how typical it is for a state to contain information scrambled throughout the system, hidden in non-local degrees of freedom, and resulting in I_Λ(ψ) ∈ O(2^{-cn}). To answer this question, we use tools from the Haar measure and show that for the overwhelming majority of states, the information cannot be accessed by local measurements, as their information is too scrambled.
Corollary 2. Let |ψ⟩ be an n-qubit Haar random state and let Λ be a subsystem with |Λ| ∈ O(log(n)). Then, with overwhelming probability over the choice of |ψ⟩, I_Λ(ψ) is exponentially small in n. See App. A for a proof. Note that henceforth, we will refer to overwhelming probability as a probability of 1 up to an exponentially (in n, the size of the system) decaying correction. In many tasks, multiple copies of a quantum state |ψ⟩ are used to predict important properties, such as entanglement entropy [62][63][64], quantum magic [65][66][67], or state discrimination [68]. Thus, it is worth asking whether a function of the form f(θ) = Tr[O U(θ)(|ψ⟩⟨ψ|)^{⊗2} U†(θ)] can be trained when U(θ) is a shallow HEA acting on 2n qubits. In the following corollary, we prove that for the overwhelming majority of states, I_Λ(ψ^{⊗2}) ∈ O(2^{-n}), and one has deterministic concentration according to Definition 3 even if one has access to two copies of a quantum state.
Corollary 3. Suppose one has access to 2 copies of a Haar random state |ψ⟩ and one computes the function f(θ) = Tr[O U(θ)(|ψ⟩⟨ψ|)^{⊗2} U†(θ)]. Let D ∈ O(log(n)) be the depth of the HEA U(θ). Then, with overwhelming probability (governed by the Levy's-lemma constant c = (18π³)⁻¹), I_Λ(ψ^{⊗2}) ∈ O(2^{-n}) and f(θ) is deterministically concentrated according to Definition 3. See App. A for a proof. Note that the generalization to more copies is straightforward.
The above results show us that there is indeed a no-free-lunch for the shallow HEA.The majority of states in the Hilbert space, follow a volume law for the entanglement entropy and thus have quantum information hidden in highly nonlocal degrees of freedom, which cannot be accessed through local measurement at the output of a shallow HEA.
HEA and area law of entanglement
The previous results indicate that shallow HEAs are untrainable for states with a volume law of entanglement, i.e., they are untrainable for the vast majority of states. The question still remains of whether shallow HEAs can be used if the input states follow an area law of entanglement as in Definition 5. Surprisingly, we can show that in this case there is no concentration, as the following result holds. Theorem 2 (Anti-concentration of expectation values). Let U(θ) be a shallow HEA with depth D ∈ O(log(n)) where each local two-qubit gate forms a 2-design on two qubits. Then, let O = Σ_i c_i P_i be the measurement composed of, at most, polynomially many traceless Pauli operators P_i having support on at most two neighboring qubits, and where Σ_i c_i² ∈ O(poly(n)). If the input state follows an area law of entanglement, then for any set of parameters θ_B and θ_A = θ_B + ê_AB l_AB with l_AB ∈ Ω(1/poly(n)), the variance over θ of f(θ_A) − f(θ_B) vanishes at most polynomially, i.e., Var_θ[f(θ_A) − f(θ_B)] ∈ Ω(1/poly(n)). See App. B for the proof.
Theorem 2 shows that if the input states to the shallow HEA follow an area law of entanglement, then the function f(θ) anti-concentrates. That is, one can expect the loss function values to differ (at least polynomially) at sufficiently different points of the landscape. This naturally suggests that the cost function does not have barren plateaus or exponentially vanishing gradients. In fact, we can prove this intuition to be true, as it can be formalized in the following result. Proposition 1. Under the assumptions of Theorem 2, the gradients of f(θ) vanish at most polynomially with the number of qubits n, i.e., the landscape does not exhibit barren plateaus. See App. B for the proof. Taken together, Theorem 2 and Proposition 1 suggest that shallow HEAs are ideal for processing states with an area law of entanglement, as the loss landscape is immune to barren plateaus. Evidently, this does not by itself mean that shallow HEAs are capable of achieving a quantum advantage. While determining whether a quantum advantage is feasible for such ansatzes is beyond the scope of this work (as it requires a detailed analysis of properties beyond the absence of barren plateaus, such as quantifying the presence of local minima), we can still further identify scenarios where a quantum advantage could potentially exist.
First, let us rule out certain scenarios where a provable quantum advantage will be unlikely.These correspond to cases where the input state |ψ⟩ satisfies an area law of entanglement but also admits an efficient classical representation [69][70][71][72][73].
The key issue here is that if the input state admits a classical decomposition, then the expectation value f (θ) for U (θ) being a shallow HEA can be efficiently classically simulated [74].For instance, one can readily show that the following result holds.
Proposition 2 (Cost of classically computing f(θ)). Let U(θ) be an alternating layered HEA of depth D, and O = Σ_i c_i P_i. Let |ψ⟩ be an input state that admits a Matrix Product State [75] (MPS) description with bond dimension χ. Then, there exists a classical algorithm that can estimate f(θ) with a complexity which scales as O((χ · 4^D)³).
The proof of the above proposition can be found in [75].From the previous theorem, we can readily derive the following corollary.
Corollary 4. Shallow-depth HEAs with depth D ∈ O(log(n)), and with an input state with a bond dimension χ ∈ O(poly(n)), can be efficiently classically simulated with a complexity that scales as O(poly(n)).
Note that Proposition 2 and its concomitant Corollary 4 do not preclude the possibility that shallow HEA can be useful even if the input state admits an efficient classical description.This is due to the fact that, while requiring computational resources that scale polynomially with n (if χ is at most polynomially large with n), the order of the polynomial can still lead to prohibitively large (albeit polynomially growing) computational resources.Still, we will not focus on discussing this fine line, instead, we will attempt to find scenarios where a quantum advantage can be achieved.
In particular, we highlight the seminal work of Ref. [76], which indicates that while states satisfying an area law of entanglement constitute just a very small fraction of all the states (which is expected from the fact that Haar random states -the vast majority of states-satisfy a volume law), the subset of such area law of entanglement states that admit an efficient classical representation is exponentially small.This result can be better visualized in Fig. 3.The previous gives hope that one can achieve a quantum advantage with area law classically-unsimulable states.
Implications of our results
Let us here discuss how our results can help identify scenarios where shallow HEA can be useful, and scenarios where they should be avoided.The vast majority of states satisfy a volume law, and hence a shallow HEA cannot be used to extract information from them.From the set of states satisfying an area law, only a very small subset admits an efficient classical representation.For these states, the effect of a shallow HEA can be efficiently simulated.As such, there exists a Goldilocks regime where HEA can potentially be used to achieve a quantum advantage: non-classically-simulable area law states.
Implications to VQAs
As indicated in Definition 1, in a VQA one initializes the circuit in some easy-to-prepare fiducial quantum state |ψ₀⟩. For instance, in a variational quantum eigensolver [7] quantum chemistry application such an initial state is usually the unentangled mean-field Hartree-Fock state [54]. Similarly, when solving a combinatorial optimization task with the quantum approximate optimization algorithm [9], the initial state is an equal superposition of all elements in the computational basis, |+⟩^{⊗n}. In both of these cases, the initial states are separable, satisfy an area law, and admit an efficient classical decomposition. This means that while the shallow HEA will be trainable, it will also be classically simulable. This situation will arise in most VQA implementations, as it is highly uncommon to prepare non-classically-simulable initial states. From the previous, we can see that shallow HEAs should likely be avoided in VQA implementations if one seeks to find a quantum advantage.
Implications to QML
In a QML setting, by contrast, the input states are problem-dependent, implying that the usability of the HEA depends on the task at hand. Our results indicate that HEAs should be avoided when the input states satisfy a volume law of entanglement, or when they follow an area law but also admit an efficient classical description. In fact, it is clear that while the HEA is widely used in the literature, most cases where it is employed fall within the cases where the HEA should be avoided [45]. As such, we expect that many proposals in the literature should be revised. However, the trainability guarantees pointed out in this work narrow down the scenarios where the HEA should be used, and leave the door open for using shallow HEAs in QML tasks to analyze non-classically-simulable area-law states. In the following section, we give an explicit example, based on state discrimination between area-law states having no efficient MPS decomposition, with a possibly achievable quantum advantage.
Random Hamiltonian discrimination
General framework
In this section, we present an application of our results in a QML setting based on Hamiltonian Discrimination.The QML problem is summarized as follows: the data contains states that are obtained by evolving an initial state either by a general Hamiltonian or by a Hamiltonian possessing a given symmetry.The goal is to train a QML model to distinguish between states arising from these two evolutions.In the example below, we show how the role of entanglement governs the success of the QML algorithm.
Let us begin by formally stating the problem. Consider two Hamiltonians H_G, H_S, and a local symmetry operator S, with the dataset built following Algorithm 1. We consider the case when U(θ) is a parametrized shallow HEA, and O a local operator measured at the output of the circuit. We define the expectation value L_s(H, θ, t) = Tr[O U(θ) e^{−iHt}|ψ_s⟩⟨ψ_s|e^{iHt} U†(θ)] (Eq. (24)). In the following, we will drop the superscript in |ψ_s⟩ ∈ S to lighten the notation, unless necessary. Then, the goal is to minimize the empirical loss function L(θ, t) = (1/N) Σ_s (y_s − L_s(H, θ, t))², where N is the size of the dataset S. There are two necessary conditions for the success of the algorithm: (i) the parameter landscape is not exponentially concentrated around its trivial value, and (ii) there exists θ₀ such that the model outputs are different for data in distinct classes. For instance, this can be achieved if U†(θ₀)OU(θ₀) = S, as here L_s(H_S, θ₀, t) = 1 for any s such that |ψ_s⟩ ≡ |ψ^{H_S}_s⟩_t. Then, one also needs L_s(H_G, θ₀, t) to not be close to one with high probability. Note that if the symmetry S is a local operator, and O is chosen to be local, there are cases in which a shallow-depth HEA can find the solution U†(θ₀)OU(θ₀) = S. Such an example is shown below.
Gaussian Diagonal Ensemble Hamiltonian discrimination
Let us now specialize the example to an analytically tractable problem.We first show how the growth of the evolution time t, and thus the entanglement generation, affects the HEA's ability to solve the task.Then, we show that there exists a critical time t * for which the states in the dataset satisfy an area law, and thus for which the QML algorithm can succeed.Since classically simulating random Hamiltonian evolution is a difficult task, the latter constitutes an example where a QML algorithm can enjoy a quantum speed-up with respect to classical machine learning.
Let H_G be a random Hamiltonian, i.e., H_G = Σ_i E_i Π_i, where the Π_i are projectors onto random Haar states, and the E_i are normally distributed around 0 with standard deviation 1/2 (see App. C for additional details). This ensemble of random Hamiltonians is called the Gaussian Diagonal Ensemble (GDE), and it is the simplest, non-trivial example where our results apply. In Fig. 5 we explicitly show how the time evolution under such a Hamiltonian can be implemented in a quantum circuit. Generalizations to more widely used ensembles, such as the Gaussian Unitary Ensemble (GUE), Gaussian Symplectic Ensemble (GSE), Gaussian Orthogonal Ensemble (GOE), or the Poisson Ensemble (P), are straightforward. We refer the reader to Refs. [77,78] for more details on these techniques.
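As an illustration (with names of our own choosing), a GDE Hamiltonian and the corresponding time evolution can be sampled as follows; this is only a dense-matrix sketch and not the circuit construction of Fig. 5.

# Illustrative sketch (dense-matrix, not the circuit of Fig. 5): sampling a
# Gaussian Diagonal Ensemble Hamiltonian H_G = sum_i E_i Pi_i with a Haar-random
# eigenbasis and eigenvalues E_i ~ N(0, 1/2), and evolving a state for a time t.
import numpy as np
from scipy.stats import unitary_group
from scipy.linalg import expm

def gde_hamiltonian(n, seed=0):
    rng = np.random.default_rng(seed)
    d = 2 ** n
    V = unitary_group.rvs(d, random_state=rng)     # columns: Haar-random eigenvectors
    E = rng.normal(loc=0.0, scale=0.5, size=d)     # eigenvalues, standard deviation 1/2
    return V @ np.diag(E) @ V.conj().T

def evolve(H, psi, t):
    return expm(-1j * t * H) @ psi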
Consider a bipartition of the n qubits, i.e., A ∪ B such that |A| ≪ |B|. Let H_S be a random Hamiltonian commuting with all the operators on a local subsystem A, i.e., [H_S, P_A] = 0 for all P_A. We can choose H_S as H_S = 1_A ⊗ H_B, with H_B belonging to the GDE on the subsystem B. Let H_G be a random Hamiltonian belonging to the GDE ensemble on the subsystem A ∪ B. Since the Hamiltonian H_S commutes with all the operators in A, we choose the symmetry S to be S ≡ P_A ⊗ 1_B, i.e., a Pauli operator with local support on A. To build the dataset S, we thus identify the vector space containing all the eigenvectors with eigenvalue 1 of P_A, V_{P_A} = span{|z⟩ ∈ (C²)^{⊗n} | P_A|z⟩ = |z⟩}, and follow Algorithm 1. Note that, with this choice, dim(V_{P_A}) = 2^{|A|−1}, and thus we take |A| ≥ 2. The QML task is to distinguish states evolved in time by H_G or by H_S. Let us choose O to be a Pauli operator having support on a local subsystem. Then, the following proposition holds: Proposition 3. Let L_s(H, θ, t) be the expectation value defined in Eq. (24), for H ∈ {H_G, H_S}. If there exists θ₀ such that U†(θ₀)OU(θ₀) = S, then L_s(H_S, θ₀, t) = 1, while the average of L_s(H_G, θ₀, t) is exponentially suppressed in t. See App. C for the proof. Notably, the symmetry of H_S ensures that if the HEA is able to find θ₀, then the output L_s(H_S, θ₀, t) = 1 is distinguishable from the expected value of L_s(H_G, θ₀, t), which is exponentially suppressed in t. While in principle it is possible to minimize the loss function L(θ, t) for any t, the following theorem states that as the time t grows, the parameter landscape gets more and more concentrated, according to Definition 3.
Theorem 3 (Concentration of loss for GDE Hamiltonians). Let L_s(H, θ, t) be the expectation value defined in Eq. (24) for H ∈ {H_G, H_S}. For random GDE Hamiltonians, one has a deterministic concentration bound on L_s(H, θ, t) that tightens as the evolution time t grows. See App. C for the proof and the explicit bound. Note that the above concentration bound holds for both H_S and H_G, provided that |A| ≪ |B|. From the above, one can readily derive the following corollary: Corollary 5. Let L_s(H, θ, t) be the expectation value defined in Eq. (24) for H ∈ {H_G, H_S}. Then, for t ≥ 4α√n/log₂(e), α > 0, and ϵ = e^{−βn}, the loss is deterministically ϵ-concentrated with overwhelming probability. Taken together, Theorem 3 and Corollary 5 provide a no-go theorem for the success of the QML task, as they indicate that beyond t ∼ √n one encounters deterministic concentration with overwhelming probability. Crucially, the role of the entanglement generated by H ∈ {H_G, H_S} is hidden in the variable t of the bound in Theorem 3. Indeed, as shown in Refs. [77,78], the entanglement for random GDE Hamiltonians grows monotonically with t.
Fig. 5. The E_i are random numbers sampled from N(0, 1/2) and correspond to the eigenvalues of the GDE Hamiltonian.
While the previous results indicate that a HEA-based QML model will fail on the random Hamiltonian discrimination QML task for t ∼ √n (due to the high levels of entanglement), this does not preclude the possibility of the model succeeding for smaller evolution times. Notably, here we can show that for t ∈ O(log(n)) the conditions are ideal for a quantum advantage: the states in the dataset will satisfy an area law of entanglement, and since GDE Hamiltonians are built out of a very deep random quantum circuit, their time evolution can be classically hard. In particular, the following result holds.
Proof.The corollary easily descends from Proposition 4, Theorem 2 and Proposition 1.
As shown above, for t ∈ O(log(n)), the states generated by the time evolution of GDE Hamiltonians obey an area law of entanglement with overwhelming probability. Thanks to Theorem 2, we also have that the loss function L(θ, t) anti-concentrates, giving strong evidence for the success of the Hamiltonian Discrimination QML task.
Numerical simulations
In this section, we present numerical results which further explore the connection between the entanglement in the input state, and the phenomenon of gradient concentration.In particular, we are interested in showing how the parameter landscape of a QML problem becomes more and more concentrated as the entanglement in the input state grows.
To create n-qubit states with different amounts of entanglement, we will consider time-evolved states of the form |ψ_t⟩ = e^{−iHt}|ψ₀⟩, where |ψ₀⟩ is a random product state, and where H is the Heisenberg model with first-neighbor interactions and periodic boundary conditions (n + 1 ≡ 1), H = Σ_{i=1}^{n} (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}).
Here, σ_i with σ = X, Y, Z denotes a Pauli operator acting on qubit i. As we will see below, as t increases, so does the entanglement in |ψ_t⟩.
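A minimal numerical sketch of this setup is shown below: it builds the periodic Heisenberg Hamiltonian, evolves a random product state, and tracks the entropy of a two-qubit reduced state as a function of t (reusing reduced_state and entanglement_entropy from the sketches above). The specific times and system size are arbitrary choices of ours.

# Illustrative sketch of the numerics described here: periodic-boundary
# Heisenberg Hamiltonian, time evolution of a random product state, and the
# entropy of a two-qubit reduced state as a function of t.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(pauli, i, n):
    """n-qubit operator acting as `pauli` on qubit i and as identity elsewhere."""
    out = np.array([[1.0 + 0j]])
    for q in range(n):
        out = np.kron(out, pauli if q == i else np.eye(2, dtype=complex))
    return out

def heisenberg(n):
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n):                              # periodic boundary: n + 1 == 1
        j = (i + 1) % n
        for P in (X, Y, Z):
            H += op_on(P, i, n) @ op_on(P, j, n)
    return H

n = 6
rng = np.random.default_rng(0)
psi0 = np.array([1.0 + 0j])
for _ in range(n):                                  # random product state
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi0 = np.kron(psi0, v / np.linalg.norm(v))

H = heisenberg(n)
for t in (0.0, 0.5, 1.0, 2.0):
    psi_t = expm(-1j * t * H) @ psi0
    rho2 = reduced_state(psi_t, keep=[0, 1], n=n)   # helper from the earlier sketch
    print(t, entanglement_entropy(rho2))            # entropy grows with t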
Next, we will consider a learning task where we want to minimize a cost function of the form L(θ) = Tr[O_Z U(θ)|ψ_t⟩⟨ψ_t|U†(θ)], where O_Z = Σ_i Z_i (i.e., O_Z is a sum of 1-local operators), and where U(θ) is a shallow HEA. Specifically, we employ the HEA architecture shown in Fig. 6, which is composed of an initial layer of general single-qubit rotations, followed by two-qubit gates on alternating pairs of qubits. The two-qubit gates are themselves composed of a CNOT gate followed by general single-qubit gates on each qubit.
In Figs. 7(a,b) we show the averaged norm of the gradient ∂_µL(θ), i.e., ∥∇L∥_∞ ≡ max_µ |∂_µL(θ)|, as a function of the evolution time t used to prepare the input state of the HEA, for different problem sizes. Gradients are computed by averaging over 400 random product states |ψ₀⟩, and two sets of random parameters in the HEA for each initial state. Here we can see that for small evolution times the cost exhibits large gradients independently of the system size. This result is expected, as we recall that in the limit t → 0 the input state |ψ₀⟩ is a tensor-product state, which, along with 1-local measurements and the HEA structure, leads to gradients whose norms are independent of n. As t increases, we can see that the gradient norm decreases until a saturation value G_sat is reached. Moreover, we can see that the value of G_sat depends on the number of qubits in the system. In fact, as shown in Fig. 7(c), G_sat decays polynomially with n. We can further understand this behavior by noting that as t increases, the time-evolution exp(−iHt) produces larger amounts of entanglement in the input state, and concomitantly smaller gradients (as indicated by our main results above). To see that this is the case, we compute the rescaled entropy S(ρ₂) = −Tr[ρ₂ log(ρ₂)]/2, where ρ₂ is the reduced state on two nearest-neighbor qubits, for a sufficiently large time t such that G_sat is achieved. Results are shown in Fig. 7(c), which shows a positive correlation between the decay of the gradients and the increase in reduced-state entropy. Thus, the more entanglement in the input state, the smaller the gradients, and the more concentrated the landscape.
Discussion and conclusions
Understanding the capabilities and limitations of VQA and QML algorithms is crucial to developing strategies that can be used to achieve a quantum advantage.One of the most relevant ingredients in ensuring the success of a VQA/QML model is the choice of ansatzes for the parametrized quantum circuit.In this work, we focused our attention on the shallow HEA, as it can avoid barren plateaus, and since it is perhaps one of the most NISQ-friendly ansatzes.Currently, the HEA is widely used for a plethora of problems, irrespective of whether it is well-fit for the task and data at hand.In a sense, the HEA is still a "solution in search of a problem" as there was no rigorous study of the tasks where it should, or should not be used.In this work, we establish rigorous results, showing how, and in which contexts, HEAs are (and are not) useful and can eventually provide a signature of quantum advantage.
We first review relevant results from the literature, discussing the notion of cost and loss function concentration and necessary conditions for trainability of HEAs -i.e.shallowness and locality of measurements.Here we highlight the existence of a new source of untrainability of shallow HEAs: the entanglement of the input states.On one hand, we proved that HEAs are untrainable if the input states satisfy a volume law of entanglement, as the cost function is deterministically concentrated around its trivial value.On the other hand, if the input states follow an area law of entanglement, the HEA is trainable.In fact, here we prove that the loss function anti-concentrates, i.e., it differs, at least polynomially, at sufficiently different points of the parameters landscape.
While the role of entanglement in the trainability of VQA and QML models has been explored in Refs.[50,51], the results found therein are conceptually different from ours.Namely, in these references the authors point out that deep parametrized quantum circuit ansatzes create volume law for the entanglement entropy, making the parameter landscape exponentially flat in the number of qubits and thus giving rise to entanglementinduced barren plateaus.As such, these results study the entanglement created during the circuit, but not that already present in the input states.For instance, the shallow HEA cannot create volume law of entanglement, yet, it is still untrainable if such entanglement exists in the input state.Hence, our work provides a new source of untrainability for certain datasets.
Next, we also analyzed the still open question of whether the HEA is able to achieve a quantum advantage in a VQA/QML setting. While the full answer is far beyond the scope of the paper, we identified regimes in which the HEA can or cannot provide quantum speed-ups. Here we proved that, thanks to the shallowness of HEAs, input states with bond dimensions at most polynomial in the number of qubits can be simulated with only a polynomial overhead on classical machines. This result rules out the use of the HEA in VQAs: as many examples show, the typical input state for a VQA is an easy-to-prepare product state, thus allowing an efficient classical decomposition. Conversely, for QML algorithms the question still remains open: the portion of area-law states admitting an efficient classical description is exponentially small [76]. While this is not a guarantee for achieving quantum advantage, this is definitely the window to look at for applications beyond those solvable by classical capabilities.
We indeed push forward the latter intuition and provide an example to which our results apply. Namely, we present a Hamiltonian discrimination QML problem, where initial product states are evolved in time by two types of Hamiltonians, one possessing a given local symmetry, and one completely general. We show that, while the task becomes less and less feasible as the evolution time grows (since entanglement grows in time), for a given time window (scaling logarithmically with the number of qubits) such states possess an area law of entanglement, ensuring the absence of barren plateaus in the loss landscape.
Such an example serves as a pivotal one for future, and hopefully fruitful, usages of the HEA. Our recipe to prepare a barren-plateau-free QML problem is the following: consider quantum input data that is entangled enough, pass it through a shallow-depth circuit, and then measure with local operators. Importantly, if one wishes to use this scheme for classical data, it will be extremely important to find data-embedding schemes that lead to area-law states, but which are not themselves classically simulable. We expect that the search for such entanglement-tamed embeddings could be a fruitful research direction.
There are still several directions to be explored after the analysis of the present work. We indeed emphasize that, while for unstructured HEAs this paper definitely rules out volume-law states as input states of QML algorithms with HEA ansatzes, there is the fascinating possibility that, with even a little prior knowledge of the input states, a problem-aware HEA could avoid exponential concentration in the parameter landscapes. Indeed, the choice of some structured, problem-dependent ansatz can avoid barren plateaus: a prominent example is Geometric Quantum Machine Learning, which exploits the geometric symmetries of the input dataset to design symmetry-aware parametrized ansatzes [35][36][37][38][39][40][41].
A.1 Proof of Theorem 1
Let O = Σ_i c_i P_i, where c_i := Tr[OP_i]/2ⁿ and Tr[P_i P_j] = 2ⁿ δ_ij. Then, we define the support supp(P_i) as the ordered set of qubits containing non-identity operators in the tensor-product structure of each Pauli operator P_i = ⊗_j σ_j, where σ_j is a single-qubit Pauli operator acting on the j-th qubit. Explicitly, supp(P_i) = {q₁, ..., q_{S_i}} is an ordered subset of natural numbers labeling the qubits on which P_i acts non-trivially, and S_i labels the number of qubits on which P_i acts, see Fig. 8. From this, we also define the support of O as the ordered subset of qubits given by the union of the supports of each P_i, i.e., supp(O) = ∪_i supp(P_i). Moreover, we note that |c_i| ≤ ∥O∥_∞, which follows from Hölder's inequality and a counting argument. Let us define the pairwise relative distance between the qubits q_k and q_{k+1} belonging to supp(P_i). It is then possible to define clusters, i.e., subsets of contiguous qubits whose pairwise relative distance is less than 2D. The definition can be done recursively, yielding clusters of qubits Λ₁^{(i)}, ..., Λ_{L_i}^{(i)} for any i = 1, ..., N, such that 1 ≤ L_i ≤ S_i. Note that we can write the operator O as a sum of cluster-supported operators O_α. Consequently, we can rewrite the cost function by expanding O in this form, and evaluate the distance between f(θ) and f_trv in terms of the reduced states ψ_Λ ≡ Tr_Λ̄|ψ⟩⟨ψ|: in the first inequality we use the triangle inequality for the absolute value, while in the second one we use |Tr[AB]| ≤ ∥A∥_∞∥B∥_1 and the fact that unitary operators preserve the trace and any Schatten p-norm. In what follows we will use the following lemma and corollary, which we will prove at the end of this Appendix (see App. D).
Lemma 1. Let U(θ) be a HEA with depth D as in Fig. 1(b), and let O = Σ_i c_i P_i = Σ_i c_i ∏_α O_α^(i) be an operator as in Eq. (35). Then, for any O_α^(i), the support of U(θ) O_α^(i) U†(θ) is bounded in terms of the cluster C_α^(i) and the depth D. The following corollary descends from the above lemma and the clustering of qubits.
A.2 Proof of Corollary 1
By Theorem 1, if the information is scrambled according to Definition 6, then, since max_i supp(P_i) ∈ O(log(n)), there exists a constant c′ such that n^{max_i supp(P_i)} ∈ O(n^{c′ log(n)}), and therefore the claimed bound follows, where we crudely upper-bounded 3n < n² (for n > 3). The resulting estimate holds for any k ∈ Ω(1). Choosing k = c/2 for simplicity, we finally reach the final bound, where we considered ∥O∥_∞ ∈ Ω(1).
A.3 Proof of Corollary 2
The proof of Corollary 2 follows from the proof in the work of Popescu et al. [79]. Here we report it for completeness.
Let ψ_Λ = Tr_Λ̄[|ψ⟩⟨ψ|], consider a complete set of (Hermitian) observables, and decompose the state on this complete set. Consider the 1-norm distance between ψ_Λ and the completely mixed state 1_Λ d_Λ^{-1}. In the first inequality, we have used the norm equivalence, while in the second we have expanded ψ_Λ as in Eq. (48). The third inequality follows by simply taking the maximum over i in the summation. Since the expectation values are Lipschitz functions [79], by exploiting Levy's lemma one can easily prove a concentration bound with C = (18π³)^{-1}. By choosing ϵ = 2^{-n/3}, one gets the desired estimate. To conclude, in virtue of Theorem 1, we can bound I_{Λ_i} < 2^{|Λ_i|} 2^{-n/3} with overwhelming probability. Thus, since D ∈ O(log(n)), there exist two constants c, c′ such that the stated estimate holds. To conclude, we can loosely write that the quantity is exponentially small for any constant k; choosing k for simplicity, we finally reach the final bound, holding with overwhelming probability, where we bound |Λ| < n.
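The concentration exploited here is easy to observe numerically on small systems. The sketch below is our own illustration (not part of the original proof): it samples Haar-random states and measures how far their reduced density matrices on a small subsystem Λ are from the completely mixed state; the distance is already small for a dozen qubits and shrinks roughly as 2^{-n/2}.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_state(n):
    # A Haar-random pure state on n qubits: a normalized complex Gaussian vector.
    d = 2 ** n
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def reduced_dm(psi, n, k):
    # Reduced density matrix on the first k qubits (the subsystem Lambda).
    m = psi.reshape(2 ** k, 2 ** (n - k))
    return m @ m.conj().T

def trace_distance_to_mixed(rho):
    d = rho.shape[0]
    eigs = np.linalg.eigvalsh(rho - np.eye(d) / d)
    return 0.5 * np.abs(eigs).sum()

for n in (8, 10, 12):
    dists = [trace_distance_to_mixed(reduced_dm(haar_state(n), n, 2)) for _ in range(20)]
    print(n, np.mean(dists))   # decreases roughly as 2**(-n/2)
```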
A.4 Proof of Corollary 3
Now suppose one has as input k copies of a Haar random state |ψ⟩ on n qubits, denoted |Ψ^(k)⟩ = |ψ⟩^⊗k, and denote by d = 2^n the dimension of the Hilbert space in which |ψ⟩ lives. Let us decompose its reduced density matrix in a Hermitian operator basis P_i. Let us prove that each expectation value on |Ψ^(k)⟩ is a Lipschitz function with respect to |ψ⟩. Given a second state, in the first inequality we used the fact that ∥P_i∥_∞ = 1. In the second line, we used k times the following trick: denote ψ = |ψ⟩⟨ψ| and use ∥ψ^⊗k∥_1 = 1 for any k. Thanks to the fact that Tr[Ψ_Λ P_i] is Lipschitz, and denoting by ⟨Tr[Ψ_Λ^(k) P_i]⟩ the Haar average over the input state, we obtain a concentration bound with C = (18π³)^{-1}. We need to bound the probability that I_Λ(ψ) ≥ ϵ. Using the same trick as in Corollary 2, and noting an inequality involving the swap operator T_{Λ_{12}}, we find that for ϵ = 2^{-n/3} the desired bound holds, where we used an asymptotic estimate. The calculation becomes more intricate for k > 2. However, assuming that Λ is symmetric between the copies of |ψ⟩, with |Λ| = kλ qubits, and setting d_Λ ≡ 2^{kλ}, we can use a formula in which the sum is over the conjugacy classes c(π) of the symmetric group S_k. As evidenced by that formula, the order of the trace depends on the conjugacy class c: for example, for c(π) = (12), (23), . . . one has c = 1, while for c(π) = (1234), (1243), . . . one has c = 3, and so on. Thus, we obtain the claimed bound provided that ϵ > O(2^{-n}). By applying Levy's lemma, thanks to the typicality of Tr[Ψ_Λ], and choosing ϵ = 2^{-n/3}, the result follows.

B Proof of Theorem 2 and Proposition 1
B.1 Proof of Theorem 2
Let us consider the following quantity, where θ_A and θ_B are such that θ_A = θ_B + ê_AB l_AB with l_AB ∈ Ω(1/poly(n)). Our goal will be to show that the variance of this quantity is at most polynomially vanishing. First, we will use the notation θ_n ≡ θ_A and θ_0 ≡ θ_B. Moreover, it is useful to further divide the path in parameter space into a sequence of single-parameter changes, with m ∈ O(poly(n)). Here we have defined the intermediate points in terms of ê_i, a vector with a single one, and where at least one l_i is such that sin²(l_i) ∈ Ω(1/poly(n)). Note that we can guarantee both m ∈ O(poly(n)) and the existence of such an l_i from the fact that we have at most a polynomial number of parameters and that θ_A and θ_B are at most polynomially close. The previous observation allows us to write the difference as a telescopic sum, where we defined Δf_{i+1,i} := f(θ_{i+1}) − f(θ_i). Taking the variance of Eq. (75) we obtain Eq. (76). In what follows we will use the following lemmas, which we will prove at the end of this appendix (see App. D).
Let us now go back to Eq. (76). Note that, in the second line, we have used Lemmas 2 and 3. Thus, using the fact that θ_{i+1} = θ_i + ê_i l_i, and leveraging the parameter-shift rule for computing gradients [81,82], we obtain the corresponding expression, where we have defined θ̄_i = θ_{i-1} + ê_i l_i/2. Then, recalling that m ∈ O(poly(n)) and that there exists an l_i such that sin²(l_i) ∈ Ω(1/poly(n)), we can use Lemma 4 to conclude.
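As a minimal, self-contained illustration of the parameter-shift rule invoked above (a single-qubit toy example, not the HEA itself): for f(θ) = ⟨0|R_Y(θ)† Z R_Y(θ)|0⟩ = cos θ, the two evaluations shifted by ±π/2 reproduce the exact derivative.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    # Single-qubit rotation about Y.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def f(theta):
    # Expectation value <0| RY(theta)^dagger Z RY(theta) |0> = cos(theta).
    psi = ry(theta) @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))

theta = 0.7
shift_grad = 0.5 * (f(theta + np.pi / 2) - f(theta - np.pi / 2))
print(shift_grad, -np.sin(theta))   # both equal -sin(theta), the exact gradient
```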
B.2 Proof of Proposition 1
Here we note that in the previous section, where we proved Theorem 2, we have shown that the absence of barren plateaus (through Lemma 4) implies anti-concentration of the cost values. In this section, we prove the converse. First, we note that by anti-concentration we mean that for any set of parameters θ_B and θ_A = θ_B + ê_AB l_AB with l_AB ∈ Ω(1/poly(n)) the bound in Eq. (23) holds. Then, let us use the parameter-shift rule, with θ_± = θ ± ê_ν π/2, where ê_ν is a unit vector with a one at the ν-th entry. Since the difference between θ_+ and θ_− is O(1), we can use Eq. (85) to find the desired lower bound, which completes the proof of Proposition 1.
C Isospectral twirling and proofs of Proposition 3, Theorem 3 and Proposition 4
In this section, we aim to prove Proposition 3, Theorem 3, Corollary 5, and Proposition 4.
C.1 Isospectral twirling
We first review the useful notion of the Isospectral twirling introduced in Refs. [77,78]. Consider a Hamiltonian H written in its spectral decomposition H = Σ_k E_k Π_k, where Π_k are its eigenprojectors and E_k its eigenvalues. Consider the time evolution generated by H, i.e., W(t) = exp(−iHt) = Σ_k e^{−iE_k t} Π_k. Denote by U(n) the unitary group on n qubits, and define the ensemble of isospectral unitary evolutions obtained by conjugating W(t) with Haar-random G ∈ U(n), whose representative element is H. The Isospectral twirling of order k, denoted R^(2k)(W(t)), is the 2k-fold Haar channel of the operator W^{⊗k,k}(t) := W^{⊗k}(t) ⊗ W^{†⊗k}(t). Using the Weingarten functions [83][84][85], one can compute the isospectral twirling in terms of the symmetric group S_2k of order 2k, where T_π is the unitary representation of the permutation π ∈ S_2k, together with spectral coefficients which govern the behavior of many figures of merit of random isospectral Hamiltonians, as noted in [77,78]. While the expression for the Isospectral twirling of order k ≥ 2 is cumbersome to report, we do recall the expression for k = 1, where T_12 is the swap operator between the two copies of the Hilbert space. In the Gaussian diagonal ensemble (GDE), all the eigenvalues E_k are independent identically distributed Gaussian random variables with zero mean and standard deviation 1/2. Define the normalized form factor c̃_2k(t) := c_2k(t)/d^{2k}; the average of the normalized 2k-spectral form factors c̃_2k(t) can then be computed in closed form. Let us now set up a notation useful throughout the following proofs. Let F[W(t)] be a scalar function of the unitary evolution W(t) = exp(−iHt). We denote by E_GDE the Isospectral twirling of the scalar function F[W(t)] followed by the average over the GDE ensemble of Hamiltonians. In the following section, we use the techniques introduced above in order to prove Proposition 3.
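The GDE spectral form factors referred to here are easy to reproduce numerically. The following sketch is our own illustration (not taken from Refs. [77,78]): it draws i.i.d. Gaussian spectra with zero mean and standard deviation 1/2, estimates c_2(t) = ⟨|Tr W(t)|²⟩ by Monte Carlo, and compares the result with the closed form that follows directly from the independence of the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)

def c2_gde_mc(t, n, samples=2000):
    # Monte Carlo estimate of c_2(t) = E |Tr W(t)|^2 for the GDE ensemble:
    # eigenvalues E_k i.i.d. N(0, sigma = 1/2), W(t) = sum_k exp(-i E_k t) Pi_k.
    d = 2 ** n
    E = rng.normal(0.0, 0.5, size=(samples, d))
    tr = np.exp(-1j * E * t).sum(axis=1)
    return np.mean(np.abs(tr) ** 2)

def c2_gde_exact(t, n, sigma=0.5):
    # For i.i.d. eigenvalues: E|Tr W|^2 = d + d(d-1)|phi(t)|^2,
    # with phi(t) = E[exp(-i E t)] = exp(-sigma^2 t^2 / 2).
    d = 2 ** n
    return d + d * (d - 1) * np.exp(-(sigma ** 2) * t ** 2)

n = 4
for t in (0.0, 1.0, 2.0, 5.0):
    print(t, c2_gde_mc(t, n), c2_gde_exact(t, n))
# Dividing by d**2 gives the normalized form factor used in the text.
```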
C.3 Proof of Theorem 3
To prove the theorem, we use the well-known Markov inequality: let x be a non-negative random variable with average µ; then Pr(x ≥ a) ≤ µ/a for every a > 0. To prove the concentration, consider |L_s(H, θ, t)| for H ∈ {H_G, H_S} defined in the main text, and apply Jensen's inequality. Taking the isospectral twirling, one obtains an expression in which the Isospectral twirling operator R^(4)(t) can be found in Eq. (165) of Ref. [77]. Taking the average over the GDE spectra, the result is obtained after some algebra, under the hypotheses that ψ ≡ ψ_A ⊗ ψ_B and that Tr[O] = 0. Note that the above result holds for H ∈ {H_G, H_S}; we indeed exploited the fact that |A| ≪ |B|, and thus d_B = O(d). Using Markov's inequality, Eq. (98), the result can be readily derived.
C.4 Proof of Theorem 4
C.4.1 An anti-concentration inequality
To prove the theorem, we make use of Cantelli's inequality: let x be a random variable with average µ and standard deviation σ; then Pr(x − µ ≥ a) ≤ σ²/(σ² + a²) for any a > 0. To bound the measure I_Λ(ψ), we make use of the bound proven in [86] between the Hilbert-Schmidt distance and the trace distance, where ψ_Λ := Tr_Λ̄[|ψ⟩⟨ψ|] and d_Λ = 2^{|Λ|}. Note that the Hilbert-Schmidt distance can be expressed through the purity P(ψ_Λ) ≡ Tr[ψ_Λ²] and, because of Eq. (103), it is therefore sufficient to obtain a bound on the anti-concentration of the purity P(ψ_Λ). Denoting P_GDE := E_GDE[P(ψ_Λ)], we can use Eq. (102). Thanks to the right/left invariance of the Haar measure, we can insert unitaries so that we can compute the average purity over GDE Hamiltonians on the average product state between Λ and Λ̄. A straightforward computation, following the calculations in Sec. 3.3.1 of Ref. [77], gives the result. Computing the second moment is trickier than computing P_GDE because it involves averaging over the 8-fold power of G; it is well known that the permutation group S_8 contains 8! elements, making a brute-force calculation too expensive. It is also known that purity and OTOCs possess many similarities (see Refs. [87][88][89][90][91]). The strategy of the calculation, expressed in terms of the normalized 2k-spectral form factor c̃_2k(t) defined in Eq. (91), is thus to write Eq. (113) in terms of 8-point OTOCs. First, note that we can write the swap operator as a normalized sum of Pauli operators P ⊗ P and, in the same fashion, the state |0⟩⟨0| as a normalized sum over the subgroup P ∈ {1, Z}^n; moreover, we define P̃_i := U_G P_i U_G† for i = 1, . . ., 4 to lighten the notation. It is useful for what follows to split off the identity part of the sum and write P²(ψ_Λ) accordingly, where each term in the first sum is different from 1. To write it in terms of OTOCs, we use the following identity: let A and B be two operators and P a Pauli operator; then the product can be expanded as a sum over the whole Pauli group. Here, Q, K, L label global Pauli operators running over the whole Pauli group. In order to recover OTOCs, we still need to split the sums over Q, K, L between the identity and the other non-identity Paulis, with the following convention: each sum runs over all the Paulis appearing in the summation with the exception of the identity, on the respective supports; more precisely, P_1, . . ., P_4 run in the subgroup P_i ∈ {1, Z}^n without the identity, Q, K, L run over the whole Pauli group without the identity, while P_Λ^a, P_Λ^b run over the Pauli group defined on the Λ qubits without the identity. Using Eq. (112), from Eq. (119) one thus arrives at the expanded expression. After further algebraic simplifications, and taking care to absorb all the terms of order O(d^{-1}) so that Eq. (114) can be properly applied, one obtains the final result. Hence, we can readily compute Q_GDE by using Eq. (94), which concludes the proof.
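Two Pauli-expansion identities used in this computation can be checked directly on one qubit: the swap operator is a uniform sum of P ⊗ P over the Pauli group, and |0⟩⟨0| is a uniform sum over the {1, Z} subgroup. The snippet below is our own sanity check of these standard identities.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# SWAP = (1/2) * sum_P P (x) P over the single-qubit Pauli group {I, X, Y, Z}.
swap = sum(0.5 * np.kron(P, P) for P in (I, X, Y, Z))
swap_expected = np.array([[1, 0, 0, 0],
                          [0, 0, 1, 0],
                          [0, 1, 0, 0],
                          [0, 0, 0, 1]], dtype=complex)
print(np.allclose(swap, swap_expected))          # True

# |0><0| = (1/2) * (I + Z), i.e. the uniform sum over the {I, Z} subgroup.
print(np.allclose(np.array([[1, 0], [0, 0]]), 0.5 * (I + Z)))   # True
```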
D Useful Lemmas
Before proceeding to prove the Lemmas, we recall that if a given two-qubit gate V in the HEA forms a 2-design, one can employ the element-wise formulas of the Weingarten calculus in Refs. [85,93] to explicitly evaluate averages over V up to the second moment, where v_ij are the matrix elements of V and the integration is taken over the unitary group of the two-qubit gate. Consider the local Pauli operator σ_j^(i), possessing support only on the j-th qubit, and write U(θ) as in Eq. (3) layer by layer, where V_k are the unitaries acting on each layer k = 1, . . ., D. For the sake of simplicity, we drop the parameter dependence. Each V_k can be further decomposed into unitaries V_α, α = 1, . . ., n/2m, each acting on m qubits, with n a multiple of m (m even) and periodic boundary conditions, so that either (i) V_1 acts on the first m qubits, V_2 acts on the second m qubits, up to V_{n/2m} acting on the last m qubits; or (ii) V_1 acts from qubit m/2 + 1 to qubit 3m/2, V_2 acts from qubit 3m/2 + 1 to qubit 2m, up to V_{n/2m} acting from qubit n − m/2 + 1 to qubit m/2. At the first layer, only one V_α^(1), for some α, acts non-trivially on σ_j^(i) (cf. Eq. (134)).
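The first-moment case of these Weingarten formulas can be verified directly by sampling. The snippet below is illustrative only; it checks the standard identity ∫ dU u_ij ū_kl = δ_ik δ_jl / d (rather than the full second-moment expression referenced in the text) by averaging over Haar-random 4 × 4 unitaries, i.e. generic two-qubit gates.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(d):
    # Haar-random unitary via QR decomposition of a complex Ginibre matrix.
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases            # multiply column j of q by phases[j]

d, samples = 4, 20000            # d = 4 for a two-qubit gate
acc = np.zeros((d, d, d, d), dtype=complex)
for _ in range(samples):
    u = haar_unitary(d)
    acc += np.einsum('ij,kl->ijkl', u, u.conj())
acc /= samples

# First-moment Weingarten identity: E[u_ij * conj(u_kl)] = delta_ik delta_jl / d.
target = np.einsum('ik,jl->ijkl', np.eye(d), np.eye(d)) / d
print(np.max(np.abs(acc - target)))   # statistical error, typically around 1e-2 or below
```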
Figure 1: (a) Both VQA and QML models train parametrized quantum circuits U(θ) to minimize either a cost function C(θ) for VQAs, or a loss function L(θ) for QML models. While VQAs start from some fiduciary, easy-to-prepare state |ψ_0⟩, in QML one uses states from a dataset |ψ_s⟩ ∈ S as input to the parametrized circuit U(θ). Both models exploit the power of classical optimizers for the minimization task. (b) The architecture of a HEA seeks to minimize the effect of hardware noise by following the topology, and using the native gates, of the physical hardware. Specifically, we consider the HEA as a one-dimensional alternating layered ansatz of two-qubit gates organized in a brick-like fashion. In the figure, we show how a first layer of gates is implemented at time t_1 and a second layer at time t_2. At the end of the computation, a local operator is measured.
Figure 2: The sketch shows the light-cone of a local measurement operator at the end of a shallow HEA. Here we can see how the support of the local operator grows with the depth D of the HEA. Since the HEA gates act on neighboring qubits, the support increases by no more than 2D.
Let us consider a VQA or QML task from Definitions 1-2, where the ansatz for the parametrized quantum circuit U(θ) is a shallow HEA with depth D ∈ O(log(n)) (see Fig. 1(b)). Moreover, let the function f(θ) = C(θ), L_s(θ) be either the cost or the loss function in Eqs. (1) and (2).
Corollary 2. Let |ψ⟩ ∼ µ_Haar be a Haar random state, D ∈ O(log(n)), and max_i supp(P_i) ∈ O(log(n)). Here µ_Haar is the uniform Haar measure over the states in the Hilbert space. Then I_Λ(ψ) is exponentially small in n with overwhelming probability.
Proposition 1. Let f(θ) be a VQA cost function or a QML loss function where U(θ) is a shallow HEA with depth D ∈ O(log(n)). If the values of f(θ) anti-concentrate according to Theorem 2 and Eq. (23), then Var_θ[∂_ν f(θ)] ∈ Ω(1/poly(n)) for any θ_ν ∈ θ and the loss function does not exhibit a barren plateau. Conversely, if f(θ) has no barren plateaus, then the cost function values anti-concentrate as in Theorem 2 and Eq. (23).
Figure 3: Schematic representation of the Hilbert space. The vast majority of states satisfy a volume law, and hence a shallow HEA cannot be used to extract information from them. From the set of states satisfying an area law, only a very small subset admits an efficient classical representation. For these states, the effect of a shallow HEA can be efficiently simulated. As such, there exists a Goldilocks regime where the HEA can potentially be used to achieve a quantum advantage: non-classically-simulable area-law states.
Figure 4: Schematic representation of the trainability of two QML tasks with two different datasets S. The first task is trainable, since the dataset S is composed of states ρ_i possessing an area law of entanglement. Conversely, the second task is untrainable, the dataset S being composed of states ρ_i possessing a volume law of entanglement. We remark that the first task can enjoy a quantum advantage, since not all area-law states are classically simulable; see Ref. [76] and Fig. 3.
Figure 7: Numerical results. We consider a problem where the input states |ψ_t⟩ of the HEA are determined by Eqs. (32) and (33) and where the loss function is given by (34). In panels (a) and (b) we respectively show the norm of the gradient ∂_µL(θ) (averaged over |ψ_t⟩ and θ) as a function of the evolution time t for different system sizes, with n even and odd. Panel (c) shows the norm saturation value G_sat of the results presented in (a) and (b) versus the system size n. Panel (d) shows the norm of the gradient versus 1 − S(ρ_2), where S(ρ_2) is the entropy of a two-qubit subsystem, for an evolution time such that the saturation value G_sat is achieved. Different points correspond to different values of n. The results are averaged over 400 initial states |ψ_0⟩ in Eq. (32) and two sets of angles θ for every initial state.
Center for Nonlinear Studies at Los Alamos National Laboratory (LANL). L.C. was partially supported by the U.S. DOE, Office of Science, Office of Advanced Scientific Computing Research, under the Accelerated Research in Quantum Computing (ARQC) program. M.C. was initially supported by the Laboratory Directed Research and Development (LDRD) program of LANL under project number 20230049DR. The authors also acknowledge support by the U.S. DOE through a quantum computing program sponsored by the LANL Information Science & Technology Institute.
Lemma 2. Let U(θ) be a shallow HEA with depth D ∈ O(log(n)), and O = Σ_i c_i Σ_α O_α^(i) with O_α^(i) being traceless operators having support on at most two neighboring qubits. Then E_{θ_0}[f(θ_i)] = 0, ∀i. (77)

Lemma 3. Let U(θ) be a shallow HEA with depth D ∈ O(log(n)), and O = Σ_i c_i Σ_α O_α^(i) with O_α^(i) being traceless operators having support on at most two neighboring qubits. Then E_{θ_0}[Δf_{i+1,i} Δf_{j+1,j}] = 0, ∀i ≠ j. (78)

Lemma 4. Let U(θ) be a shallow HEA with depth D ∈ O(log(n)) where each local two-qubit gate forms a 2-design on two qubits, and let O = Σ_i c_i Σ_α O_α^(i) be the measurement, composed of at most polynomially many traceless Pauli operators O_α^(i) having support on at most two neighboring qubits, and where Σ_i c_i² ∈ O(poly(n)). Then, if the input state follows an area law of entanglement, the relevant variance is at most polynomially vanishing.

The quantities Tr[W^{⊗k,k}(t) T_π] are spectral functions of the representative Hamiltonian H; π = e (the identity) defines the 2k-spectral form factor c_2k(t) ≡ |Tr[W(t)]|^{2k}. To recover Eq. (27), it is sufficient to note that Tr[ψ_s O(θ_0)] = 1 because O(θ_0) ≡ S and S|ψ_s⟩ = |ψ_s⟩ by definition.
(i) reduce each term of Eq. (113) to a sum of high-order OTOCs, and (ii) use the following asymptotic formula proven in [92]: let A_1, . . ., A_k, B_1, . . ., B_k be non-identity Pauli operators, and let B_l(t) ≡ W_G†(t) B_l W_G(t); then ⟨Tr[∏_l A_l B_l(t)]⟩_G = Tr[∏_l A_l B_l] c̃_2k(t) + O(d^{-1}). (114)
Lemma 1. Let U(θ) be a HEA with depth D as in Fig. 1(b), and let O = Σ_i c_i P_i = Σ_i c_i ∏_α O_α^(i) be an operator, as in Eq. (35). Then, for any O_α^(i), |supp(U(θ) O_α^(i) U†(θ))| is bounded by a sum over the qubits k ∈ C_α^(i) of terms controlled by the depth D.

Proof. Given O = Σ_i c_i P_i = Σ_i c_i ∏_α O_α^(i) (from the clustering of qubits), to prove the statement one has to look at a given operator O_α^(i), having support on the cluster C_α^(i) defined in Sec. A. As a starting point, let us introduce the local Pauli operator σ_j^(i) acting on the j-th qubit.
Figure 10: For the proofs, it is useful to divide the HEA into different parts: acting before or after V_i (a), or before, after and in between V_i and V_j (b).
and thus, if |ψ⟩ follows a volume law of entanglement according to Definition 5 for any bipartition Λ ∪ Λ̄ with |Λ| ∈ O(log(n)), one indeed has the exponential suppression of the information contained in Λ, i.e., I_Λ(ψ) ∈ O(2^{-n}). This motivates us to propose the following alternative definition for states following volume and area laws of entanglement.

Definition 7 (Volume law vs. area law). Let |ψ⟩ be a state in a bipartite Hilbert space H_Λ ⊗ H_Λ̄. Let Λ be a subsystem composed of |Λ| qubits, and let Λ̄ be its complement. Let ψ_Λ = Tr_Λ̄[|ψ⟩⟨ψ|] be the reduced density matrix on Λ. Then the state |ψ⟩ possesses a volume law for the entanglement between Λ and Λ̄ if the corresponding entropic condition holds [77,78].

Since the Isospectral twirling of a scalar function depends upon the particular choice of the spectrum of H, one then averages over spectra of a given ensemble of Hamiltonians E. Relevant examples are the Gaussian unitary ensemble E ≡ GUE, the Poisson ensemble E ≡ P, or the Gaussian diagonal ensemble E ≡ GDE. As one can see, in this picture spectra and eigenvectors become completely unrelated, since the average over the full unitary group erases the information about the eigenvectors. Although many ensembles of Hamiltonians have been considered in Refs. [77,78], in this paper we are particularly interested in the GDE ensemble, which is the simplest ensemble of Hamiltonians: its 2k-point spectral form factors can readily be computed. Let sp(H) := {E_k}_{k=1}^{d}, where d ≡ 2^n. The GDE ensemble is characterized by a product Gaussian probability distribution for sp(H), and c_2(t) is the 2-point spectral form factor in Eq. (91). One can consider the isospectral twirling of a scalar function of the time evolution operator W(t) (characterized by the operator of interest O), i.e., F_O[W(t)], which can be written after the isospectral twirling as ⟨F_O[G† W(t) G]⟩_G := Tr[T_σ O R^(2k)(t)], where T_σ is a particular permutation operator. Its value (depending on the evolution time t) characterizes the average behavior, within the ensemble E_H, of all those Hamiltonians sharing the same spectrum as H.
via the Isospectral twirling technique. Recall L_s(H_G, θ, t) = Tr[W(t)|ψ_s⟩⟨ψ_s|W†(t) O(θ)], where |ψ_s⟩ is a completely factorized state. We are interested in computing E_GDE[L_s(H_G, θ, t)]. Note that L_s(H_G, θ, t) can be written in terms of P(ψ_Λ) ≡ Tr[ψ_Λ²], the purity of the reduced density matrix ψ_Λ. Let |ψ_t⟩ = exp(−iHt)|ψ_s⟩ be the state resulting from the time evolution under a GDE Hamiltonian acting on a completely factorized state, and let Λ be a subsystem such that |Λ| = O(log(n)). In this section, we compute the second moment of the purity, i.e., ⟨P(ψ_Λ)²⟩. Thanks to the left/right invariance of the Haar measure over G, and its commutation with T_Λ, we can compute the average of the second moment for any factorized state |ψ_0⟩ ≡ |ψ_Λ⟩ ⊗ |ψ_Λ̄⟩ by computing it with the input state |ψ_0⟩ ≡ |0⟩^⊗n.
this operation gives rise to 8 different terms, labeled t_1, . . ., t_8, each of which is proportional to the spectral function c̃_8(t) thanks to Eq. (114). The coefficients are listed below:
t_1 ≡ Tr[P_Λ^a P_1 P_Λ^a P_2 P_Λ^b P_3 P_Λ^b P_4] (120)
t_2 ≡ Tr[P_Λ^a P_1 Q P_Λ^a P_2 Q P_Λ^b P_3 P_Λ^b P_4] | 18,288.2 | 2022-11-02T00:00:00.000 | [
"Computer Science",
"Physics"
] |
MICROSCOPIC STRUCTURE OF A DECREASING SHOCK FOR THE ASYMMETRIC K-STEP EXCLUSION PROCESS
The asymmetric k-step exclusion processes are the simplest interacting particle systems whose hydrodynamic equation may exhibit both increasing and decreasing entropic shocks under Euler scaling. We prove that, under a Riemann initial condition with right density zero and adequate left density, the rightmost particle identifies microscopically the decreasing shock.
Introduction
The asymmetric k-step exclusion process is a conservative attractive process on X = {0, 1}^Z that generalizes simple exclusion (a general class of processes of this type was introduced in [G]). The hydrodynamic behavior of these processes was studied in [BGRS] (see also the review [FGRS]). One of the interesting features of these processes is that their macroscopic flux function is neither convex nor concave, leading to both increasing and decreasing entropic shock solutions of the hydrodynamic equation. In this note we investigate the microscopic counterpart of a decreasing shock solution in the asymmetric nearest-neighbor case, under a Riemann initial condition with right density zero. Indeed, remember that the nearest-neighbor simple exclusion process with an asymmetry to the right has a concave flux function, and its hydrodynamic equation can exhibit (only) an increasing shock (see for instance [KL], chapters VIII and IX). The microscopic structure of this shock was analyzed in a series of papers by P. Ferrari et al. (the first ones were [FKS] and [F]; see [L99] for a unified presentation and a complete reference list), and in more general settings in [R] and [S]. These authors proved that the shock was characterized by the evolution of a second class particle, which moved at the shock speed and followed the characteristic lines and shocks of the hydrodynamic equation; moreover, under a Riemann initial condition with densities λ (resp. ρ) to the left (resp. right) of the origin, the process seen by this second class particle possessed an invariant measure with asymptotic densities λ (resp. ρ) to the left (resp. right) of the origin. Unfortunately, we cannot adapt the techniques developed in those papers to k-step exclusion, because on the one hand jumps are not restricted to stricto sensu nearest-neighbor sites, and on the other hand both [R] and [S] rely on the concavity of the flux function. We point out that, following along the same lines as we do here for k-step exclusion, one can obtain the microscopic structure of the (increasing) shock for the finite-range non-nearest-neighbor asymmetric exclusion process with (0, ρ) initial profile. We consider an asymmetric k-step exclusion process (probability p, resp. q, to jump to the right, resp. left), starting from an initial measure µ_{λ,0}: i.e. a product measure with density λ to the left of (and at) the origin and 0 to its right. Our candidate for a microscopic object identifying the shock is the rightmost particle (cf. [DKPS], where the asymmetric simple exclusion process was studied in the case of an increasing shock with left density 0; there, the leftmost particle identified the shock). We prove that the rightmost particle evolves at speed v_shock, and that the process seen by this particle has an invariant measure with asymptotic density λ to the left of the origin. We illustrate our method for the totally asymmetric 2-step exclusion process. When λ ∈ (0, 1/4), this corresponds to an initial shock profile for the hydrodynamic equation. The shock (discontinuity) at zero propagates at speed v_shock = (p − q)(1 + λ − 2λ²) (see e.g. [FGRS]). Comments will be made to show how to extend the result to the asymmetric nearest-neighbor case. We present our results in Section 2 and prove them in Section 3.
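For the reader's convenience, the quoted value of v_shock is what the Rankine-Hugoniot condition gives for a jump from density λ (left) to 0 (right), assuming the hydrodynamic flux of the asymmetric 2-step process is J(ρ) = (p − q)ρ(1 − ρ)(1 + 2ρ). This flux expression is our assumption here, consistent with the speed quoted above and with the non-concavity mentioned earlier; it is not a formula reproduced from [BGRS].

```latex
v_{\mathrm{shock}}
  = \frac{J(\lambda) - J(0)}{\lambda - 0}
  = \frac{(p-q)\,\lambda(1-\lambda)(1+2\lambda)}{\lambda}
  = (p-q)(1-\lambda)(1+2\lambda)
  = (p-q)\left(1 + \lambda - 2\lambda^{2}\right).
```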
Remark: For the k-step asymmetric case with k > 2, the flux function starts out being convex; the shock speed and the allowed range of densities for a decreasing entropic shock are then determined by the convex envelope of the initial part of the flux function. However, our argument remains valid with suitable changes.
Notation and results
We denote by S(t) the evolution semigroup of the asymmetric two-step exclusion process (η_t)_{t≥0} on X = {0, 1}^Z, with generator L acting on all bounded cylinder functions f on X, where p = 1 − q ∈ [0, 1] \ {1/2}, η^{x,y} is the configuration η in which the states of sites x and y have been interchanged, and η^{x,y,z} is the configuration η in which the states of sites x, y and z have been shifted; i.e. η^{x,y}(z) = η(z) when z ≠ x, y; η^{x,y}(x) = η(y); η^{x,y}(y) = η(x); and η^{x,y,z}(w) = η(w) when w ≠ x, y, z; η^{x,y,z}(y) = η(x); η^{x,y,z}(z) = η(y); η^{x,y,z}(x) = η(z). Notice that we chose a 'pushing interpretation' of the evolution (a particle may jump to its neighboring site, possibly pushing a particle standing there, provided the next neighboring site is vacant), so that particles always keep the same respective order. Like the simple exclusion process, the k-step exclusion process is attractive (with respect to the usual order on configurations, i.e. for η_1, η_2 ∈ X, η_1 ≤ η_2 means that η_1(x) ≤ η_2(x) for all x ∈ Z), and has a one-parameter family {ν_α, α ∈ [0, 1]} of extremal invariant and translation invariant measures, where ν_α is the Bernoulli product measure on X with density α ∈ [0, 1], i.e. with marginal ν_α(η(x) = 1) = α for all x ∈ Z.
In the sequel we set p = 1 (total asymmetry); appropriate comments will be made for the 1/2 < p < 1 case (the case 0 ≤ p < 1/2 being symmetric).
In this note we consider the totally asymmetric 2-step exclusion process with initial measure µ_{λ,0}. Due to the pushing interpretation of the dynamics, it has a rightmost particle, of initial position Z_0 = Z(η), whose distribution G is geometric with mean (1/λ) − 1, and of position S(t)Z = Z_t at time t. The 2-step exclusion induces a process seen by the rightmost particle, which we denote (η̂_t)_{t≥0}, with initial measure µ̂_{λ,0}. A configuration η̂ in X̂ is obtained from a configuration η on X distributed according to µ_{λ,0} by shifting it so that the rightmost particle is at the origin; {η̂(x), x < 0} is then distributed according to a product measure with density λ. Note that the process (η̂_t)_{t≥0} has semigroup Ŝ(t) and generator L̂, where the last term in the definition of L is equal to zero since the process is supported on configurations with η̂(1) = 0, and, for the same reason, in the two previous terms 1 − η̂(1) = 1. We also observe that µ_{λ,0} τ_{Z_t} S(t) = µ̂_{λ,0} Ŝ(t).
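The dynamics just described is easy to simulate, and such a simulation illustrates the content of the results below: the empirical speed of the rightmost particle approaches v_shock = 1 + λ − 2λ² when p = 1. The sketch below is our own illustration, not part of the proof: it uses a finite window of sites (boundary effects are only neglected, not controlled), it places the rightmost particle exactly at the origin instead of sampling its geometric law, and the estimate only approaches v_shock for long simulation times.

```python
import numpy as np

rng = np.random.default_rng(0)

def rightmost_speed(lam, left_sites=4000, t_max=400.0):
    # Totally asymmetric 2-step exclusion (pushing interpretation), p = 1.
    room = int(2.5 * t_max) + 20              # space for the rightmost particle to advance
    M = left_sites + 1 + room
    occ = np.zeros(M, dtype=bool)
    occ[:left_sites] = rng.random(left_sites) < lam   # density lam to the left of the origin
    occ[left_sites] = True                            # rightmost particle placed at the origin
    pos = list(np.flatnonzero(occ))                   # one entry per particle
    rp = left_sites                                   # current rightmost position
    n_part = len(pos)
    t = 0.0
    # Each particle carries two independent rate-1 clocks (single jump / push move),
    # so the total attempt rate is 2 * n_part; null attempts keep the dynamics exact.
    while t < t_max:
        t += rng.exponential(1.0 / (2 * n_part))
        i = rng.integers(n_part)
        x = pos[i]
        if rng.random() < 0.5:
            # single jump x -> x+1, allowed if the target site is empty
            if not occ[x + 1]:
                occ[x], occ[x + 1] = False, True
                pos[i] = x + 1
        else:
            # push move: x occupied, x+1 occupied, x+2 empty
            if occ[x + 1] and not occ[x + 2]:
                occ[x], occ[x + 2] = False, True      # net effect on the occupancies
                pos[i] = x + 2
        if pos[i] > rp:
            rp = pos[i]
    return (rp - left_sites) / t

lam = 0.2
print(rightmost_speed(lam), 1 + lam - 2 * lam ** 2)   # empirical speed vs. 1.12
```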
For s ≥ 0, define ℓ_{0,−1} : X̂ → N as ℓ_{0,−1}(η̂_s) = 1 + η̂_s(−1). It is the flux of holes crossing the bond between 0 and −1 at time s. This is also the rate at which the rightmost particle jumps to the right at time s. Indeed, Z_s is the sum of Z_0 and the net change in the position of the rightmost particle up to time s. This net change can then be obtained as a functional of the process (X̂, µ̂_{λ,0}, Ŝ(t)). In the next section we will prove Theorem 2.1, stated with the previous notation. Since the set of all probability measures on the compact set X̂ is compact, there exists an increasing sequence of times (t_n)_{n≥0}, t_n → ∞, such that the Cesàro averages of µ̂_{λ,0}Ŝ(t) up to times t_n converge to µ, a stationary measure for the (η̂_t)_{t≥0} process (see [L85], Proposition I.1.8 (e)). As a consequence of Theorem 2.1, we obtain that µ (which has density 0 to the right of the origin) is asymptotically equal (in the Cesàro sense) to ν_λ far to the left of the origin: let µ be any invariant measure for a Markov process with semigroup Ŝ(t) starting from an initial measure µ̂_{λ,0}; then γ = ν_λ.
Proofs
The k-step exclusion process is attractive, that is, the coordinatewise partial order between configurations is preserved by the k-step evolution. The process seen by the rightmost particle does not have this property. On the other hand, it preserves a partial order between configurations which compares, in an appropriate way, the number of holes between successive particles. We now introduce this partial order on configurations, which will play a crucial role in the proof of Theorem 2.1.
We consider configurations η in a subset 𝒳 ⊂ X which either have infinitely many particles to the right and to the left of the origin, or infinitely many particles to the left of the origin and no particles to the right of the origin. We label particles as follows. If there are infinitely many particles to the right as well as to the left of the origin, particles are labelled by their natural ordering on the line, with X_0(η) = 0. Let γ_i(η) be the number of holes between the (i+1)-st and the i-th particle, i.e. γ_i(η) = X_{i+1}(η) − X_i(η) − 1. If there are no particles to the right of the origin, then we let γ_0(η) = +∞ and X_n(η) = γ_n(η) = ∞ for all n ≥ 1. It is easy to show that γ_i is a continuous function of η at all η such that γ_i(η) < ∞. Given η_1, η_2 ∈ 𝒳, the partial order ⪯ is defined by comparing the sequences of gaps (γ_i), distinguishing the cases (a) where η_1 and η_2 both have infinitely many particles to the right and to the left of the origin, (b) where X_j(η_2) = ∞ for all j ≥ 1 and η_1 has infinitely many particles to the right and to the left of the origin, and (c) where both configurations have no particles to the right of the origin. This order extends to probability measures. We denote by M the set of bounded monotone (w.r.t. ⪯) functions on 𝒳. Then, since the distribution of {η(x), x < 0} under µ̂_{λ,0} is product with density λ, we have ν_λ ⪯ µ̂_{λ,0}, which means that, for any increasing f ∈ M, ∫ f dµ̂_{λ,0} ≥ ∫ f dν_λ. Moreover, if η_1, η_2 ∈ 𝒳 and η_1 ⪯ η_2, then L̂(γ_i(η_1)) ≤ L̂(γ_i(η_2)) for all relevant i. It follows that if f ∈ M is increasing on 𝒳 then so is Ŝ(t)f for all t > 0, since 1) Ŝ(t) is defined on 𝒳 so that all configurations have a particle at the origin, which remains at the origin because of the tagged-particle evolution of Ŝ(t).
2) When one compares two configurations (from cases (b) and (c) in the definition of ⪯), the fact that there is always a particle at the origin implies that the labelling of the γ's is unchanged by the evolution for both configurations.
In other words, Ŝ(t) is an attractive semigroup with respect to the partial order we have introduced, and, using (3), µ̂_{λ,0}Ŝ(t) ⪰ ν_λ for all t ≥ 0, so that, by (2), µ ⪰ ν_λ.
Remark:
The attractivity of Ŝ(t) can also be seen by using a particle-to-particle coupling described as follows. Let us denote by η_t and ξ_t the processes starting with initial measures µ̂_{λ,0} and ν_λ, respectively. We couple the two processes by requiring that the particles in η_t and ξ_t with the same label i ∈ Z use the same clock for jumps whenever X_i(η_t) < ∞ and X_i(ξ_t) < ∞. Even though we have sketched the attractivity argument for the totally asymmetric case, we point out that it extends straightforwardly to the asymmetric case.
Proof of Theorem 2.1.
Since µ is an invariant measure for Ŝ(t), we start from the corresponding stationarity identity. Because, for all n > m + 2, f_n(τ_1 η^{0,1}) = f_n(τ_1 η^{−1,0,1}) = f_n(τ_1 η^{0,1,2}) = f_{n−1}(η), and since L_0(f_n) = L(f_n) for n ≥ m + 2, where we have used the commutativity of L and τ in the next-to-last step, this proves that µ_∞ is an invariant measure for the semigroup S(t). Since µ_∞ is a translation invariant measure by definition, µ_∞ is a convex combination of product measures (see [G], Theorem 1.3, which is a slight adaptation of the corresponding result for simple exclusion, see [L85], Theorem VIII.3.9 (a)). That is, µ_∞ = ∫_0^1 ν_α dπ(α), where π is a measure on [0, 1]. Now we want to show that π((λ, 1]) = 0. Let η ∈ 𝒳. Recall that for i > 0, X_{−i}(η) denotes the location of the i-th particle of η to the left of the origin. For all n < 0 define l_n(η) = max{i ≥ 0 : X_{−i}(η) ≥ n}. The random variable l_n(η) counts the number of particles of η which are in [n, 0] ∩ Z. Now, since µ ⪰ ν_λ, there exists a coupling measure µ̄ on {(η, ξ) ∈ 𝒳 × 𝒳} with marginals µ and ν_λ such that µ̄({γ_i(η) ≥ γ_i(ξ) : i < 0}) = 1. From this it follows that l_n(η) ≤ l_n(ξ) for all n < 0, µ̄ almost surely. Define A = {η ∈ 𝒳 : lim inf_{k→∞} N_k^{-1} Σ_{j=1}^{N_k} η(−j) > λ}. Let f(η) = η(−1). Then A is measurable with respect to the left tail sigma-algebra of {η(i) : i ∈ Z} and τ_{−j}A = A for all j ∈ N. The corresponding inequality holds for all k ≥ 1, µ̄ almost surely. Taking expectations and the limit in k, and since ν_λ(A) = 0, we obtain π((λ, 1]) = 0, and we conclude that µ_∞ is a convex combination of product measures with density at most λ. Now define ℓ_{i,i−1}(η), i < 0, as the flux of holes jumping across the −i-th particle to the left of the origin for the (η̂_t)_{t≥0} process (we point out that the factor 2 in the second term in parentheses comes from the fact that a hole in front of the −i-th particle can jump either in between the −i-th and the −i−1-st particle or behind the −i−1-st particle, at the same rate). By an elementary computation, E_{ν_λ}(ℓ_{i,i−1}(η)) = 1 + λ − 2λ² = v_shock for all i < 0. Since µ is an invariant measure for Ŝ(t) and ℓ_{i,i−1} − ℓ_{i−1,i−2} = L̂(γ_{i−1}), we have E_µ(ℓ_{0,−1}) = E_µ(ℓ_{n,n−1}) for all n < 0. This implies the chain of inequalities (7); we have used the fact that the shock speed is a monotone increasing function of the particle density α for α ∈ (0, 1/4) in the last line. Combining (7) and (6) we conclude that E_µ(ℓ_{0,−1}) = v_shock = lim_{t→∞} E_{µ̂_{λ,0}}(Z_t/t), thus proving Theorem 2.1. Notice that if π([0, λ)) > 0 then the inequality in (7) would be strict, contradicting (6). Therefore π(·) is the Dirac measure concentrated on λ.

Proof of Corollary 2.1.
Since we obtained the result of the corollary for µ in the proof of the Theorem, and the assumptions on µ that we needed are satisfied for any µ considered in the corollary, the result follows from the previous proof. | 3,747.8 | 2003-12-22T00:00:00.000 | [
"Mathematics"
] |
A mapping framework to characterize land use in the Sudan-Sahel region from dense stacks of Landsat data
We developed a land cover and land use mapping framework specifically designed for agricultural systems of the Sudan-Sahel region. The mapping approach extracts information from inter- and intra-annual vegetation dynamics in dense stacks of Landsat 8 images. We applied this framework to create a 30-m spatial resolution land use map, with a focus on agricultural landscapes of northern Nigeria, for 2015. This map provides up-to-date information with a higher level of spatial and thematic detail, resulting in a more precise characterization of agriculture in the region. The map reveals that agriculture is the main land use in the region. Arable land represents on average 52.5% of the area, higher than the reported national average for Nigeria (38.4%). Irrigated agriculture covers nearly 2.2% of the total area, reaching nearly 20% of the cultivated land when traditional floodplain agriculture systems are included, above the reported national average (0.63%). There is significant variability in land use within the region. Cultivated land in the northern section can reach values higher than 75%; most land suitable for agriculture is already under cultivation, and there is limited land for future agricultural expansion. Marginal lands, not suitable for permanent agriculture, can reach 30% of the land at lower altitudes in the northeast and northwest. In contrast, the southern section presents lower land use intensity, which results in a complex landscape that intertwines farmed areas and larger patches of natural vegetation. This map improves the spatial detail of existing sources of LCLU information for the region and provides updated information on the current status of its agricultural landscapes. This study demonstrates the feasibility of multi-temporal medium-resolution remote sensing data to provide detailed and up-to-date information about agricultural systems in arid and sub-arid landscapes of the Sahel region.
Introduction
In the last 30 years, remote sensing-based agricultural monitoring has rapidly become operational [1][2][3][4], and a number of agricultural monitoring systems already forecast yields and production for the main global regions [5][6][7]. Yet these advances have been geographically uneven and, while there has been remarkable progress in some regions, others have received considerably less attention. The Sudan-Sahel region in sub-Saharan Africa is a paradigmatic example. Beginning with [8], a number of initiatives have studied land cover and land use trends in the region using Earth observation data [9][10][11][12][13][14]. Yet the relatively limited weight of its agricultural production at the global scale, and the technical challenges posed by its dynamic agricultural systems dominated by small-scale agriculture, have slowed down the adoption of remote sensing monitoring systems and limited the amount and quality of available land use information.
Despite its relevance for human development and poverty alleviation, there is a lack of basic information on the distribution of cultivated land and the main land processes in the Sudan-Sahel region. The region is expected to experience major changes in the near future, and land use information remains crucial in a region where agriculture represents the main livelihood strategy.
Fertility rates are among the highest in the developing world [15], and the increase in the rural population will accelerate the expansion of cultivated land for subsistence farming. The steady growth of the urban population will fuel the demand for agricultural products, a major driver of land use change [16].
The consequences of climate change will likely disrupt agricultural practices and agricultural production in the region. Expected temperature increases by 2050 will shorten crop-growing cycles, leading to severe yield reductions and threatening food production systems [17][18][19]. Coping strategies for the changing conditions imposed by these two processes may lead to progressive land degradation, further compromising rural livelihoods and increasing their vulnerability to future internal and external shocks [20]. For instance, forced by climate change-related productivity decreases, farmers may overexploit soils or expand cultivated lands into more marginal lands.
Several global RS-based LCLUC products include coarse-resolution land use information for the Sudan-Sahel region (e.g. MODIS MCD12Q1 and GlobCover 2005 and 2009). These products have been integrated with national statistics to generate a more robust outcome [21]. However, while coarse spatial resolution data meet the observational requirements for the large agricultural regions of the world, they are not well suited for monitoring crops in regions with highly heterogeneous agricultural landscapes dominated by smaller farms, such as those in the Sudan-Sahel region.
Furthermore, the static nature of these products fails to capture the changes over time of agricultural systems in the region [22,23]. The opening of the Landsat archive in 2008-2009 [24] and, more recently, the launch of new medium-resolution sensors (Landsat-8, Sentinel-2, DMC, etc.) offer unprecedented opportunities to study land-cover/land-use change (LCLUC) at higher spatial resolution. Several initiatives are already exploiting these new data and increasing processing capabilities to map agricultural landscapes with higher spatial detail, at scales more relevant to African agricultural processes. The global LCLU map at 30-m spatial resolution by Gong et al. (2013) [25] included a cropland class. Xiong et al. (2017) [26] produced a nominal 30-m cropland extent map of continental Africa by using Sentinel-2 and Landsat-8 data (GFSAD30AFCE). This product represents a major improvement for food and water security assessments in an African context and a first step towards exploring not only cropland extent but also crop type, intensity and change. The European Space Agency released a prototype 20-m land cover map of Africa for 2016 with cropland as one of its classes (http://2016africalandcover20m.esrin.esa.int).
However, these products still present some limitations that hamper the extraction of information on land use dynamics for specific regions of Africa. Cropland systems across the continent are highly diverse and often adapted to very specific environmental conditions. The phenological signatures of the different land use types in a region can be very similar [26]. As a consequence, mapping croplands at the continental level requires large and up-to-date training and validating datasets [26]. While high-resolution imagery and crowdsourcing [27] are gaining ground as a source of training and validating data, training and validating field data in remote regions remain scarce, constraining the precision of supervised learning algorithms. Consequently, continental and global medium-resolution products that implement a single mapping approach cannot always offer the flexibility to provide an accurate characterization of land use processes in specific regions of the continent.
Northern Nigeria presents a paradigmatic and concrete example of a data-poor region where land use information is crucial for human development. Poverty indicators of Nigeria over the last decade show a growing North-South divide [15]. While poverty rates are only 16% in the South of the country, an estimated 50.2% of the population lives below the poverty line in the North, where up to seventy percent of households rely primarily on agriculture [15]. Ongoing land cover change and land use processes compromise the main livelihood strategies of these households, reduce their adaptation alternatives and thus increase their vulnerability. Yet the lack of consistent and reliable land use information hinders the accurate characterization of the agricultural sector in northern Nigeria. As the linkages between climate change, crop failures, poverty, migration and conflict become more explicit in the research literature, it becomes urgent to monitor land use processes in the region as a first step to inform decision makers and to design and implement efficient policy interventions in agricultural development and poverty alleviation. This work proposes a mapping approach specifically designed for agricultural systems of the Sudan-Sahel region, aiming to overcome some of these limitations of global and continental-scale remote sensing products in the region. This mapping approach makes extensive use of expert knowledge of vegetation dynamics and exploits dense stacks of Landsat 8 imagery to capture inter- and intra-annual vegetation dynamics and improve the characterization of the main land use types in the study region.
The proposed mapping framework is flexible and robust enough to operate with limited imagery, and it can be easily and rapidly updated in successive years. We have applied this approach to produce a 30-m spatial resolution land cover and land use map with emphasis on agricultural classes. This work aims to fill an information gap and provide a precise and up-to-date assessment of agricultural systems in northern Nigeria.
Materials and Methods
The area of study is Northern Nigeria, defined as the region that lies above the 9.3-degree latitude line and borders Niger to the north, Cameroon and Chad to the east, and Benin and Niger to the west. This region covers an area of 494,000 km² and includes the states of Bauchi, Borno, Gombe, Jigawa, Kano, Katsina, Kebbi, Sokoto, Yobe and Zamfara, and parts of Adamawa, Kaduna, Kwara, Niger and Plateau. Northern Nigeria is part of the Sudan and Sahel savanna agro-ecological zones (Figure 1). These warm tropical arid and semiarid zones are characterized by clearly defined dry and rainy seasons following a strong latitudinal rainfall gradient. Annual precipitation in the region ranges from below 400 mm in the northeast to 1,500 mm per year in the higher elevations of the south. The elevation in the study area ranges between 100 and 1,300 m above sea level. Lower elevations are found in the Benue River valley to the west and in the Gongola River valley and the Chad Lake depression to the east. From these regions, there is a gradual transition to higher elevation areas in the central part of the study area, in the Kaduna, Bauchi, Gombe, Kano and Katsina states. The highest elevations are found in the south, in the Jos Plateau. The analysis relied on Landsat 8 top-of-atmosphere (TOA) images acquired between 2014 and 2016 [28]. The inclusion of images for a 3-year period served to capture the interannual dynamics required to characterize some land surfaces in the study area. For each Landsat scene, approximately 69 images were stacked during this period. Dust-loaded air masses from the Sahara Desert (harmattan winds) can occur up to 100 days per year during the dry season in the Sudan-Sahel region. This dust modifies the spectral signatures of land surfaces and results in lower vegetation index values. In this context, given the difficulties of accurately estimating aerosol properties and their spatial distribution, TOA images were chosen over top-of-canopy images [29]. Images were not discarded based on cloud coverage, since cloud-free pixels in densely cloudy images could still provide valuable information within the proposed mapping framework. Digital elevation data from the Shuttle Radar Topography Mission (SRTM) [30] at 30-m spatial resolution were used to define floodplains. Given its focus on agricultural landscapes, urban areas were not directly mapped.
Instead, urban pixels were extracted from the Global Human Built-up and Settlement Extent (HBASE) Dataset from Landsat [31]. This product provides global 30-m spatial resolution information on settlement extent for the year 2010.
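A minimal sketch of how such a dense NDVI stack can be assembled in the Google Earth Engine Python API is given below. It is our own illustration, not the authors' code: the collection identifier and band names follow the standard Earth Engine conventions for Landsat 8 TOA data (Collection 1), and the bounding box is a rough placeholder for northern Nigeria rather than the exact study-area footprint.

```python
import ee

ee.Initialize()

# Rough bounding box over northern Nigeria (placeholder, not the exact study area).
region = ee.Geometry.Rectangle([3.0, 9.3, 14.7, 13.9])

# Dense stack of Landsat 8 top-of-atmosphere reflectance images, 2014-2016.
l8_toa = (ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA')
          .filterDate('2014-01-01', '2017-01-01')
          .filterBounds(region))

def add_ndvi(img):
    # Landsat 8: B5 = near infrared, B4 = red.
    return img.addBands(img.normalizedDifference(['B5', 'B4']).rename('NDVI'))

ndvi_stack = l8_toa.map(add_ndvi).select('NDVI')

# Example seasonal metric: maximum-value NDVI composite for the 2015 rainy season.
rainy_max_2015 = ndvi_stack.filterDate('2015-07-01', '2015-11-01').max().clip(region)

print(ndvi_stack.size().getInfo())   # number of NDVI images in the stack
```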
We developed a land cover and land use (LCLU) mapping framework adapted to agricultural landscapes of the Sudan-Sahel ecological region. This approach was applied to produce a 30-m resolution map of northern Nigeria for the baseline year 2015. The mapping framework implemented a knowledge-based expert system (KBES) that relied on dense stacks of Landsat 8 images and exploited inter- and intra-annual vegetation dynamics and contextual information to map the main components of land surfaces in the study region [32]. Recent research highlights the role of expert knowledge in advancing remote sensing-based agricultural monitoring [33]. KBES enable the inclusion of knowledge from field experts in the analysis even if they do not have remote sensing experience (e.g. field extension agents). These systems constitute a useful alternative when the lack of consistent training and validation data limits the use of supervised learning systems. Expert knowledge is commonly stored as a set of rules in a knowledge base. Subsequently, the information in the knowledge base is passed to an inference mechanism that interprets it and assigns class memberships to pixels. KBES have been successfully implemented in a number of applications such as protected area conservation [34], crop classification [35][36][37][38] or urban mapping [39].
We built a knowledge base that established production rules using spectral, temporal and spatial constraints to identify the main agricultural systems and natural vegetation types in the study area [35]. The production rules were not necessarily conclusive but provided a degree of evidence in favor of some class label [40]. These rules were defined from the analysis of the spectral profiles during the vegetative cycles of the different land use types. Specific temporal windows were selected to maximize the separability between classes, using expert knowledge of the seasonal dynamics of land surfaces in the study area (Figure 2). Similar strategies have been previously used for land cover and land use mapping in tropical environments [41,42]. The numerical values in the spectral rules were provided as thresholds empirically generated from observed data [43]. These spectral thresholds were defined based on a training dataset of 1,750 points of known land use types, located by visual interpretation of a combination of Landsat and very high-resolution imagery (Google Earth) for the period of study. The thresholds for each land use class were calculated from the statistical distribution of pixel values in the training dataset. The spectral rules were built on Normalized Difference Vegetation Index (NDVI) images [8] calculated from each image in the original dataset. This vegetation index was chosen because, besides data compression, it facilitates the interpretation of land surface dynamics over time and the definition of spectral thresholds. The presence of burned areas in natural vegetation surfaces within the 3-year imagery epoch was interpreted as a sign of land cover transition, supporting the labeling of non-stable natural vegetation pixels. The rules in the knowledge base were designed to identify the following agricultural systems and natural vegetation types: 1) stable natural vegetation; 2) non-stable natural vegetation; 3) rain-fed agriculture; 4) irrigation agriculture; 5) bare soil; 6) rivers and water bodies. Each of these components presents distinct seasonal dynamics (Figure 2). For instance, the vegetative cycle of natural vegetation and rain-fed agriculture follows closely the rain patterns, with higher NDVI [41] values during the rainy season (July-October) and low values during the dry season. However, there are distinct differences between them. Rainy season NDVI values of rain-fed agriculture are comparatively lower than those of natural vegetation because planting densities do not commonly cover the ground completely. Equally, during the dry season, the exposed soils of cultivated lands result in lower NDVI values than natural vegetation, where dormant and dry vegetation covers the ground, resulting in higher NDVI values. Irrigation agriculture relies on groundwater aquifers and irrigation systems that can result in extended growing seasons and several vegetative cycles within a year. Because of its higher planting densities, its NDVI values are commonly higher than those of rain-fed agriculture. Bare soil surfaces, in turn, maintain low NDVI values throughout the year.
Table 1. Description of mapped classes.

Class type | Description
Stable natural vegetation | Surfaces with natural vegetation, never cultivated during the 2014-2016 period.
Non-stable natural vegetation | Surfaces not cultivated in 2015, but likely to have been under cultivation in previous or subsequent years (fallow lands). Also land undergoing cover conversion. This class includes areas of sparse tree cover.
Rain-fed agriculture | Cultivated land relying solely on rainfall for water supply.
Irrigation agriculture | Cultivated land using mainly irrigation for water supply (groundwater, irrigation channels, etc.).
Bare soil | Surfaces without vegetation cover.
Rivers and water bodies | Surfaces covered by water during more than 6 months per year in the 2014-2016 period.

The rules were applied to all available images during the defined temporal windows and imposed initial preconditions for class membership. The KBES inference mechanism analyzed the set of rules, resolved potential redundancies and inconsistencies, and made decisions about class membership using a majority rule criterion [32] (Figure 3). This approach reduced the potential impact of individual aerosol-contaminated images in the mapping process.
An uncertainty flag was raised when the number of available observations during the temporal window was below a pre-established number (n = 5) due to cloud coverage. This was often the case for rain-fed agriculture, whose identification relies on observations during the rainy season temporal window. In these cases, we followed an alternative approach and applied the rules of the knowledge base to an NDVI maximum-value composite built from all images available during the temporal window.
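To make the inference step concrete, the sketch below shows, for a single pixel, how per-window NDVI observations can vote through a small rule base with a majority decision and a maximum-value-composite fallback. It is a simplified illustration of the scheme described above, not the operational rule base: the threshold values are hypothetical placeholders, whereas the actual thresholds were derived from the distribution of the 1,750 training points.

```python
import numpy as np

# Hypothetical NDVI thresholds (placeholders; the real values come from training data).
BARE_MAX = 0.15          # bare soil: low NDVI in every temporal window
RAINY_CROP_MAX = 0.45    # rain-fed fields: moderate rainy-season NDVI
DRY_GREEN_MIN = 0.35     # irrigation: green canopy persisting into the dry season
MIN_OBS = 5              # below this, fall back to a maximum-value composite

def classify_pixel(ndvi_rainy, ndvi_dry):
    """Label one pixel from its cloud-free NDVI observations in two temporal windows."""
    ndvi_rainy = np.asarray(ndvi_rainy, dtype=float)
    ndvi_dry = np.asarray(ndvi_dry, dtype=float)
    if ndvi_rainy.size == 0:
        return 'unclassified'
    # Dry-season greenness indicates an extended or second growing cycle (irrigation).
    if ndvi_dry.size and np.nanmedian(ndvi_dry) > DRY_GREEN_MIN:
        return 'irrigation_agriculture'
    # Sparse-observation fallback: apply the rules to the maximum-value composite.
    if ndvi_rainy.size < MIN_OBS:
        ndvi_rainy = np.array([np.nanmax(ndvi_rainy)])
    # Each rainy-season observation casts a vote; the majority label wins.
    votes = []
    for v in ndvi_rainy:
        if v < BARE_MAX:
            votes.append('bare_soil')
        elif v < RAINY_CROP_MAX:
            votes.append('rainfed_agriculture')
        else:
            votes.append('natural_vegetation')
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

print(classify_pixel([0.22, 0.30, 0.35, 0.28, 0.31], [0.10, 0.12]))  # rainfed_agriculture
```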
The map was validated against an independent dataset of 754 ground points spread over the study region. Validation points were identified through a combination of visual interpretation of very high-resolution images for the year 2015 (Google Earth) and known locations in true-color Landsat 8 images.
The analysis was carried out on a per-scene basis. Outputs from individual scenes were subsequently mosaicked into a final map at 30-m resolution covering the whole study area. The implementation of the KBES and the data processing were carried out in the Google Earth Engine cloud-based platform.
Results
We created a 30-m spatial resolution LCLU map with a focus on agricultural landscapes in northern Nigeria for the year 2015. The validation of the map against an independent dataset of ground points resulted in an overall accuracy of 0.91 and a kappa coefficient of 0.89. Accuracies were consistent throughout the classes, with individual user and producer accuracies above 0.82 and 0.85, respectively (Table 1). Part of the non-stable natural vegetation is likely to be associated with fallow fields. Other surfaces, such as bare soil and water, occupy less than 2% of the land in total (Figure 6).
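For reference, the reported accuracy figures follow the standard confusion-matrix definitions. The snippet below computes overall accuracy, the kappa coefficient, and per-class producer and user accuracies for an arbitrary toy confusion matrix; it is not derived from the paper's actual validation counts.

```python
import numpy as np

def accuracy_and_kappa(confusion):
    # confusion[i, j]: validation points of reference class i mapped as class j.
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    po = np.trace(confusion) / total                                   # overall accuracy
    pe = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    kappa = (po - pe) / (1.0 - pe)
    producers = np.diag(confusion) / confusion.sum(axis=1)             # reference-based
    users = np.diag(confusion) / confusion.sum(axis=0)                 # map-based
    return po, kappa, producers, users

# Toy 3-class example (not the paper's confusion matrix).
cm = [[50, 3, 2],
      [4, 60, 1],
      [2, 2, 40]]
po, kappa, producers, users = accuracy_and_kappa(cm)
print(round(po, 3), round(kappa, 3))
```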
The spatial distribution of land use in the map illustrates the geographical variability of land uses within the region, closely related to precipitation and controlled by latitudinal and elevation gradients. The northern section of the study region presents larger extents of croplands, where cultivated fields dominate a landscape in which patches of trees, shrubs and fallow land cover no more than 10 to 20% of the land. Natural vegetation is restricted to isolated remnants in marginal agricultural lands. In contrast, the southern section includes larger proportions of natural vegetation.
Cropland is less dominant and often part of a mosaic of cultivated fields and natural vegetation.
While the proportion of land under cultivation in the southern states is below 50%, it can exceed 80% in some of the northern states (Jigawa, Kano and Katsina).
Floodplains represent the most fertile agricultural land in arid and sub-arid ecosystems. Irrigated agriculture in the region is confined to river floodplains. Up to 74% of these floodplains are cultivated, of which nineteen percent is associated with major irrigation schemes, while the remaining 55% relies on traditional forms of water management. States with complex hydrographic networks or large floodplains associated with the main rivers of the region have significantly higher irrigated areas than the regional average. Thus, while on average irrigation agriculture covers 2.2% of the land, it reaches up to 5% of the total area in Kaduna state. The spatial aggregation of the original 30-m product provides additional information about the structure of the landscapes in the study region and shows that areas dominated by croplands can still contain a significant share of natural vegetation (Figure 7, Figure 9). In the south of the study area this landscape fragmentation is associated with a mosaic structure, while in the north it is related to agroforestry systems, where cultivated fields and low-density trees share the land.
Discussion
We have developed a land cover and land use mapping framework specifically designed for the Sudan-Sahel region. The approach relies on dense stacks of Landsat 8 imagery and seasonal metrics to address the peculiarities of the region, and it is flexible and robust enough to operate with a limited number of observations resulting from cloud and dust contamination. To overcome the scarcity of reliable in situ data in the region, this mapping framework uses extensive knowledge of phenological cycles and ecosystem processes in the Sudan-Sahel region, and it relies on temporal windows that maximize the spectral separability of the relevant land surfaces [18]. Finally, it applies an acquisition window of several years to allow a solid characterization of dynamic land use types that involve a multiyear cycle. This mapping framework has been applied to produce a 30-m spatial resolution map for northern Nigeria. Northern Nigeria lacks a comprehensive land survey scheme to collect agricultural data. As a consequence, LCLU information in the region is scarce and outdated. Without reliable and up-to-date data, the current status of its agricultural landscapes remains poorly characterized.
From these figures we estimate that the cultivated land per person in the North is 0.31 ha, while the national average remains at 0.193 ha per person [45]. The map also questions existing assessments of irrigated agriculture, which covers nearly 2.2% of the land and 4.4% of the cultivated land in northern Nigeria (Figure 10), considerably higher than the national average (0.63%) [45]. These intranational differences highlight the need to establish monitoring systems that provide region-specific, reliable and up-to-date information to guide the design and implementation of interventions. This map also improves the thematic detail of existing medium-resolution remote sensing-based products with cropland information for northern Nigeria. Continental and global remote sensing products commonly apply a uniform mapping framework that aims to retrieve consistent products and maximize overall accuracies over large regions. The application of empirical models to large regions implies a lack of flexibility to adapt to the specific features and dynamics of subregions. By restricting the geographical extent of the mapping area, we adapt the mapping framework to chart relevant land uses specific to the ecoregion and thus improve thematic detail. The advancement of cloud storage and computing capabilities and the increase in imagery from medium-resolution sensors have led to the recent emergence of global-scale analyses and global medium spatial resolution products. Yet there are significant advantages to using cloud storage and computing capabilities for the study of smaller, coherent ecoregions, since this can enable the development of regional products that identify relevant key land use classes and land use processes, targeting specific questions and bringing the analysis closer to managers and decision makers.
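For readers who want to reproduce the per-capita arithmetic, the snippet below shows the calculation in its simplest form; the area and population figures are placeholders for illustration only, not the study's inputs.

```python
def cropland_per_person(region_area_km2, cropland_fraction, population):
    """Cultivated land per person, in hectares (1 km^2 = 100 ha)."""
    return region_area_km2 * 100.0 * cropland_fraction / population

# Hypothetical inputs for illustration only (not the study's data):
print(cropland_per_person(region_area_km2=420_000,
                          cropland_fraction=0.525,
                          population=70_000_000))   # ~0.315 ha per person
```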
Conclusions
This work presents a mapping approach specifically designed for the agricultural systems of northern Nigeria. This approach has the potential to be replicated in successive years and expanded to other locations within the Sudan-Sahel ecozone. The present work underscores the relevance of incorporating expert knowledge in the design of mapping strategies in regions where other data sources are scarce. This work also highlights the importance of interannual land surface dynamics for achieving a robust characterization of land use in the study area, and it demonstrates the potential of dense stacks of medium resolution imagery to capture these dynamics.
The mapping framework was applied to produce a 30-m spatial resolution LCLU map with a focus on agricultural landscapes for northern Nigeria for the year 2015. The map provides up-to-date information at higher spatial resolution and an improved characterization of agriculture in a region with limited land use information where agriculture is the main livelihood strategy.
The map shows high farming intensity throughout the region and a cropland area significantly higher than in the rest of the country. Rain-fed cultivation systems already occupy most of the landscape at the expense of natural vegetation, and the majority of floodplains suitable for irrigation agriculture are already in use. The map also identifies the landscape variations associated with well-defined north-south gradients in the region.
In a region under increasing environmental and demographic stress, this work highlights the potential of multi-temporal medium resolution satellite data to generate detailed and up-to-date land use information in the Sudan-Sahel region. This type of information is essential to design efficient policy interventions, improve famine-related early warning and response systems, and understand the links between land use and agricultural development, migration and conflict.
years, the importance of earth observation remote sensing for monitoring land use and agricultural development over large areas has steadily grown. Remote sensing-based cropland
Figure 1. States included in this work (dark grey) and the Landsat scenes used in the study (red line).
Figure 2. Upper plate: illustration of seasonal dynamics of vegetative activity for the main land cover and land use classes, adapted from temporal profiles at known locations for visualization purposes. Lower plate: temporal windows for each mapping component.
Figure 3. Flow chart of the mapping framework for the land cover land use map based on dense stacks of Landsat 8 imagery.
Figure 5. Examples of Google Earth high resolution imagery (left) and the corresponding 30-m resolution Landsat 8-based LCLU map for northern Nigeria (right). Dark green: stable natural vegetation; light green: non-stable natural vegetation; light yellow: rain-fed agriculture; red: irrigation agriculture.
Figure 7. Northern Nigeria LCLU map, 1-km spatial aggregation: proportion of natural vegetation in agriculture-dominated areas.
Figure 8. Comparison of farming intensity in the drylands of Nigeria in the late 1970s [43] and the 1-km spatially aggregated rain-fed agriculture map (2015). The color bar represents the percentage of land under cultivation. The map describes a region of intense agricultural use in which most land suitable for agriculture is already under cultivation. Historical evidence [44] reveals that large areas of the drylands of Nigeria
Figure 9. Northern Nigeria LCLU map, 1-km spatial aggregation. Upper plate: distribution of temporary natural vegetation; lower plate: distribution of permanent vegetation. Color bars represent the percentage of natural vegetation.
Figure 10. Northern Nigeria LCLU map, 1-km spatial aggregation. Upper plate: distribution of rain-fed agriculture; lower plate: distribution of irrigated agriculture. The color bar represents the percentage of cultivated land.
Bare soils present very low NDVI values all year round, and water bodies and rivers show negative NDVI values during the rainy season. Irrigation agriculture was mapped through a combination of spectral and contextual constraints, as cultivated areas within floodplains whose vegetative cycle did not follow the annual precipitation. Floodplains were identified as areas of low slope, extracted from a digital elevation model (SRTM), within the neighborhood of rivers and water bodies.
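The contextual part of that rule can be expressed as a simple mask combination; the slope and distance thresholds below are illustrative values only, not those used in the study.

```python
import numpy as np
from scipy import ndimage

def floodplain_mask(slope_deg, water_mask, pixel_size_m=30.0,
                    max_slope=2.0, max_dist_m=3000.0):
    """Low-slope pixels within a given distance of rivers or water bodies.

    slope_deg : 2-D slope array derived from the SRTM DEM (degrees).
    water_mask: 2-D boolean array, True where rivers/water bodies were mapped.
    """
    # Distance (metres) from every pixel to the nearest water pixel.
    dist_m = ndimage.distance_transform_edt(~water_mask) * pixel_size_m
    return (slope_deg <= max_slope) & (dist_m <= max_dist_m)

# Irrigation candidates: cultivated in the dry season AND inside the floodplain.
# irrigated = floodplain_mask(slope, water) & dry_season_cultivation_mask
```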
| 6,150.2 | 2019-01-01T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
Charge Carriers Relaxation Behavior of Cellulose Polymer Insulation Used in Oil Immersed Bushing
Cellulose polymer insulation material is widely used in oil-immersed bushings. Moisture is one of the important causes of deterioration of cellulose polymer insulation and seriously threatens the safe and stable operation of the bushing. It is therefore significant to study the polarization and depolarization behavior of oil-immersed cellulose polymer insulation with different moisture conditions under higher voltages. Based on the polarization/depolarization current method and the charge difference method, the polarization/depolarization current, interfacial polarization current and electrical conductivity of the cellulose polymer under different DC voltages and humidities were obtained. Based on molecular dynamics simulation, the effect of moisture on cellulose polymer insulation was analyzed. The results show that the polarization and depolarization currents become larger with increasing DC voltage and moisture. A higher applied voltage accelerates the charge carrier motion, and the ionization of water molecules produces more charge carriers; thus, high DC voltage and moisture content increase the interface polarization current. Increased moisture content results in more charge carriers ionized from water molecules. In addition, the invasion of moisture reduces the band gap of the cellulose polymer and enhances its electrostatic potential, thereby increasing its overall electrical conductivity. This paper provides a reference for analyzing the polarization characteristics of charge carriers in cellulose polymer insulation.
Introduction
The oil-immersed transformer is the core equipment for power transmission in the power grid, and its safety and stability are of great importance to energy security and social stability. According to statistics, bushing failure is one of the main causes of transformer faults, and cellulose polymer paper, the main insulation material of the bushing, is the main contributor to bushing failure once moisture has invaded [1][2][3]. To ensure the safe operation and stability of the transformer, it is necessary to analyze the time-domain relaxation behavior of carriers at high applied voltage with various moisture contents.
To date, some scholars have studied the effect of moisture on cellulose polymer paper. Wenyu Ye et al. found that the structure of the liquid-solid interface is determined by the interaction between insulating oil and cellulose polymer paper, which is based on the van der Waals effect, and that water molecules gather at the interface because of this liquid-solid interaction at certain electric field values [4]. Guanwei Long et al. simulated the distribution of H+ and OH− in oil and cellulose polymer paper insulation when moisture has invaded, explaining the effect of moisture on the cellulose polymer paper and oil insulation from a microscopic point of view [5]. Haoxiang Zhao et al. found that when charge carriers pass through the interface between cellulose polymer paper and oil, the insulating paper has an obstruction effect, while the existence of water molecules produces more charge carriers and reduces this obstruction effect, resulting in lower conductivity of the cellulose polymer paper and different dielectric response characteristics [6]. The above research shows that moisture has a great influence on the dielectric properties of cellulose polymer paper.
In order not to damage the sealing performance of the bushing, nondestructive testing methods based on dielectric relaxation theory have been widely used, among which the polarization and depolarization current (PDC) method and the frequency domain spectroscopy (FDS) method are the most widely used [7][8][9]. Both methods have the advantages of simple operation and rich insulation information. However, FDS is usually used to study the dielectric response characteristics at different frequencies, focusing on the analysis of the dielectric constant and dielectric loss factor. PDC, by contrast, is used to analyze the charge accumulation and charge carrier relaxation behavior of cellulose polymer paper because it can measure the time-domain dielectric response characteristics of insulating oil and cellulose polymer paper. Quanmin Dai et al. found that the dielectric response of the bushing changes when moisture invades, which provided a basis for judging the moisture content of cellulose polymer paper by PDC [10]. Feng Yang tested cellulose polymer paper by PDC and found that the polarization and depolarization currents increase with higher moisture content, and established a relationship between the moisture content of the bushing and the PDC results [11]. T. K. Saha et al. proposed a method to calculate the conductivity of cellulose polymer paper from the polarization and depolarization currents, providing a way to analyze charge accumulation through conductivity [12].
Although some scholars have studied the correlation between the time-domain dielectric characteristic parameters of bushing insulating paper and moisture, most of the research is based on low excitation voltages, resulting in a low signal-to-noise ratio that is easily disturbed by environmental noise. Worse still, the cellulose paper in the bushing operates at a much higher voltage, which means that the traditional low-voltage measurement results and rules are not applicable at higher excitation voltages. In addition, there is little research on analyzing the charge accumulation at the interface of oil-paper insulation in bushings based on PDC measurement results. Therefore, it is necessary to analyze the time-domain relaxation behavior of the bushing insulating paper with various moisture contents at high voltage.
This paper studies the time-domain relaxation behavior of charge carriers in cellulose polymer insulation used in oil-immersed bushings under higher voltage. Firstly, the polarization and depolarization currents of cellulose polymer insulation with different moisture conditions are obtained. Then, the influence of moisture and applied voltage on the interface polarization behavior is studied; the motion of the interface charge carriers during polarization and depolarization, and the time needed for the electrical conductivity to reach a steady state, are analyzed. Finally, based on molecular dynamics simulation, the effect of moisture on the insulation properties of cellulose polymers is analyzed.
Experiment
Cellulose is a natural polymer compound; its chemical structure is a linear polymer composed of many β-D-glucopyranosyl groups connected to each other by 1,4-β glycosidic bonds. Cellulose polymer insulation is mainly composed of cellulose macromolecular chains built from cellulose monomers, as shown in Figure 1. Because cellulose polymer insulation has excellent insulation performance, it is widely used in the field of high voltage insulation. The chemical structure of the cellulose macromolecular chain is shown in Figure 1 [13,14]. In this paper, an oil-immersed bushing with cellulose polymer insulation, with a maximum voltage of 40.5 kV, is tested. The structure diagram of the bushing is shown in Figure 2. The insulation performance of this bushing may be reduced due to moisture in the oil and cellulose polymer insulation. In order to study the influence of moisture on the time-domain relaxation behavior, the cellulose polymer insulation and the insulation oil are treated with moisture absorption before the bushing is manufactured and packaged.
PDC tests at different DC voltages are carried out; the schematic diagram and physical diagram of the PDC test are shown in Figure 3. In this paper, a DC high voltage power supply (AU-20*60, Matsusada, Osaka, Japan) is used to excite the bushing, and a high-precision electrometer (6517B, Keithley, OH, USA) is used to collect the current signal at the end screen of the bushing. The polarization/depolarization process of the bushing cellulose polymer can be further explained by analyzing the changes in the excitation voltage and response current. The charge and discharge processes for the bushing each last 3000 s, the applied voltages are 200 V, 500 V, 1000 V, 2000 V and 4000 V, respectively, and the environmental temperature and relative humidity are 20 °C and 62%, respectively.
(Figure labels: aluminum foil, cellulose polymer insulation, insulation oil, conductive rod.)
Polarization/Depolarization Principle and Charge Analysis for Composite Polymer Medium
By applying and removing DC voltages to the insulating medium, the dielectric response characteristics in the time domain can be extracted from the polarization and depolarization currents based on a PDC test. The schematic diagram is shown in Figure 4. By connecting the switch to point a and then to point b, the insulating medium is charged and then discharged; these two processes are the polarization and depolarization processes of the insulating medium, respectively, and the currents generated in them are called the polarization current i_pol and the depolarization current i_depol, correspondingly.
For the insulating medium, the polarization process mainly includes dipole polarization and interface polarization. Due to the difference in dielectric constant and electrical conductivity of the composite insulating media, free charge will accumulate at the interface and will not dissipate easily, resulting in an asymmetry between the polarization and depolarization processes. The polarization current consists of the conductance current i_conductance, the dipole polarization current i_dipole-pol and the interface polarization current i_interface-pol. The depolarization current mainly consists of the dipole relaxation current i_dipole-depol. In general, the dipole polarization current is equal to the dipole relaxation current. The polarization current and depolarization current are shown in Equations (1) and (2).
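Equations (1) and (2) themselves do not survive in this text. A plausible decomposition consistent with the description above (the component names follow the text; the explicit form is an assumption) is:

```latex
i_{pol}(t) = i_{conductance} + i_{dipole\text{-}pol}(t) + i_{interface\text{-}pol}(t),
\qquad
i_{depol}(t) = i_{dipole\text{-}depol}(t).
```

With the dipole polarization and relaxation terms taken as equal, as stated above, the interface contribution can be isolated as i_interface-pol(t) ≈ i_pol(t) − i_depol(t) − i_conductance, which is presumably how the interface polarization current discussed later is obtained from Equations (1)-(3).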
Electrical conductivity can reflect the degree of moisture content of insulating materials. The charge difference analysis (CDA) method can effectively calculate the electrical conductivity of insulating materials by analyzing how the difference in charge amount between the charging and discharging processes changes [15][16][17][18]. By integrating the difference between the polarization and depolarization current curves over time, the charge amount difference can be obtained, as shown in Equations (3) and (4) [19,20].
Thus, the charge amount difference at any time is given by Equation (5), and the difference between the polarization current and depolarization current at any time is given by Equation (6).
The charge amount difference is numerically the integral of the conductance current over time. As the conductance current changes little with time, the slope k of the charge amount difference as a function of time is approximately constant, as shown in Equation (7).
From Equations (6) and (7), i_dc(t_i) can be expressed as Equation (8). When the polarization and depolarization times are equal and long enough, the current difference at the final moment is equal to the conductance current, as shown in Equation (9). The electrical conductivity σ_r of the composite insulating medium can then be obtained from the conductance current, as shown in Equation (10).
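Since Equations (3)-(10) are not reproduced here, the following is only a numerical sketch of the CDA procedure described above: integrate the current difference, fit the slope of the charge-difference curve, and convert the resulting conductance-current estimate into a conductivity. The final relation sigma = eps0*k/(c0*u0) is a commonly used PDC expression standing in for Equation (10), not a verbatim reproduction of it.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cda_conductivity(t, i_pol, i_depol, u0, c0):
    """Charge-difference analysis (CDA) on sampled PDC currents.

    t      : sample times (s); i_pol, i_depol : current samples (A)
    u0, c0 : applied DC voltage (V) and geometric capacitance (F)
    Returns the charge-difference curve, its fitted slope k (an estimate of
    the conductance current) and a conductivity estimate (assumed relation).
    """
    t = np.asarray(t, float)
    diff = np.asarray(i_pol, float) - np.asarray(i_depol, float)
    # Cumulative trapezoidal integral of the current difference over time.
    q_diff = np.concatenate(([0.0],
                             np.cumsum(0.5 * (diff[1:] + diff[:-1]) * np.diff(t))))
    k = np.polyfit(t, q_diff, 1)[0]   # slope of the charge-difference curve, ~ i_dc
    return q_diff, k, EPS0 * k / (c0 * u0)
```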
Electrical Conductivity Calculation for Cellulose Polymer Insulation
The composite insulation structure in the bushing is composed of aluminum foil and oil-immersed cellulose polymer wrapped closely together, as shown in Figure 5. The electrical conductivity of the cellulose polymer insulation can be obtained by using the simplified X-Y model when the electrical conductivity of the whole insulating material is known [20].
For the simplified X model, the electrical conductivity of the composite insulating structure is given by Equation (11), where X is the thickness ratio.
Since the electrical conductivity of the aluminum foil is much greater than that of the cellulose polymer, and the thickness of the aluminum foil is negligible compared with that of the cellulose polymer insulation, X is approximately equal to 1; the expression for the electrical conductivity of the cellulose polymer insulation can therefore be simplified, as shown in Equation (12).
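Equations (11) and (12) are likewise missing from this text; one plausible reading of the simplification described above, written for a two-layer series (X) model, is:

```latex
\frac{1}{\sigma_{composite}} = \frac{X}{\sigma_{cellulose}} + \frac{1 - X}{\sigma_{foil}},
\qquad
\sigma_{foil} \gg \sigma_{cellulose},\; X \approx 1
\;\Rightarrow\;
\sigma_{cellulose} \approx \sigma_{composite}.
```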
Modeling of Molecular Dynamics Simulation
In order to study the effect of moisture on the insulation properties of cellulose polymer, models of cellulose without and with moisture were constructed using the Materials Studio software [21].
Three cellulose chains with a degree of polymerization of 10 were used as the cellulose polymer insulation model without moisture, while 3.5% water molecules were added to three cellulose chains with a degree of polymerization of 10 to form the cellulose polymer insulation model with moisture. The two cellulose polymer insulation models are shown in Figure 6. First, 10,000 steps of geometric optimization were carried out for the two models using the steepest descent method. Then, the two models were annealed to bring them to the most realistic condition; the annealing temperature was 300-500 K. Under the COMPASS force field, the constant-pressure, constant-temperature (NPT) ensemble, with a constant number of molecules, pressure and temperature, was used to equilibrate each model for 500 ps. Based on density functional theory (DFT), calculations for the two cellulose models were performed with the DMol3 tool in Materials Studio. The geometry optimization, molecular orbitals and electrostatic potential were calculated by employing the PBE functional within the generalized gradient approximation (GGA) for the exchange-correlation term. The double numerical plus polarization (DNP) basis set was applied for the C, H and O atoms in this computational work. The all-electron method was adopted for the core electron calculation.
Polarization and Depolarization Current Characteristics for Cellulose Polymer Insulation
The polarization and depolarization currents of bushings with different moisture conditions and applied voltages are shown in Figures 7-9. For the normal bushing, the polarization current hardly changes with time under any polarization voltage, and the depolarization current decreases rapidly to a stable value in the first 10 s. For the bushing with dampened oil, under any polarization voltage, the polarization current decreases slowly in the first 10 s and then becomes almost stable; the depolarization current decreases rapidly in the first 50 s, the decrease gradually slowing with time before finally stabilizing. For the bushing with dampened cellulose polymer, under any polarization voltage, the polarization current gradually decreases with time over a short interval and then gradually stabilizes; the depolarization current decreases rapidly in the first 100 s, the decrease gradually slowing with time before finally stabilizing.
This phenomenon can be explained by the differing hydrophilicity of oil and cellulose polymer. Since the hydrophilicity of the cellulose polymer is much higher than that of the insulation oil, most water molecules in the insulation oil migrate to the cellulose polymer when moisture invades. Because water molecules are polar, the more moisture infiltrates the cellulose polymer insulation, the more polar molecules are involved in the polarization reaction, leading to a larger polarization intensity difference. The moisture content, on the one hand, increases the electrical conductivity of the insulation system and, on the other hand, enhances the response speed of the interface polarization, aggravating the dielectric asymmetry of the oil-immersed cellulose polymer insulation system. Finally, the insulation resistance decreases, the capacitance increases, and the polarization process is aggravated [22][23][24]. Thus, the polarization and depolarization currents become larger when the moisture content increases at the same applied voltage.
When the applied voltage is lower than 1000 V, the current fluctuates significantly with time. This is because the polarization process of the oil-immersed cellulose polymer insulation system inside a bushing of such a large size cannot be fully stimulated by a low DC voltage. Therefore, the polarization and depolarization currents of the insulation are small and susceptible to interference from environmental noise, which has an obvious influence on the accuracy of the test results. When the applied DC voltage increases, the corresponding current becomes larger and its fluctuation decreases. The development trend of the polarization and depolarization currents is not affected by the applied DC voltage, which means the applied DC voltage has little influence on the general trend of the dielectric polarization process. Thus, a test at high applied DC voltage can not only improve the signal-to-noise ratio but also reflect the polarization process of the bushing more clearly, resulting in more accurate and effective test results.
Interface Polarization Characteristics of Cellulose Polymer Insulation
The interface polarization current of the three bushings can be calculated from Equations (1)-(3), as shown in Figures 10 and 11. Figure 10 shows that the interface polarization currents of the bushings with different moisture conditions decrease to zero with time, while for the normal bushing the interface polarization current does not change significantly and always remains close to zero. Compared with the bushing with dampened oil, the initial interface polarization current of the bushing with dampened cellulose polymer insulation is larger and takes a longer time to decay to zero. Figure 11 shows that, with the increase in applied DC voltage, the interface polarization current increases and the current decreases more slowly with time.
The insulation system of the bushing is composed of aluminum foil and cellulose polymer, whose dielectric constants are different. When a DC voltage is applied to the insulation system, the interface polarization process occurs and an interface polarization current is generated. When the voltage increases, the electric field becomes larger, accelerating the motion of the charge carriers. When moisture infiltrates the insulation medium, the water molecules are ionized under the effect of the electric field, producing more charge carriers. This explains the behavior of the interface polarization current under the influence of moisture content and applied DC voltage. The migration process of charge carriers at the cellulose polymer interface is shown in Figure 12.
Charge Carriers and Electrical Conductivity Analysis of Cellulose Polymer Insulation
The charge difference spectra of the normal bushing, the bushing with dampened oil and the bushing with dampened cellulose polymer paper under different voltages are shown in Figures 13-15, respectively. The results show that the charge difference curves of the three bushings have similar characteristics. According to Equation (8), the slope of the charge difference curve is approximately equal to the conductance current, indicating that the accumulation rate of the charge difference tends to be constant. The higher the excitation voltage, the faster the accumulation rate of the charge difference. However, under the same excitation voltage, the slope of the charge difference curve of the normal bushing is always less than that of the moistened bushings, indicating that water also affects the accumulation rate of the charge difference.
The slopes of the polarization/depolarization charge amount difference at 200 V, 1000 V and 4000 V are shown in Figure 16. The slopes at larger applied DC voltages are much higher and more clearly distinguished than those at lower applied DC voltages, which indicates that the moisture condition can be confirmed from the slope of the charge amount difference curve at high applied DC voltage in a PDC test. The electrical conductivity change behavior of cellulose polymer insulation with different moisture conditions is shown in Figure 17. The electrical conductivity increases with time and tends to be stable after 1000 s; the stable value is the real electrical conductivity σ. The geometric capacitance of the insulation in the bushing is 228 pF, thus the real electrical conductivities of the normal bushing and of the bushings with dampened oil and dampened cellulose polymer insulation are 7.9748 × 10⁻¹³ S/m, 9.7884 × 10⁻¹³ S/m and 1.1672 × 10⁻¹² S/m, respectively. The results indicate that the electrical conductivity increases when the moisture content in the insulation material is higher, leading to deterioration of the insulation performance.
For the normal bushing, the electrical conductivity stabilizes the fastest because the interface polarization current dissipates faster with time, so the conduction current takes less time to stabilize. However, the polarization process is influenced by the invasion of moisture when the insulation material is dampened, resulting in a longer time for the interface polarization current to dissipate and a higher value of the interface polarization current. Additionally, moisture lengthens the time the conductance current takes to become stable and decreases the initial value of the polarization and depolarization charge amount difference. The polarization current increases significantly while the depolarization current is less affected under the influence of moisture, leading to an increase in the slope of the polarization/depolarization charge amount difference. Because the electrical conductivity is linear in the slope of the polarization/depolarization charge amount difference curve, the electrical conductivity of the bushings with dampened oil and dampened cellulose polymer in the stable state is larger than that of the normal bushing. These reasons explain why the normal bushing has a smaller electrical conductivity than the bushings with dampened oil and dampened cellulose polymer insulation, and why its electrical conductivity takes less time to stabilize.
Molecular Dynamics Analysis Based on Band Structure and Electrostatic Potential
The band structures of unmoistened cellulose and moistened cellulose are shown in Figure 18. The band gap width (∆E) of unmoistened cellulose is 5.657 eV, while that of moistened cellulose is 5.546 eV. The wider the band gap, the weaker the electrical conductivity. The band gap of cellulose narrows as its water content increases, so the electrical conductivity of the moistened cellulose polymer insulation is higher.
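As a qualitative rationale only (this relation is not given in the paper), an idealized thermally activated band-conduction picture links conductivity and band gap as

```latex
\sigma \propto \exp\!\left(-\frac{\Delta E}{2 k_{B} T}\right),
```

so a narrower ∆E corresponds to a higher conductivity; in moistened cellulose this band-structure effect acts alongside the additional ionic carriers produced by water, as discussed above.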
The electrostatic potentials of unmoistened cellulose and moistened cellulose are shown in Figure 19. Red indicates positive electrostatic potential and blue negative electrostatic potential. With the increase in water content in cellulose, the water molecules increase the positive electrostatic potential in the model, so the polarity of the model increases.
Due to the existence of a large number of hydroxyl groups in cellulose, these hydroxyl groups can easily form hydrogen bonds with each other. It is precisely because of the existence of hydrogen bonds that cellulose has strong intermolecular and intramolecular interaction forces, and the ability of cellulose to resist external damage is also closely related to the concentration of hydrogen bonds. The molecular simulation results show that the presence of water destroys the intramolecular and intermolecular hydrogen bonds in the cellulose chain, and forms a new hydrogen bond interaction with the oxygen atom on the cellulose hydroxyl group. Water mainly acts on the hydroxyl and glycoside bonds of cellulose and destroys their stability. Therefore, the motion of charge carriers in the insulating paper cellulose is more intense under the action of the electric field. This is consistent with the conclusion in [25] that charge carriers have a significant accelerating effect on the hydrolysis of cellulose.
In conclusion, with the invasion of moisture, the electrical conductivity and polarity of cellulose polymer insulation increase. Under the excitation of a DC electric field, the polarization/depolarization process is more intense, which is reflected in the experiments by the larger polarization/depolarization currents. In addition, the electrostatic potential of the cellulose polymer is stronger after moisture invasion, indicating better charge transport under the action of the electric field, that is, a greater electrical conductivity, which is consistent with the electrical conductivities calculated above for the cellulose polymer insulation in the different oil-immersed bushings.
Conclusions
This paper studies the time domain relaxation behavior of charge carriers for cellulose polymer insulation used in oil-immersed bushings, and the characteristics of the dielectric response and the motion of charge carriers have been analyzed. The conclusions drawn are as follows:
1. The polarization/depolarization current of bushings increases when the applied voltage becomes higher. A high applied voltage can decrease the influence of noise. Moisture aggravates the polarization/depolarization process, leading to higher polarization/depolarization currents.
2. When the applied voltage is higher, the speed of charge carriers increases, leading to an increase of the interface polarization current. Water molecules are ionized under the effect of the electric field, producing more charge carriers and thus increasing the interface polarization current as well. With the increase of the interface current, the relaxation behavior is further intensified.
3. The charge amount difference between the polarization and depolarization currents grows linearly with time, and the slope becomes larger with increasing moisture, which is more obvious under a high applied voltage. Water molecules produce more charge carriers after being ionized and provide more paths for charge movement.
4. With the invasion of moisture, the band gap of cellulose polymer insulation becomes narrower and its electrostatic potential increases, which improves the electrical conductivity of the cellulose polymer insulation, and the polarization characteristics under a DC electric field become more significant.
"Physics"
] |
Crystal Structure of DNA Replication Protein SsbA Complexed with the Anticancer Drug 5-Fluorouracil
Single-stranded DNA-binding proteins (SSBs) play a crucial role in DNA metabolism by binding and stabilizing single-stranded DNA (ssDNA) intermediates. Through their multifaceted roles in DNA replication, recombination, repair, replication restart, and other cellular processes, SSB emerges as a central player in maintaining genomic integrity. These attributes collectively position SSBs as essential guardians of genomic integrity, establishing interactions with an array of distinct proteins. Unlike Escherichia coli, which contains only one type of SSB, some bacteria have two paralogous SSBs, referred to as SsbA and SsbB. In this study, we identified Staphylococcus aureus SsbA (SaSsbA) as a fresh addition to the roster of the anticancer drug 5-fluorouracil (5-FU) binding proteins, thereby expanding the ambit of the 5-FU interactome to encompass this DNA replication protein. To investigate the binding mode, we solved the complexed crystal structure with 5-FU at 2.3 Å (PDB ID 7YM1). The structure of glycerol-bound SaSsbA was also determined at 1.8 Å (PDB ID 8GW5). The interaction between 5-FU and SaSsbA was found to involve R18, P21, V52, F54, Q78, R80, E94, and V96. Based on the collective results from mutational and structural analyses, it became evident that SaSsbA’s mode of binding with 5-FU diverges from that of SaSsbB. This complexed structure also holds the potential to furnish valuable comprehension regarding how 5-FU might bind to and impede analogous proteins in humans, particularly within cancer-related signaling pathways. Leveraging the information furnished by the glycerol and 5-FU binding sites, the complexed structures of SaSsbA bring to the forefront the potential viability of several interactive residues as potential targets for therapeutic interventions aimed at curtailing SaSsbA activity. Acknowledging the capacity of microbiota to influence the host’s response to 5-FU, there emerges a pressing need for further research to revisit the roles that bacterial and human SSBs play in the realm of anticancer therapy.
Introduction
Nucleobases play a pivotal role as fundamental constituents within nucleic acids, orchestrating the replication of genetic information across all biological systems [1].The accurate synthesis of nucleotides stands as a critical linchpin for the survival and proliferation of both eukaryotic and prokaryotic cells [2].The structural alteration of nucleobases holds the potential to exert substantial impacts, displaying potent biological effects [3].Throughout an expansive range encompassing anticancer [4], antiviral [5], antibacterial [6], anti-inflammatory [7], and antitumor activities [8], numerous derivatives of uracil have garnered longstanding employment.One standout exemplar in this domain is the FDAapproved anticancer agent, 5-fluorouracil (5-FU) [4].In 5-FU, the hydrogen situated at the C5 position of uracil is supplanted by a fluorine atom, culminating in a fluoropyrimidine configuration.This modification empowers 5-FU to effectively target the enzyme thymidylate synthase (TSase) for anticancer chemotherapy [9].The cytotoxic influences of 5-FU emerge through its adept ability to impede the operation of TSase, induce RNA miscoding, and activate apoptosis.In the dynamic landscape of drug development, although numerous novel agents have been conceived, 5-FU persists as a cornerstone within the arsenal of chemotherapeutic modalities.It prominently features in systemic treatments for an array of cancers spanning the gastrointestinal tract, breast, head, and neck [9].Notably, 5-FU manifests diverse interactions with over a dozen distinct proteins, including dihydropyrimidinase (DHPase) [10], an enzyme involved in the pyrimidine degradation [11].Strikingly, 5-FU-induced toxicity has been found in asymptomatic patients with DHPase deficiency.Such patients, undergoing anticancer therapy with 5-FU, confront severe toxicity manifestations, including instances of mortality [12].Beyond the confines of human genetics [12,13] and the associations with human gene products [14], the intricate web of host-microbiota interactions ushers in additional dimensions to the 5-FU narrative [15][16][17].Evidently, the active gut microbiota, equipped with the capacity to synthesize bromovinyluracil, can exert profound regulatory influences on systemic 5-FU concentrations [17,18].This unforeseen modulation results in an adverse outcome, as demonstrated by a tragic occurrence where 5-FU exposure triggered the demise of 16 patients in Japan [17,18].Given the intricate tapestry of interactions that envelop 5-FU, the imperative emerges to comprehensively construct its interactome.Such a feat is essential for in-depth analyses of clinical pharmacokinetics and toxicity profiles [19][20][21][22].The holistic elucidation of the multifaceted relationships woven by 5-FU holds the promise of enhancing our grasp of its intricate dynamics, paving the way for refined therapeutic strategies and personalized medicine.To attain this objective, the initial stride involves the identification of additional proteins that interact with 5-FU.
DNA is subject to constant challenges, such as replication errors, DNA damage, and the formation of secondary structures [23].To ensure its integrity and stability, cells have evolved an intricate network of proteins involved in DNA metabolism [24][25][26].Singlestranded DNA (ssDNA)-binding protein (SSB) is one of central players in these processes, safeguarding and manipulating ssDNA during various cellular events [27].Through its multifaceted roles in DNA replication, recombination, repair, replication restart, and other cellular processes, SSB emerges as a critical player in maintaining genomic integrity [28][29][30].SSB binds specifically to ssDNA with high affinity, preventing its re-annealing, protecting it from nucleases, and promoting its accessibility to other DNA-binding proteins [31].SSBs are conserved across organisms, from bacteria [32] to humans [33,34], highlighting their fundamental importance.SSBs have undergone extensive investigation in eubacteria, with a notable focus on the Escherichia coli SSB (EcSSB) [31].The majority of SSBs adopt homotetrameric configurations for activity [35].This arrangement involves four oligonucleotide/oligosaccharide-binding folds (OB folds) coalescing to constitute a DNA-binding domain [36,37].Beyond their DNA-binding functions, SSBs also establish interactions with a multitude of DNA-metabolism proteins, collectively forming the SSB interactome [38].EcSSB comprises two primary domains: an N-terminal ssDNAbinding/oligomerization domain (SSBn) and a flexible C-terminal domain for proteinprotein interactions (SSBc).Within EcSSBc, a further division into two sub-domains emerges, namely an intrinsically disordered linker (IDL) and an acidic tip [39].Bacterial SSBs from distinct sources share moderate sequence homology, especially within the SSBn domain, encompassing roughly the initial 110 residues.The ubiquity of this homology designates SSBn as a prospective common target for devising inhibitors against various SSBs [40][41][42][43].Diverging from E. coli, which harbors a solitary SSB variant (EcSSB), certain bacteria like Staphylococcus aureus exhibit a more complex scenario with the presence of three paralogous SSBs, specifically referred to as SaSsbA [44,45], SaSsbB [46], and SaSsbC [47].Of these, SaSsbA may be an EcSSB counterpart due to its possession of an acidic tip and sequence resemblance to EcSSBn.Given this alignment, the ongoing pursuit of molecules that bind to and inhibit SaSsbA holds notable promise for future applications in combating pathogens [44,48].Although the identification of SaSsbB as a protein that binds to 5-FU is established [49], it holds significance to investigate whether 5-FU can also interact with SaSsbA, and delving into the mode of this interaction is a valuable pursuit.
Although 5-FU has been under investigation for over six decades [50,51], its interactions with proteins remain challenging to predict.In this study, we identified SaSsbA as a fresh addition to the roster of 5-FU binding proteins, and, thus, the 5-FU interactome was extended to include this essential DNA replication protein.In our pursuit of comprehending the precise interaction sites between 5-FU and SaSsbA, we solved the complexed crystal structure at a resolution of 2.3 Å (PDB ID 7YM1).The structure of SaSsbA complexed with glycerol was also determined at 1.8 Å (PDB ID 8GW5).The binding sites within the structure of this OB-fold protein emerge as prime contenders for potential drug design efforts.To further characterize the binding affinity and confirm the interacting sites of 5-FU with SaSsbA, fluorescence quenching and mutational analysis were conducted.In addition, we also compared the 5-FU binding modes between SaSsbA and SaSsbB.The interaction of SaSsbB with 5-FU relies on specific residues T12, K13, T30, F48, and N50 [49], and these residues remain conserved within SaSsbA [45,46].Given that SaSsbA exhibits structural parallels with SaSsbB, one might initially infer that 5-FU's binding capabilities would align, and the 5-FU binding mode of SaSsbA would echo that of SaSsbB.However, our investigation unveiled a disparity between their 5-FU binding sites.Accordingly, this complexed crystal structure unfurls a molecular insight into the distinct manner in which the anticancer drug 5-FU binds to a cognate protein, even when the structural scaffold appears analogous yet divergent.
Sequence Analysis of SaSsbA
According to the nucleotide sequence available on NCBI, the projected monomeric SaSsbA protein spans 167 amino acid residues (aa), with a calculated molecular mass of 19 kDa. The alignment consensus of sequenced SSB homologs, facilitated by ConSurf analysis, delineated the extent of variability at distinct positions along the sequence (Figure 1). It became apparent that the aa 107-148 segment (teal) in SaSsbA lacks conservation within the SSB homologs. Drawing from insights garnered from the EcSSB-ssDNA complex [52], it emerges that four crucial aromatic residues, namely W40, W54, F60, and W88, universally preserved within most SSB families as F/Y/W, engage in ssDNA binding through stacking interactions. In SaSsbA, the corresponding residues manifest as F37, F48, F54, and Y82; a notable absence in SaSsbA is the W residue. Moreover, EcSSB's significant C-terminal tail DDDIPF, which plays a role in protein-protein interaction, undergoes a modification to DDDLPF in SaSsbA. The GGRQ motif postulated as a regulatory switch for ssDNA binding [53] is represented in SaSsbA by the GGQR motif (aa 112-115). The PXXP motifs within EcSSB, positioned at aa 139 (PQQP), 156 (PQQS), and 161 (PAAP) and acknowledged for their role in mediating protein-protein interactions [39], are notably absent in SaSsbA (Figure 1). While SaSsbA is presumed to be a primary SSB homolog similar to EcSSB in structure and function, divergences arise in their gene locations [45]. In the genetic map of S. aureus, the ssbA gene resides flanked by the rpsF (encoding ribosomal protein S6) and rpsR (encoding ribosomal protein S18) genes, all encompassed within one operon regulated by the SOS response [54]. The scenario deviates in E. coli, where the ssb gene is found adjacent to the uvrA gene, positioned distant from that S. aureus operon. Instead, the priB gene (coding for PriB, an ssDNA-binding protein) [55] is flanked by the rpsF and rpsR genes in the genetic map of E. coli. The rationale behind the necessity for the evolution of distinct SSBs in particular species remains a subject that warrants elucidation.
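As a rough, purely illustrative counterpart to the ConSurf-style variability profile described above, the sketch below computes a naive per-column conservation score (the fraction of sequences sharing the majority residue) from a multiple sequence alignment read with Biopython. The alignment file name is hypothetical, and simple majority fractions ignore the phylogeny-aware weighting that ConSurf actually applies.

```python
from collections import Counter

from Bio import AlignIO  # Biopython

# Hypothetical file: a Clustal-format alignment of SSB homologs.
alignment = AlignIO.read("ssb_homologs.aln", "clustal")

n_seqs = len(alignment)
for column_index in range(alignment.get_alignment_length()):
    column = alignment[:, column_index]          # residues in this column, as a string
    residue, count = Counter(column).most_common(1)[0]
    conservation = count / n_seqs                # 1.0 = fully conserved, low = variable
    if conservation == 1.0:
        print(f"position {column_index + 1}: {residue} fully conserved")
```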
Crystallization of the Glycerol-Bound SaSsbA and the SaSsbA-5-FU Complex
SaSsbA with a His tag was overexpressed in E. coli through heterologous expression and subsequently isolated from the soluble supernatant using Ni 2+ -affinity chromatography.The purified SaSsbA was concentrated to a concentration of 20 mg/mL, with the addition of the cryoprotectant glycerol to attain a final concentration of 25%.This glycerol-enhanced formulation allowed for storage at −20 • C. Previously, we observed that crystals of apo-SaSsbA could be cultivated at room temperature using the hanging drop vapor diffusion method in a solution composed of 22% PEG 4000, 100 mM HEPES, and 100 mM sodium acetate at pH 7.5 [45].Efforts were initially directed towards soaking and cocrystallization of SaSsbA (20 mg/mL) with 5-FU (200 µM) under identical crystallization conditions to those for apo-SaSsbA.The goal was to obtain crystals of the SaSsbA-5-FU complex, yet these attempts proved unfruitful.Subsequent rescreening was undertaken utilizing commercial crystallization kits.Under these new conditions (JBScreen Classic 2, Jena Bioscience, Jena, Germany), crystals of the SaSsbA-5-FU complex emerged at room temperature within a mixture containing 16% PEG 4000, 100 mM Tris-HCl, and 200 mM MgCl 2 at pH 8.5.For glycerol-bound SaSsbA, crystallization occurred in a solution containing 30% PEG 4000, 100 mM HEPES, and 200 mM CaCl 2 at pH 7.5 (Table 1).
Crystal Structure of Glycerol-Bound SaSsbA
The crystal structure of glycerol-bound SaSsbA was successfully determined at a resolution of 1.8 Å (Table 1).The crystals of glycerol-bound SaSsbA belonged to the P4 1 2 1 2 space group, featuring cell dimensions with a = 88.22Å, b = 88.22Å, and c = 58.00Å.This structure of SaSsbA with glycerol (PDB ID 8GW5) was elucidated via molecular replacement, utilizing the apo-SaSsbA as a model (PDB ID 5XGT).Examination of a Ramachandran plot revealed no presence of unallowed regions (outliers) within this structure.While SaSsbA is typically active as tetramers [45], this glycerol-bound structure contained only two monomers of SaSsbA within each asymmetric unit (Figure 2A).In contrast to the crystal structure of apo-SaSsbA [45], this structure displayed two additional amino acid residues, namely P105 and K106, as indicated by the blue mesh (Figure 2A), solely in the subunit A of SaSsbA.The C-terminal region in SaSsbA, comprising aa 107-167 in the subunit A and aa 105-167 in the subunit B, was not observed.This suggests that the C-terminal region of SaSsbA exhibits dynamic behavior, a feature reminiscent of EcSSB [56].For this SaSsbA dimer, the electron density mostly exhibited satisfactory quality.Nonetheless, some sections remained disordered and unobserved, including aa 38-44 (loop L 23 ) in subunit A and aa 40-42 (loop L 23 ) in subunit B within the ternary structure of this glycerol-bound SaSsbA.In congruence with the apo form, the overall architecture of this glycerol-bound SaSsbA monomer remained characteristic of an OB-fold structure, marked by a β-barrel (composed of 5 β-strands) crowned with an α-helix.It is noteworthy that SaSsbA did not encompass the β6 strand, a component found in numerous other SSBs such as those from E. coli [52], Salmonella enterica [41], Klebsiella pneumoniae [53], and Pseudomonas aeruginosa [40,42,46,57,58].In tetrameric SSBs, the β6 strand has been proposed to be involved in mediating diverse protein-DNA and protein-protein interaction specificities among distinct SSBs [59].Within this structure of an SaSsbA dimer, two glycerol molecules (designated as Glycerol 1 and Glycerol 2) were present.However, these glycerol molecules exhibited distinct binding patterns to SaSsbA (see below).
Crystal Structure of SaSsbA Complexed with 5-FU
The complexed crystal structure of SaSsbA with 5-FU (PDB ID 7YM1) was successfully determined at a resolution of 2.3 Å (Table 1) with molecular replacement employing the apo-SaSsbA as a model (PDB ID 5XGT).The crystals of the SaSsbA-5-FU complex were categorized under the P4 1 2 1 2 space group, showcasing cell dimensions of a = 88.09Å, b = 88.09Å, and c = 57.78Å.The completeness exceeded 99%.Upon scrutinizing a Ramachandran plot, there were no regions featuring unallowed conformations (outliers) within this structure.The electron density mostly exhibited satisfactory quality.However, aa 39-44 (loop L 23 ) in subunit A and aa 39-41 (loop L 23 ) in subunit B within the ternary structure of the SaSsbA-5-FU complex were disordered and unobserved.In both subunits, the range of aa 105-167 was also not detected.The individual monomer within this complexed SaSsbA revealed the characteristic OB-fold structure, featuring a β-barrel comprising 5 β-strands capped with an α-helix.Within this complex structure of a SaSsbA dimer, a single glycerol molecule and one 5-FU molecule were encapsulated (Figure 2B).The positioning of this glycerol molecule, designated as Glycerol 3, within the SaSsbA-5-FU complex closely mirrored that of Glycerol 1 within the glycerol-bound SaSsbA.
Glycerol 1 Binding Mode of SaSsbA
The presence of the cryoprotectant glycerol within the protein solution led to its binding to SaSsbA. In this study, three distinct glycerol binding sites were found within our two structures. Notably, the binding sites for Glycerol 1 and Glycerol 3 exhibited similarity. As revealed by the crystal structure (PDB ID 8GW5), Glycerol 1 is sandwiched between SaSsbA monomers A and B (Figures 2A and 3A). A comprehensive analysis of the interactions transpiring between Glycerol 1 and SaSsbA was undertaken, resulting in the identification of multiple residues that came within contact distance (<4 Å) of the glycerol molecule. Among these interacting residues were F48 (Subunit B), N50 (Subunit B), S79 (Subunit A), R80 (Subunit A), F91 (Subunit A), V92 (Subunit A), and T93 (Subunit A). Delving into the nature of these interactions, hydrogen bonds formed between the ligand and SaSsbA were examined by leveraging PLIP (the protein-ligand interaction profiler) [60]. In light of the interactions identified through PLIP, it was established that the main chains of R80 and V92, along with the side chains of S79 and T93, partook in hydrogen bonding with Glycerol 1 (Figure 3B).
Glycerol 2 Binding Mode of SaSsbA
Similar to Glycerol 1, which finds itself ensconced between SaSsbA monomers A and B, Glycerol 2 also forms interactions with both monomers (Figure 3C).Nonetheless, it is important to note that their binding poses and spatial arrangements between Glycerol 1 and 2 differed significantly.The interacting residues associated with Glycerol 1 and Glycerol 2 displayed complete variation (PDB ID 8GW5).Specifically, M1 (Subunit A), L2 (Subunit A), N3 (Subunit A), R4 (Subunit A), T36 (Subunit B), F37 (Subunit B), R76 (Subunit A), and D98 (Subunit A) were observed to participate within contact distance (<4 Å) in binding interactions with Glycerol 2 (Figure 3D).A notable aspect of Glycerol 2 binding was the inclusion of a water molecule in the interaction.This water molecule, in conjunction with M1 (Subunit A) and T36 (Subunit B), also contributed to interactions with 5-FU, facilitated through a hydrogen bonding network.The structural evaluation conducted via PLIP underscored that the main chains of M1, N3, and T36, as well as the side chain of R76, were pivotal components in the hydrogen bonding network, fostering the binding of Glycerol 2 (Figure 3D).
5-FU Binding Mode of SaSsbA
SsbA, a crucial DNA replication protein, plays multifaceted roles in nucleic acid metabolism [45,54,61,62]. Prior to this study, it remained uncertain whether the FDA-approved clinical drug 5-FU [4], renowned as a prominent pyrimidine derivative in anticancer therapy, could indeed interact with SsbA. Consequently, the complexed crystal structure of SaSsbA with 5-FU was established to pinpoint the binding site and delve into the binding mechanism (Figure 4A). The electron density corresponding to 5-FU exhibited well-defined clarity, and the arrangement of 5-FU was discernible, notably due to the positioning of its substituent (Figure 4A). A comprehensive scrutiny was carried out to decipher the interactions between 5-FU and SaSsbA (Figure 4B). Residues R18, P21, V52, F54, Q78, R80, E94, and V96, positioned within a contact distance of <4 Å, were instrumental in the binding of 5-FU. Through analysis conducted using PLIP [60], it was revealed that a water molecule also participated in the binding of 5-FU, facilitated by E94 in SaSsbA, which engaged in water-molecule-mediated hydrogen bonding (Figure 4B). Based on interactions discerned via PLIP, the four side chains of R18, Q78, R80, and E94 were observed to form hydrogen bonds with 5-FU (Figure 4B). The electrostatic potential surface of SaSsbA complexed with 5-FU unveiled that 5-FU effectively occupied the groove within SaSsbA (Figure 4C), a site significant for single-stranded DNA binding (Figure 4D). The positive (blue) and negative (red) charge distributions underscored that several critical basic residues on the SaSsbA surface, which are exposed to the solvent, collectively form a binding pathway conducive to accommodating ssDNA. This complex structure insightfully revealed that the presence of 5-FU in the groove potentially influences the wrapping of ssDNA by SaSsbA.
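The contact-residue lists quoted above (everything within 4 Å of the ligand) can be reproduced approximately from the deposited coordinates. The sketch below is a minimal example using Biopython's NeighborSearch; the file name and the ligand residue name ("URF" is used here as the assumed PDB component ID for 5-fluorouracil) should be checked against the actual 7YM1 entry, and a tool such as PLIP would additionally classify the interaction types.

```python
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("7YM1", "7ym1.pdb")  # local copy of the entry
model = structure[0]

# Collect ligand atoms (assumed component ID "URF" for 5-FU) and standard protein atoms.
ligand_atoms = [a for a in model.get_atoms() if a.get_parent().get_resname() == "URF"]
protein_atoms = [a for a in model.get_atoms() if a.get_parent().id[0] == " "]

search = NeighborSearch(protein_atoms)
contacts = set()
for atom in ligand_atoms:
    for residue in search.search(atom.coord, 4.0, level="R"):  # residues within 4 angstroms
        contacts.add((residue.get_parent().id, residue.get_resname(), residue.id[1]))

for chain_id, resname, resnum in sorted(contacts):
    print(f"chain {chain_id} {resname}{resnum}")
```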
Comparative Analysis of 5-FU Binding Sites in Different Binding States of SaSsbA
In this study, we identified that residues R18, P21, V52, F54, Q78, R80, E94, and V96 within SaSsbA engage in interactions with 5-FU.Furthermore, upon comparing the complexed structures of SaSsbA monomers A and B (PDB ID 7YM1), distinct binding site configurations emerged between the 5-FU-bound state (monomer A) and the Glycerol 3-bound state (monomer B).This comparison revealed noteworthy conformational changes (Figure 5A).When accommodating 5-FU (Figure 5B), the positions of R18 and P21 experienced shifts of approximately 5.2 and 7.3 Å, respectively.Moreover, V52, F54, Q78, and E94 exhibited angular shifts of 180, 22, 23, and 52 • , respectively, due to 5-FU binding.In addition, the sizes of the binding groove differed between monomer A (the 5-FU-bound state) and monomer B (the Glycerol 3-bound state) (Figure 5C).While the OB folds exhibited a similar appearance, it was observed that their sizes were divergent.Structurally, the angles between strands β1 and β4 in monomer A and B were measured at 41.2 • and 65.9 • , respectively (Figure 5C).
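The inter-strand angles quoted above (41.2° and 65.9° between β1 and β4) can be estimated by treating each strand as a vector between the Cα atoms of its first and last residues and taking the angle between the two vectors. The sketch below shows this calculation on placeholder coordinates; the coordinates are hypothetical, and the authors' exact measurement protocol may differ.

```python
import numpy as np

def strand_vector(ca_start: np.ndarray, ca_end: np.ndarray) -> np.ndarray:
    """Direction of a beta-strand approximated by its first and last C-alpha atoms."""
    return ca_end - ca_start

def angle_between(v1: np.ndarray, v2: np.ndarray) -> float:
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

# Placeholder C-alpha coordinates (angstroms) standing in for strands beta-1 and beta-4.
beta1 = strand_vector(np.array([10.0, 5.0, 2.0]), np.array([18.0, 9.0, 3.0]))
beta4 = strand_vector(np.array([12.0, 4.0, 8.0]), np.array([17.0, 12.0, 9.0]))
print(f"inter-strand angle: {angle_between(beta1, beta4):.1f} degrees")
```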
Comparative Structural Analysis of 5-FU Binding Sites in the 5-FU-Bound and Unbound States of SaSsbA
Having previously determined the crystal structure of the apo-form of SaSsbA [45], we have a basis for comparing the structural aspects of the 5-FU binding sites between the 5-FU-bound state (PDB ID 7YM1) and the unbound state (PDB ID 5XGT) of SaSsbA.Through superimposition of these structures, R18 in the apo-form SaSsbA was significantly shifted by a distance of 7.2 Å and an angle of 107.3 • upon binding of 5-FU (Figure 5D).Accordingly, the side chain of R18 is likely a pivotal element in facilitating 5-FU binding.
Structure-Based Mutational Analysis
Our complex structure of SaSsbA with 5-FU has elucidated the binding mode and identified the interactive residues. Notably, a substantial conformational change was observed in R18 upon 5-FU binding, resulting in a shift of its side chain position by 7.2 Å and an angular alteration of 77.4°. This suggests that R18 might play a role in initiating or mediating the 5-FU binding process. Since R18 is highly conserved among SSB homologs (Figure 1), we generated an alanine substitution mutant (Table 2), which was subsequently purified and analyzed to investigate its contribution to binding (Figure 6). The strength of interaction between the R18A mutant and 5-FU was assessed through fluorescence quenching and compared to that of the wild-type SaSsbA (WT). The quenching phenomenon involves the formation of a complex that diminishes the protein's fluorescence intensity. SaSsbA exhibited prominent intrinsic fluorescence, with a peak wavelength at 339 nm upon excitation at 279 nm (Figure 6A). As increasing concentrations of 5-FU were added to the SaSsbA solution, the intrinsic fluorescence underwent gradual quenching (Figure 6A). Upon introducing 200 µM of 5-FU, the intrinsic fluorescence of SaSsbA was reduced by 83.8% (Table 3). The binding of 5-FU induced a red shift of the SaSsbA emission wavelength from 339 nm to 347 nm (about 8 nm), as indicated by the change in λmax (Table 3). These observations collectively confirm the formation of a stable complex between SaSsbA and 5-FU. Comparable to SaSsbA, the R18A mutant also exhibited strong intrinsic fluorescence, with a peak wavelength at 339 nm upon excitation at 279 nm (Figure 6B). However, the addition of 200 µM 5-FU resulted in only a 34.9% reduction in the intrinsic fluorescence of R18A (Table 3). Furthermore, the λmax of R18A shifted only minimally from 339 nm to 340 nm upon exposure to 200 µM 5-FU. Analysis of the titration curves (Figure 6C) facilitated the determination of Kd values of 497.6 ± 13.5 µM for R18A and 55.9 ± 0.7 µM for WT (Table 3). These experimental findings from structural and functional investigations collectively underscore the significance of R18 in SaSsbA as a crucial residue for 5-FU binding.
Distinct 5-FU Binding Modes in SaSsbA and SaSsbB
Unlike the case of E. coli, which possesses a singular type of SSB (EcSSB), certain bacteria, particularly some Gram-positive bacteria [54], harbor two paralogous SSBs, namely SsbA and SsbB.Recently, we elucidated the crystal structure of SaSsbB [46], as well as its complex with 5-FU [49].Consequently, the complex structure of SaSsbB is available for a comparative analysis of the 5-FU binding mode in relation to SaSsbA (PDB ID 7YM1) and SaSsbB (PDB ID 7D8J).Given the structural similarity between SaSsbA and SaSsbB, one might naturally assume a congruent 5-FU binding mode.However, intriguingly, our complexed structure revealed distinct 5-FU binding configurations for SaSsbA and SaSsbB (Figure 7).In the complexed structure, specific residues including T12, K13, T30, F48, and N50 of SaSsbB were identified to interact with 5-FU, forming an integral part of the binding site (Figure 7A) [49].Particularly noteworthy is the essential stacking interaction between the pyrimidine ring of 5-FU and the aromatic ring of F48 in SaSsbB, which underpins the drug-protein interaction.While these residues are entirely conserved in SaSsbA, they did not engage in interactions with 5-FU (Figure 7A).Upon superposition analysis, a considerable distance of 18.8 Å was evident between these divergent 5-FU binding sites (Figure 7B).Furthermore, the residues in SaSsbA that interact with 5-FU are similarly conserved in SaSsbB; however, they do not collectively form a 5-FU binding site in SaSsbB (Figure 7C).Given the perfect conservation of these 5-FU binding residues in both SaSsbA and SaSsbB, the substantial dissimilarity observed warrants further investigation to ascertain whether other inherent species-specific differences contribute to this phenomenon.Additional biophysical studies are warranted to comprehensively explore these disparities.crystal structures are colored in gray.The residues responsible for 5-FU binding in SaSsbA and SaSsbB are shaded in orange and green, respectively.(B) Superposition of the 5-FU-bound structures of SaSsbA (hot pink) and SaSsbB (aquamarine).The 5-FU molecules in SaSsbA (PDB ID 7YM1) and SaSsbB (PDB ID 7D8J) are represented in orange and purple-blue, respectively.The distance between these distinct 5-FU binding sites measures 18.8 Å. (C) Disparate 5-FU binding sites.Despite the perfect conservation of these 5-FU binding residues in SaSsbA (gray) and SaSsbB (aquamarine), each protein exhibits a preferred binding site for 5-FU.
Discussion
Metabolic reprogramming is the strategy adopted by cancer cells to expedite their proliferation, resist the effects of chemotherapy, invade tissues, metastasize, and endure within nutrient-scarce microenvironments [2].Various uracil derivatives have long been harnessed as pyrimidine-based antimetabolites in the battle against cancer [63].Chief among these agents is 5-FU [4], a prominent fluoropyrimidine drug esteemed for its role in targeting TSase during anticancer chemotherapy [9].Over the past 60 years, chemotherapeutic agents designed to thwart thymidylate biosynthesis have emerged as stalwarts in cancer treatment.In addition, the synergistic administration of 5-FU alongside other chemotherapeutic agents amplifies treatment efficacy and overall survival rates, particularly in cancers involving the head, breast, and neck [64].However, some mechanistic details, including signaling pathways, remain unexplained [65][66][67].It is important to recognize that the purview of 5-FU's interactions goes beyond merely engaging human TSase.For example, other human proteins, including dihydroorotase, PARP (procyclic acidic repetitive protein), VEGFR1 (vascular endothelial growth factor receptor 1), and CASP-3 (caspase-3 protein), are also known to interact with 5-FU [14,68].Moreover, the intricate interplay between microbiota and chemotherapeutic drugs, such as 5-FU, holds the potential to influence host responses, further adding to the complexity of the landscape [17].Consequently, a comprehensive elucidation of the complete 5-FU interactome is imperative, serving as the foundation for exhaustive clinical pharmacokinetic assessments and toxicity analyses [21,22].The holistic elucidation of the multifaceted relationships woven by 5-FU holds the promise of enhancing our grasp of its intricate dynamics, paving the way for refined therapeutic strategies and personalized medicine.
In this study, our findings have unveiled SaSsbA's capacity to engage in interaction with the anticancer drug 5-FU (Figure 6).In comparison with SaSsbB, a paralogous protein of SaSsbA in S. aureus, the K d values for 5-FU binding to SaSsbA and SaSsbB are 55.9 ± 0.7 µM (Table 3) and 152.8 ± 2.5 µM [49], respectively.Based on the fact that the K d value of human dihydroorotase bound to 5-FU is 91.2 ± 1.7 µM [14], the hierarchy of binding affinities for 5-FU can be delineated as follows: SaSsbA > human dihydroorotase > SaSsbB.This outcome may imply that, in scenarios where 5-FU enters the human system, it exhibits a preference for binding to the bacterial DNA replication protein SaSsbA within bacterial cells, as opposed to its interaction with the human enzyme dihydroorotase.However, it is important to underscore that this supposition necessitates comprehensive validation through a thorough investigation spanning biochemical and cellular dimensions.Considering the potential diversity of the gut microbiome across individuals, it remains imperative to ascertain the binding affinities of 5-FU to any feasible proteins within the human body, encompassing locales like the gastrointestinal tract and bloodstream.Such investigations are crucial to facilitate comprehensive comparisons and subsequent clinical analyses.
For the investigation of the binding mode, we solved the complexed crystal structure of SaSsbA with 5-FU at a resolution of 2.3 Å (Table 1).The interaction between 5-FU and SaSsbA was found to involve R18, P21, V52, F54, Q78, R80, E94, and V96 (Figure 4).Unexpectedly, this pattern of interactive residues deviated entirely from those identified in the 5-FU binding sites of SaSsbB [49].In contrast to SaSsbA, where the interaction with 5-FU relies on a distinct set of residues, SaSsbB's 5-FU interaction hinges on T12, K13, T30, F48, and N50 (Figure 7).Notably, the stacking interaction between the aromatic ring of F48 and the pyrimidine ring of 5-FU assumes a pivotal role in the drug-protein interaction within SaSsbB [49].Intriguingly, despite the presence of these identical residues within both SaSsbA and SaSsbB, none of them (T12, K13, T30, F48, and N50) participate in 5-FU binding within SaSsbA.Biochemically, reconciling the divergence in 5-FU binding sites, despite the residue conservation between SaSsbA and SaSsbB, presents a challenge.Although the OB folds share a striking visual resemblance, we noted a subtle difference in the size of the binding groove between SaSsbA and SaSsbB.This structural divergence is underscored by the angles between strands β1' and β4 in their monomers A, which measure 41.2 • and 65.9 • , respectively (Figure 5).This variance in binding groove width may potentially influence the mechanisms governing 5-FU binding.A noteworthy observation pertains to the substantial conformational alteration that accompanies 5-FU binding (Figure 5).This observation led us to propose a hypothesis wherein these seemingly identical residues result in divergent 5-FU binding modes.Within various contexts, OB folds can exhibit broad ligand-binding capabilities, targeting both single-stranded DNA and proteins [35].This is evident in cases such as the tumor suppressor BRCA2, where two OB folds bind to ssDNA while a third participates in protein-protein interactions rather than ssDNA binding [69].ssDNA bound by Pseudomonas aeruginosa SSB (PaSSB) only occupies half of the binding sites of two OB folds rather than four OB folds through the ssDNA-binding mode (SSB) 3:1 [57,58].Similarly, in RPA, two different binding modes involve two and four OB folds, respectively [70].Insights gleaned from the SaSsbA-glycerol complex structure (Figure 3) indicate that Glycerol 1 and 2 do not necessarily need to occupy corresponding sites within monomers A and B. Previous experimental observations through single molecule experiments [71,72] also suggest that the unoccupied OB fold within SSB could adopt an open conformation to facilitate sliding, and, therefore, the ligand-binding groove within its OB-fold structure could be regulated to accommodate the requirements of dynamic binding processes (Figure 7).Consequently, it becomes plausible to consider that even when similar sites exist, 5-FU may bind to different locations due to the influence of these adaptable binding grooves.
Differing from some enzymes characterized by a single active site, the binding behavior of the DNA replication protein SaSsbA, which engages with diverse ssDNA and proteins, can introduce unpredictability into its ligand binding site(s). A multitude of solvent-exposed surfaces on SaSsbA functions as binding sites for both ssDNA and partner proteins, further complicating the task of foreseeing which specific pocket serves as the binding site. The prospect of utilizing docking tools, like MOE Dock [73], to anticipate a protein's ligand binding site presents itself as a viable avenue to explore. The analysis conducted through MOE Dock highlighted five preferred binding modes (Figure 8). However, none of these predicted sites (Table 4) aligned precisely with the binding site evident in the complexed crystal structure of SaSsbA with 5-FU (Figure 2). Therefore, it becomes evident that the generation of additional complexed crystal structures remains imperative to facilitate more robust binding analyses and to support the development of structure-based approaches to drug design.
It is believed that all current cells trace their lineage back to a shared ancestor, suggesting that fundamental principles learned from experiments conducted with one cell type possess broad applicability across diverse cells. This perspective implies that the mechanisms governing essential cellular activities, such as DNA replication, transcription, and translation, should exhibit similarities across various cell types. Nonetheless, in response to challenging environmental circumstances, organisms tend to evolve new enzymes or auxiliary components to enhance their survival prospects and adaptive capabilities throughout evolutionary processes. In contrast to the situation in E. coli and many other bacteria, which feature a single SSB, certain microorganisms like S. aureus and other Gram-positive bacteria manifest multiple paralogous SSBs, including SsbA [74], SsbB [59], and SsbC [47]. Intriguingly, the positioning of the ssbA gene in the S. aureus genetic map does not align with the location of the ssb gene in E. coli and other Gram-negative bacteria [53]. Notably, this corresponding position in E. coli is occupied by priB [53], another variant of SSB [55,75]. Given the diversity of primosomal proteins with which these SSBs interact [76,77], SaSsbA and EcSSB confront an array of binding partners within their respective cellular contexts. This intricacy suggests that the presence of these distinct SSBs might necessitate their co-evolution with partner proteins, enabling the development of species-specific functions to address survival demands and secure a competitive edge. This co-evolutionary dynamic might elucidate the lack of conservation in PXXP motifs and amino acid residues within the IDL among different SSBs, including SaSsbA (Figure 1) [39]. Furthermore, intriguing disparities come to light, such as myricetin's inhibitory effect on PaSSB but not on SaSsbA [44] or Klebsiella pneumoniae SSB [40]. Even within proteins that share structural similarities, as demonstrated by SaSsbA and SaSsbB, their 5-FU binding modes exhibit complete divergence (Figure 7). Consequently, it is conceivable that 5-FU could bind to these distinct SSBs present in both humans and microorganisms, subsequently influencing various cellular signaling pathways. Despite these observations, additional research is indispensable to elucidate the precise mechanisms underpinning the recognition of 5-FU binding sites and the rationale behind the evolution of these diverse SSBs within specific species.
Protein Expression and Purification
The expression vector pET21b-SaSsbA [45] was transformed into E. coli BL21 (DE3) cells, which were grown in LB medium at 37 °C. Overexpression was induced by incubation with 1 mM isopropyl thiogalactopyranoside for 9 h. Recombinant SaSsbA was purified from the soluble supernatant by Ni2+-affinity chromatography. The recombinant protein was eluted with a linear imidazole gradient and dialyzed against a dialysis buffer (20 mM Tris-HCl and 0.1 M NaCl, pH 7.9; Buffer A). Protein concentration was measured using the Bio-Rad protein (Bradford) assay. Protein purity was >97% as determined by SDS-PAGE.
Site-Directed Mutagenesis
The SaSsbA mutant was generated according to the QuikChange site-directed mutagenesis kit protocol (Stratagene; La Jolla, CA, USA), using the wild-type plasmid pET21b-SaSsbA as a template. The presence of the mutation was verified by DNA sequencing. The recombinant mutant protein was purified by Ni2+-affinity chromatography using the protocol for wild-type SaSsbA.
Crystallization Experiments
Purified SaSsbA was concentrated to 20 mg/mL, with addition of the cryoprotectant glycerol to a final concentration of 25%, for storage at −20 °C. Crystals of the SaSsbA-5-FU complex appeared at room temperature through hanging drop vapor diffusion in 16% PEG 4000, 100 mM Tris-HCl, 200 µM 5-FU, and 200 mM MgCl2 at pH 8.5. For the SaSsbA-glycerol complex, the crystals were grown in 30% PEG 4000, 100 mM HEPES, and 200 mM CaCl2 at pH 7.5. These crystals reached full size in 7-13 days and were validated at beamline 15A of the National Synchrotron Radiation Research Center (NSRRC; Hsinchu, Taiwan).
X-ray Diffraction Data and Structure Determination
Data were collected at beamline 15A of the NSRRC using a Rayonix MX300HE CCD area detector. Data sets were indexed, integrated, and scaled using HKL-2000 [78] and XDS [79]. Initial phasing, density modification, and model building were performed using the AutoSol program in PHENIX [80]. Iterative model building and structure refinement were performed using Refmac in the CCP4 software suite (version 7.1.008) [81] and phenix.refine in the PHENIX software suite (Phenix 1.19.1-4122) [82]. The initial phase of SaSsbA complexed with 5-FU was determined with the molecular replacement software Phaser-MR (Phenix 1.19.1-4122) [83], using SaSsbA (PDB ID 5XGT) as the search model. The stereochemistry of the models was verified using MolProbity [84].
Fluorescence Quenching
The Kd value of purified SaSsbA was determined using the fluorescence quenching method previously described for DHOase [85] and DHPase [86,87]. Briefly, an aliquot of the compound was added to a solution containing SaSsbA (1 µM) and 50 mM HEPES at pH 7.0. SaSsbA displayed strong intrinsic fluorescence with a peak wavelength of 339 nm when excited at 279 nm at 25 °C. The decrease in the intrinsic fluorescence of SaSsbA was measured at 339 nm with a spectrofluorometer (Hitachi F-2700; Hitachi High-Technologies, Tokyo, Japan). The Kd was obtained using the equation ΔF = ΔFmax − Kd(ΔF/[compound]), where [compound] is the concentration of the added ligand. Data points are an average of 2-3 determinations within a 10% error.
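To illustrate how a Kd can be extracted from quenching data with the equation above, the sketch below performs a linear least-squares fit of ΔF against ΔF/[compound] (slope = −Kd, intercept = ΔFmax). The titration values are synthetic placeholders, not the measured data from this study.

```python
import numpy as np

# Synthetic titration data: ligand concentration (uM) and observed fluorescence decrease.
conc_um = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
delta_f = np.array([13.0, 26.0, 40.0, 55.0, 66.0])  # arbitrary units

# Rearranged single-site model: dF = dF_max - Kd * (dF / [compound])
x = delta_f / conc_um
slope, intercept = np.polyfit(x, delta_f, 1)

kd_um = -slope
delta_f_max = intercept
print(f"Kd ~ {kd_um:.1f} uM, dF_max ~ {delta_f_max:.1f}")
```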
MOE-Dock Analysis
The binding analysis of 5-FU to SaSsbA was carried out using MOE-Dock (version 2019.0102) [73], which was also used to calculate the binding capacity. The crystal structure of SaSsbA (PDB ID 5XGT) was used [45]. Before docking, any water molecules present in the crystal structure were removed using MOE. To ensure accuracy, a 3D protonation step followed by energy minimization was applied to add hydrogen atoms to the protein structure. The binding modes were generated and predicted with the MOE-Dock tool and visualized in PyMOL.
Conclusions
SsbA represents a captivating molecular apparatus that orchestrates a multitude of indispensable processes vital for maintaining DNA integrity [88].This study has revealed SaSsbA's hitherto unknown capability of binding to the anticancer drug 5-FU, thereby expanding the roster of proteins within the 5-FU interactome to encompass this pivotal DNA replication protein (Figure 6).In light of the results derived from mutational and structural analyses, it became evident that SaSsbA's mode of binding with 5-FU diverges from that of SaSsbB (Figure 7).Given the insights offered by the glycerol and 5-FU binding sites (Figures 3 and 4), our complexed SaSsbA structures underscore the likelihood that several of these interactive residues could be suitable targets for drug interventions aimed at inhibiting SaSsbA activity.This complexed structure also holds the potential to furnish valuable comprehension regarding how 5-FU and its pyrimidine derivatives might bind to and impede analogous OB-fold proteins in humans, particularly within cancer-related signaling pathways [89].Acknowledging the capacity of microbiota to influence the host's response to 5-FU, there emerges a pressing need for further research to revisit the roles that bacterial and human SSBs play in the realm of anticancer therapy.
Figure 1 .
Figure1.Sequence analysis of SaSsbA reveals an alignment consensus of sequenced SSB homologs through ConSurf analysis, which effectively showcases the extent of variability exhibited at each position along the sequence.In this depiction, amino acid residues that span a wide range of variability are depicted in teal, while those that remain highly conserved are marked in burgundy.Residues F37, F48, F54, and Y82, which potentially participate in ssDNA binding through stacking interactions, are denoted by asterisks.Of significance is the corresponding GGRQ motif found in SaSsbA, which is highlighted within a red box.The PXXP motifs seen in EcSSB are notably absent in SaSsbA.Another notable contrast arises in the critical C-terminal tail.While the DDDIPF sequence of EcSSB plays a vital role in protein-protein interaction, this sequence takes the form of DDDLPF in SaSsbA.
Figure 2 .
Figure 2. Crystal Structures of SaSsbA.(A) The crystal structure of SaSsbA featuring two glycerol molecules is displayed.The SaSsbA monomer adopts an OB-fold structure, composed of a β-barrel comprising 5 β-strands, capped with an α-helix.These two glycerol molecules are designated as Glycerol 1 (splitpea) and Glycerol 2 (green).Different SaSsbA monomers are colored in light pink and light blue.In contrast to the crystal structure of apo-SaSsbA, this complexed structure displayed two additional amino acid residues, namely P105 and K106, as indicated by the blue mesh.(B) The complexed crystal structure of SaSsbA with one 5-FU molecule and one glycerol molecule is presented.Different SaSsbA monomers are colored in hot pink and deep blue.Notably, 5-FU was localized solely in monomer A, with no presence in monomer B. The glycerol molecule is labeled as Glycerol 3 (forest).
Figure 3 .
Figure 3. Glycerol binding modes.(A) The binding site of Glycerol 1 within SaSsbA was unveiled through the structure of glycerol-bound SaSsbA (PDB ID 8GW5).Glycerol 1 (splitpea) was positioned between SaSsbA monomers A (light pink) and B (light blue).Residues engaging with Glycerol 1 are colored in yellow.The composite omit map (presented as blue mesh, contoured at 1 σ) indicated the presence of Glycerol 1 within a cavity formed at the SaSsbA monomers A and B interface.(B) Depiction of the binding mode for Glycerol 1. Residues F48 (Subunit B), N50 (Subunit B), S79 (Subunit A), R80 (Subunit A), F91 (Subunit A), V92 (Subunit A), and T93 (Subunit A), situated within a contact distance of <4 Å, were instrumental in binding Glycerol 1.The interactive distances are also shown in Å.Based on interactions detected by PLIP, hydrogen bonds were formed between the main chains of R80 and V92, as well as the side chains of S79 and T93, and Glycerol 1 (indicated in black).(C) The binding site of Glycerol 2 within SaSsbA was unveiled by the structure of glycerol-bound SaSsbA (PDB ID 8GW5).Similar to Glycerol 1, Glycerol 2 (green) interacted with both SaSsbA monomers A (light pink) and B (light blue).However, the binding poses and locations between Glycerol 1 and Glycerol 2 exhibited distinctions.(D) Depiction of the binding mode for Glycerol 2.Within a contact distance of <4 Å, M1 (Subunit A), L2 (Subunit A), N3 (Subunit A), R4 (Subunit A), T36 (Subunit B), F37 (Subunit B), R76 (Subunit A), and D98 (Subunit A) were engaged in binding Glycerol 2. Structural analysis via PLIP revealed hydrogen bonds formed between the main chains of M1, N3, and T36, as well as the side chain of R76, and Glycerol 2. (E) The binding site of Glycerol 3 within SaSsbA was revealed by the SaSsbA-5-FU complex structure (PDB ID 7YM1).Glycerol 3 (forest) was nestled between SaSsbA monomers A (hot pink) and B (deep blue).The binding mode for Glycerol 3 exhibited similarities to that of Glycerol 1. 5-FU (orange) was also showcased within this structure.(F) Depiction of the binding mode for Glycerol 3. Through interactions detected using PLIP, it was determined that hydrogen bonds were formed between the main chains of R80 and V92, along with the side chain of T93, and Glycerol 3.
Figure 4 . 5 -
Figure 4. 5-FU interaction mode.(A)The binding site for 5-FU within SaSsbA was unveiled through the SaSsbA-5-FU complex structure (PDB ID 7YM1).This complexed structure of an SaSsbA dimer contained one glycerol molecule (Glycerol 3; forest) and one 5-FU molecule (orange).The residues engaged in interactions with 5-FU are depicted in gray.An orange mesh, contoured at 1 σ, illustrates the presence of 5-FU within the groove of SaSsbA monomer A (hot pink).The electron density for these interactive residues is also distinctly visible (light pink mesh, contoured at 1 σ).(B) Depiction of the binding mode for 5-FU.Residues R18, P21, V52, F54, Q78, R80, E94, and V96, situated within a contact distance of <4 Å, played pivotal roles in binding with 5-FU.The corresponding interactive distances are also indicated (Å).Based on the interactions identified via PLIP, the side chains of R18, Q78, R80, and E94 engaged in hydrogen bonding with 5-FU (highlighted in black).(C)The electrostatic potential surface portrayal of the SaSsbA complexed with 5-FU elucidates the distribution of positive (blue) and negative (red) charges.Notably, 5-FU (orange) occupies a groove within SaSsbA, potentially pertinent to ssDNA binding.(D) The superimposed structures of the SaSsbA-5-FU complex and the EcSSB-ssDNA complex (PDB ID 1EYG).The crystal structures of SaSsbA and EcSSB exhibit similarity.For clarity, the EcSSB structure is omitted.The distribution of positive (blue) and negative (red) charges showcases a collection of fundamental basic residues on the surface of SaSsbA, which are exposed to the solvent and collectively form a binding pathway conducive for accommodating ssDNA binding (gold).Our complex structure highlights that the 5-FU presence within the groove may potentially affect and regulate the ssDNA wrapping phenomenon by SaSsbA.
Figure 5 .
Figure 5. Comparative analysis of 5-FU binding sites in different states of SaSsbA.(A) Superposition of monomer A (hot pink) and monomer B (deep blue) within the 5-FU complexed SaSsbA structure (PDB ID 7YM1).Residues in monomer A and B are depicted in gray and yellow, respectively, with the presence of a 5-FU molecule shown in orange.(B) Comparison of the 5-FU binding sites between the 5-FU-bound and glycerol 3-bound states of SaSsbA.Binding of 5-FU led to spatial shifts of R18 and P21 by distances of 5.2 and 7.3 Å, respectively.Additionally, V52, F54, Q78, and E94 underwent angular shifts of 180, 22, 23, and 52 • , respectively, upon 5-FU binding.(C) Evaluation of the sizes of the binding groove in monomer A and B. Despite the comparable appearance of their OB folds, variations in their sizes were noted.Notably, the structural angles between strands β1 and β4 in monomer A and B were found to be 41.2 and 65.9 • , respectively.(D) Superposition of the 5-FU-bound (hot pink; PDB ID 7YM1) and unbound (pale yellow; PDB ID 5XGT) states of SaSsbA.Residues within the 5-FU-bound and unbound states are presented in gray and wheat hues, respectively.A 5-FU molecule is visualized in orange.Remarkably, the side chain of R18 experienced a considerable shift, spanning a distance of 7.2 Å and an angular shift of 107.3 • upon 5-FU binding.
Table 2. Primers used for construction of the plasmid. The designated mutation sites are indicated.
Figure 7. Distinct 5-FU binding modes in SaSsbA and SaSsbB. (A) Alignment of the sequences of SaSsbA and SaSsbB, with secondary structural elements indicated. Unobserved amino acids in these
Figure 8. Molecular docking of 5-FU with SaSsbA. Comparison between 5-FU molecules extracted from the complexed SaSsbA structure and the outcomes of docking simulations. SaSsbA monomers
Table 1. Data collection and refinement statistics.
Table 3. Binding parameters of SaSsbA WT and the mutant R18A. | 11,244.8 | 2023-10-01T00:00:00.000 | ["Medicine", "Chemistry"] |
Somatic PRKAR1A mutation in sporadic atrial myxoma with cerebral parenchymal metastases: a case report
Background Atrial myxomas are generally considered benign neoplasms. The majority of tumors are sporadic and less than 10% are associated with an autosomal dominant condition known as the Carney complex, which is most often caused by germline mutation in the gene PRKAR1A. Whether this gene plays a role in the development of sporadic myxomas has been an area of debate, although recent studies have suggested that some fraction of sporadic tumors also carry mutations in PRKAR1A. Extra-cardiac complications of atrial myxoma include dissemination of tumor to the brain; however, the dissemination of viable invasive tumor cells is exceedingly rare. Case presentation We present here a 48-year-old white woman who developed multiple intracranial hemorrhagic lesions secondary to tumor embolism that progressed to ‘false’ aneurysm formation and invasion through the vascular wall into brain parenchyma 7 months after resection of an atrial myxoma. Whole exome sequencing of her tumor revealed multiple mutations in PRKAR1A not found in her germline deoxyribonucleic acid (DNA), suggesting that the myxoma in this patient was sporadic. Conclusions Our patient illustrates that mutations in PRKAR1A may be found in sporadic lesions. Whether the presence of this mutation affects the clinical behavior of sporadic tumors and increases risk for metastasis is not clear. Regardless, the protein kinase A pathway which is regulated by PRKAR1A represents a possible target for treatment in patients with metastatic cardiac myxomas harboring mutations in the PRKAR1A gene.
Background
Primary intracardiac neoplasms are rare tumors, with an estimated prevalence of between 0.02 and 0.25% of the population [1][2][3]. Atrial myxomas originating in the left atrium make up most of these tumors [4,5]. Greater than 90% of atrial myxomas are sporadic and the rest are the result of a hereditary condition known as the Carney complex. Carney complex is inherited in an autosomal dominant fashion due in most cases to inactivating mutations of the PRKAR1A gene and is characterized by pigmented lesions of the skin, myxomas (cardiac and cutaneous), and multiple endocrine tumors [6]. Mutations in PRKAR1A were previously not thought to be responsible for the development of isolated, sporadic cardiac myxomas, but genetic alterations in this gene have recently been identified in a minority of such tumor samples [7,8]. Cardiac myxomas are considered to be benign and cured by complete surgical resection of the cardiac lesion. Recurrences have been observed, but are much more likely to be seen in cases of familial myxoma (12-22% recurrence rate) than sporadic myxoma (1-3%) [4,9].
Despite their generally benign nature, cardiac myxomas may have devastating consequences due to their location and ability to spread through the blood. Embolic events occur in 30 to 40% of patients with cardiac myxomas and the central nervous system (CNS) is the most frequent site of embolism [4,10,11]. This generally manifests as ischemic events, but aneurysmal dilation due to tumor invasion into cerebral vessel walls and resultant intracerebral hemorrhages are also seen [12]. In very rare cases, metastatic disease with clear-cut invasion into the CNS parenchyma is observed. The mechanism behind viable, invasive tumor cell dissemination to the CNS from tumors with benign histopathology is not well understood. There is currently no evidence to the best of our knowledge that patients with tumors due to Carney complex are more likely to experience metastatic consequences. Due to the very small number of patients experiencing CNS metastases, there is no standardized management when they do occur.
Here we describe a rare case of a patient who developed progressive brain metastases as a delayed consequence of tumor embolism a year after removal of an atrial myxoma. Whole exome sequencing of tumor tissue from her heart and brain revealed multiple somatic mutations in PRKAR1A. Additional tissue from an earlier benign cystadenoma and buccal swab showed no similar mutation. Our patient had no known family history of myxomas or Carney complex.
Clinical course
Our patient is a 48-year-old white woman with a previous history of multiple ovarian cystadenomas requiring a total hysterectomy and bilateral oophorectomy. Her initial neurological symptoms started in April 2016, when she developed daily headaches. Magnetic resonance imaging (MRI) done at an outside institution showed small scattered fluid-attenuated inversion recovery (FLAIR) hyperintensities and areas of susceptibility weighted imaging (SWI) signal intensity in her bilateral occipital lobes, right frontal lobe, and left parietal lobe (see Fig. 1a-n). They were believed at the time to be a consequence of prior trauma. In September 2016, she reported continued headaches and significant fatigue, which prompted workup with a transthoracic echocardiogram (TTE). This revealed a 3.5 × 2.5 × 2.5 cm mass within the left atrium which was felt to most likely represent a myxoma. She underwent successful resection of the lesion shortly thereafter (at another institution), with the outside pathology report confirming an atrial myxoma. Postoperative TTE showed no concern for residual disease. She was well until April 2017, when she experienced recurrent headaches that were now associated with new symptoms of fingertip numbness. A repeat MRI was performed (Fig. 1) and revealed multiple small, enhancing hemorrhagic lesions throughout her bilateral parietal, frontal, and occipital lobes. She then underwent conventional angiography which revealed multiple "mycotic-like" aneurysms in the right anterior and middle cerebral arterial distributions and left middle and posterior cerebral arterial distributions. An infection workup and repeat TTE were both negative. The lesions were followed with serial imaging; 6 months later in October 2017, she experienced an acute event with severe headache, left-sided visual field changes, and severe dizziness while driving, necessitating calling for emergency assistance. She was then transferred to our institution via helicopter for continued management.
An examination at the time of transfer was remarkable for short-term memory impairment as well as an incomplete homonymous hemianopia to the left. There were no appreciable skin lesions. Repeat imaging showed a significant increase in size of the previously noted multiple enhancing hemorrhagic lesions, with surrounding edema (Fig. 1). A systemic workup, including computed tomography (CT) of her chest, abdomen, and pelvis, transesophageal echocardiography (TEE), and CT cardiac angiography, revealed no evidence of additional lesions or recurrent cardiac tumor.
A repeat cerebral angiogram was performed and again showed multiple "mycotic-like" aneurysms, a few of which had decreased in size since the prior examination. Given the unusual nature of her case, it was decided to obtain a brain biopsy to guide treatment. Examination of the histologic sections of the resected myxoma and recuts of the paraffin blocks requested from the outside institution confirmed the diagnosis and revealed clusters of rounded hyperchromatic cells displaying a lack of cohesion, forming small collections of cells separating from the surface of the myxoma (Fig. 2a-g). A biopsy of a right frontal lobe lesion confirmed suspicion of cerebral metastasis from the myxoma and myxomatous infiltration of cerebral blood vessels resulting in pseudoaneurysm formation and tumor cell migration into the parenchyma (Fig. 3; see "Histopathologic analysis" section for detailed description). Based on available reports in the literature and the rapid enlargement of her CNS lesions, she underwent whole brain radiation with hippocampal sparing (3750 cGy in 15 fractions). In light of recent studies, 6 months of memantine was administered to facilitate the prevention of cognitive dysfunction following whole brain radiotherapy [13,14]. An MRI performed 3 months post radiation (March 2018) showed decreased size of the dominant occipital lesions and stability of the remaining lesions.
The lesions have remained stable on the most recent MRIs: December 2018 (1 year post treatment) and June 2019 (18 months post treatment). At the present time, our patient is engaged in cognitive, ocular, and physical therapy to ameliorate the disabilities caused by the cerebral metastases and her subsequent treatments. Whole exome analysis was then performed on tissue from cardiac myxoma, brain metastasis, and ovarian cystadenoma revealing somatic mutations in PRKAR1A within the cardiac tumor and brain metastasis, but not in the tissue from the ovarian cystadenomas.
Histopathologic analysis
Cardiac surgery at an outside hospital revealed a 3.5 × 2.5 × 2.5 cm myxoma consisting of a complex papillary structure comprising grape-like clusters organized into an arborizing network (Fig. 2). Tumor tissue obtained from the brain biopsy in an imaging-confirmed area at Yale New Haven-Smilow Hospital was studied by microscopy and immunohistochemistry. Staining with an anti-calretinin antibody revealed an arterial vessel with its endothelial lining replaced by calretinin-positive myxoma cells that had infiltrated focally into adjacent brain parenchyma (Fig. 3a, b). The subendothelium of the vessel was replaced with myxoid matrix and myxoma cells that were seen invading through the media, breaching the internal and external elastic lamina (dashed lines; Fig. 3b, c). The myxoma tumor cells also stained positively for SMA and CD31 (data not shown).

Fig. 1 a-n Timeline of patient: representative images. Representative images from April 2017 (7 months after resection of atrial myxoma) with fluid-attenuated inversion recovery (a, b) and gradient recalled echo sequences (c, d) showing small hemorrhagic lesions throughout the bilateral frontal, parietal, and occipital lobes. Follow-up imaging from October 2017 shows interval increase in size of numerous lesions on fluid-attenuated inversion recovery (e, f) and susceptibility weighted imaging (g, h). i (right internal carotid artery) and j (left internal carotid artery) are representative images from a cerebral angiogram done in October 2017, demonstrating multiple mycotic aneurysms. Fluid-attenuated inversion recovery (k, l) and susceptibility weighted imaging (m, n) sequences from repeat magnetic resonance imaging of the brain in March 2018 (post-radiation).

Fig. 2 a-g Left atrial myxoma. a In situ photograph of the myxoma at the time of surgery consisting of a complex papillary structure comprising grape-like clusters organized into an arborizing network. b Photograph of the excised atrial myxoma consisting of a tree-like structure with several arborizing branches. c Six-micron section of the myxoma whole mount illustrating the arborizing network of grape-like clusters converging on a fibrous stalk. d, e Low (d) and high (e) power images illustrating lepidic cells lining the surfaces (d) and cells in a myxoid matrix forming abortive clusters of vessel-like structures (e). f, g Intermediate (f) and high (g) power images of occasional clusters of rounded cells at and near the surfaces displaying a lack of cohesion, forming small groups of cells separating from the surfaces of the myxoma.
In 1997 our patient underwent bilateral ovarian biopsies at an outside hospital, from which the diagnosis of papillary serous cystadenomas was made. The slides were reviewed at the time of her current admission and the diagnosis confirmed. Sections cut from the requested paraffin blocks of the cardiac myxoma and the ovarian tissues from the outside hospitals were utilized for whole exome sequencing.
Whole exome sequencing
Methods
Deoxyribonucleic acid (DNA) was extracted from tumor tissue microdissected from sections after deparaffinization. The concentration of tumor cells in the regions dissected was estimated from adjacent sections cut from the paraffin blocks. DNA from buccal swabs (Isohelix) was prepared by methods similar to those used for tissue, except for skipping deparaffinization and reversal of formalin crosslinking. The DNA was resuspended in a buffer solution (10 mM Tris, 0.1 mM EDTA) and DNA concentration determined using the Qubit® 2.0 Fluorometer (Thermo Fisher Scientific).
Libraries of DNA fragments for sequencing were prepared by multiplex polymerase chain reaction (PCR) of the genomic DNA. The Ion AmpliSeq™ Exome RDY 4×2 panel (Thermo Fisher Scientific) was used for whole exome sequencing of the heart valve, left ovary, right ovary, and brain samples. The amounts of input genomic DNA for whole exome sequencing were 80 ng for the heart valve, 100 ng for the brain, and 60 ng for the ovarian sample. The Ion AmpliSeq Comprehensive Cancer Panel (Thermo Fisher Scientific) (primer pools 2 and 3 only) was used to prepare libraries for targeted DNA sequencing of the heart valve and right ovary (10 ng input DNA for each pool).
The PCR, amplicon digestion, and barcode adapter ligation were performed using Eppendorf® Mastercycler® Pro S Thermal Cyclers (Eppendorf). Libraries were quantitated by quantitative PCR (qPCR) using the Ion Library TaqMan® Quantitation Kit (Thermo Fisher Scientific) and the ViiA 7 real-time PCR system (Thermo Fisher Scientific). DNA libraries from tumor and buccal swab germline DNA were separately barcoded, mixed in a 3:1 tumor to germline ratio, and sequenced together.
Sequences obtained for fragments were aligned to the hg19 reference sequence using the Ion Torrent Suite™ software version 5.2 (Thermo Fisher Scientific). Variant calling and annotation were performed by the Ion Reporter™ software version 5.0 (Thermo Fisher Scientific) using the AmpliSeq™ Exome tumor-normal pair workflow and the AmpliSeq™ CCP single sample workflow for whole exome and targeted sequencing, respectively. Called variants were individually inspected and evaluated using the Integrative Genomics Viewer (IGV).
Results
The microdissected tissue from the heart was estimated to contain approximately 90% tumor cells. The whole exome sequencing of DNA from this lesion had a mean read depth of 223.3×, along with a mean read depth of 171.8× for the accompanying germline DNA. The assembled sequence showed 46 somatic mutations (variants not found in the germline DNA collected by buccal swab) within or closely bordering the coding sequences of 39 different genes. Attention was focused on 12 mutations that were non-synonymous and had a variant allelic fraction (VAF) of ≥ 10% with a read depth of > 50×. Four of these mutations involved the PRKAR1A gene (one missense mutation, one insertion, one deletion, and a 5× amplification; Table 1). In addition, a synonymous mutation was found within PRKAR1A. None of the other mutations were in genes that have been generally implicated in tumor cell proliferation or survival.
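For readers less familiar with somatic variant triage, the filtering criteria just described (non-synonymous change, VAF of at least 10%, read depth above 50×) can be expressed as a short script. This is an illustrative sketch of our own, not the authors' actual pipeline; the dictionary field names and example records are hypothetical.

```python
# Illustrative sketch of the variant triage criteria described above
# (non-synonymous, VAF >= 10%, read depth > 50x). Not the authors' pipeline;
# field names and example records are hypothetical.

def passes_filters(variant, min_vaf=0.10, min_depth=50):
    """Return True if a somatic call meets the reporting criteria."""
    is_nonsynonymous = variant["effect"] not in ("synonymous", "intronic")
    vaf = variant["alt_reads"] / variant["total_reads"]
    return is_nonsynonymous and vaf >= min_vaf and variant["total_reads"] > min_depth

somatic_calls = [
    {"gene": "PRKAR1A", "effect": "missense",   "alt_reads": 249, "total_reads": 863},
    {"gene": "PRKAR1A", "effect": "synonymous", "alt_reads": 300, "total_reads": 600},
]
print([v["gene"] for v in somatic_calls if passes_filters(v)])  # keeps only the missense call
```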
Three of the mutations in PRKAR1A (the missense and synonymous mutations, and a five-base pair deletion) were tightly clustered within the gene and were present at comparable VAFs (44-55%). The phase of a 20-base pair insertion present at 12% could not be directly determined with respect to these three mutations based on next generation sequence analysis, in which DNA is sequenced in small fragments (average read length 175 nucleotides).
To confirm the presence of the PRKAR1A mutations and better assess the VAFs for these alterations, targeted sequencing was carried out on the same lesional DNA using pools of multiplex primers from the Ion AmpliSeq™ Comprehensive Cancer Panel that separately covered the regions of the mutations within the gene. The mutations detected by whole exome sequencing were again found by targeted sequencing, with roughly the same VAF for the five-base pair deletion and at a slightly reduced level for the missense mutation (VAF 41%, 250/617 reads and 29%, 249/863 reads, respectively). The 20-base pair insertion was found at an identical VAF (12%, 1027/8409 reads). Resequencing of DNA from normal tissue using the Comprehensive Cancer Panel failed to detect any of the PRKAR1A mutations, with an average read depth of 1852 covering the regions containing those mutations.
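As a point of clarification (our own restatement of the numbers above, not additional data), the quoted VAF values are simply the fraction of reads carrying the mutant allele at each site:

$$ \mathrm{VAF} = \frac{\text{mutant reads}}{\text{total reads}}, \qquad \frac{250}{617} \approx 41\%, \quad \frac{249}{863} \approx 29\%, \quad \frac{1027}{8409} \approx 12\%. $$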
The lesional tissue examined from the brain had a much lower concentration of tumor cells, estimated to be approximately 20%. All of the PRKAR1A mutations identified in the heart tissue were present in the brain tissue, albeit at much lower VAFs (the 5-base pair deletion at 7%, the missense and synonymous mutations at 6% each, and the 20-base pair insertion at 1.9%).
Mutations within the PRKAR1A gene were not detected in DNA from the cystadenoma of the right ovary using the Comprehensive Cancer Panel. Attempts to sequence DNA extracted from the much less abundant tissue available from the cystadenoma of the left ovary yielded results that did not meet quality metrics.

Table 1. PRKAR1A mutations detected within the tissues. CCP Ion AmpliSeq™ Comprehensive Cancer Panel, CNV copy number variation, VAF variant allelic fraction, WES whole exome sequencing. Read depth is the total number of reads covering the site of the mutation, except for the amplification, for which read depth refers to the mean read depth across the entire amplified region. The mean read depth for the amplified region in normal DNA was 107. The amplified region in both the heart and brain lesions contains PRKAR1A and the gene WIPI1.
Discussion and conclusions
In general, atrial myxomas are pathologically benign lesions; however, their malignant potential has been described in small case series and individual case reports in the literature [4,9,[15][16][17][18][19][20][21][22][23][24][25]. The CNS is the most common location for remote growth of myxomatous material. Autopsy series and retrospective reviews of patient cohorts have estimated the incidence of cerebral parenchymal metastases to be between 1.8 and 4.5% of patients with myxomas [12,15]. Risk factors leading to metastasis are poorly understood and the clinical and pathological features of patients with cerebral parenchymal metastases are varied. In several case reports of myxoma metastases, metastatic lesions appeared more cellular and pleomorphic than the original tumor, suggesting there had been malignant transformation [26]. In another report of a patient with sarcomatous-appearing metastases and a history of benign cardiac myxoma, a retrospective review of the initial cardiac tissue revealed an area at the periphery of the tumor that had more malignant features, suggesting a possible "malignant myxoma" at the outset [15]. Other patients had metastatic disease at distant sites from typical benign-appearing myxomas without any aggressive features on histopathology [16,21].
As demonstrated in our case, myxoma cells from tumor emboli to the CNS vasculature can attach to the endothelial wall, weaken the endothelium, and invade the elastic lamina, leading to vascular wall dilatation and so-called "oncotic aneurysm" formation [27]. The natural history of these aneurysms is not certain given their rarity and varied outcomes. Lesions have been reported to behave anywhere on the spectrum from self-resolving to enlarging and may cause both ischemic and hemorrhagic events [28,29]. In rare circumstances, as evidenced by our patient's pathology, myxoma cells may penetrate through the vessel wall of these aneurysms and into the surrounding brain tissue, leading to the formation of parenchymal lesions. This mechanism has been demonstrated previously and other case reports of brain metastases describe cerebral aneurysms on imaging in association with parenchymal lesions [12,30]. However, not all reported cases of brain metastases describe these imaging findings. Factors affecting the ability of tumor cells to disseminate to the brain, penetrate blood vessel walls, and proliferate within the brain remain to be identified. Prior work has suggested that production of IL-10 by the tumor may play a role, but further studies are needed [25,31].
Genetic analysis of our patient's tissue showed multiple mutations within the PRKAR1A gene in both the brain metastasis and cardiac tumor sample, but not within tissue derived from a benign ovarian cystadenoma or a buccal DNA swab. While ovarian cystadenomas may be a feature of Carney complex, the absence of PRKAR1A mutations in the cystadenoma suggests that her heart tumor was a sporadic myxoma. The failure to find the PRKAR1A mutations in the material collected by buccal swab is consistent with this conclusion, although there is a very small possibility that our patient is mosaic for a mutation at a very low level of mutant cells and that the buccal mucosa and ovarian tissues were spared the mutation present in other somatic tissues. The PRKAR1A gene encodes the type 1A regulatory subunit of the cyclic adenosine monophosphate (cAMP)-dependent enzyme protein kinase A (PKA) and functions as a tumor suppressor [32]. The PRKAR1A protein exerts a repressive effect on the kinase activities of the PKA complex. Mutations inactivating this subunit lead to increased cell proliferation in cAMP-responsive tissues and are believed to play a role in tumorigenesis. All of the intragenic mutations detected within the PRKAR1A gene in this case (with the exception of the synonymous mutation) should inactivate the protein encoded by the allele containing the mutation. Both indel mutations result in frameshifts and the missense mutation is predicted to have a deleterious effect on the structure and function of the protein, according to the SIFT and PolyPhen prediction programs. Whether both alleles are affected by these mutations is not possible to determine by next generation sequencing, so the extent to which PRKAR1A activity is lost within the tumor cells is not known. However, the 5× amplification (five copies of the PRKAR1A gene) suggests that the allele carrying the deletion, missense, and synonymous mutations is the allele that underwent amplification, resulting in four copies of that allele within the tumor cells of the heart. This ratio was somewhat lower (3×) in the metastatic tumor analyzed from the brain. If this interpretation is true, both PRKAR1A alleles had suffered deleterious mutations. (The amplification of the tumor suppressor is unexpected and unexplained. Amplification is usually associated with oncogenes, not tumor suppressors like PRKAR1A, and may coexist with activating mutations within the amplified genes).
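As a rough, illustrative consistency check (a back-of-envelope calculation of our own, not taken from the sequencing report): if a fraction p of the cells in the heart specimen are tumor cells carrying five copies of the locus, four of which bear the clustered mutations, while the remaining cells carry two unmutated copies, the expected VAF would be

$$ \mathrm{VAF}_{\text{expected}} = \frac{4p}{5p + 2(1-p)} \approx \frac{4(0.5)}{5(0.5) + 2(0.5)} \approx 0.57 $$

for a tumor cell fraction of roughly 0.5 (see the purity estimate discussed in the next paragraph), which is of the same order as the 44-55% observed for those mutations.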
The VAFs for the mutations in genes other than PRKAR1A found in the tumor cells (not shown), all approximately 25%, are consistent with a tumor cell concentration of approximately 50% in the tissue analyzed (despite a higher estimate by histopathology), assuming that these mutations are heterozygous. That the insertion mutation in PRKAR1A is lower (12%) probably reflects the likely amplification in the opposite allele. The absence of detectable PRKAR1A mutations in the germline DNA from the buccal swab and right cystadenoma of the ovary argues against mosaicism for Carney complex, although ovarian cystadenomas have been noted to be more prevalent in patients with Carney complex compared to the general population. In this case, the ovarian cystadenomas resected many years earlier were probably sporadic and coincidental with the later development of a cardiac myxoma.
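The purity estimate in the preceding paragraph follows from a standard relationship (stated here for clarity, not as new data): a heterozygous somatic mutation is present on one of two copies in every tumor cell and absent from normal cells, so

$$ \mathrm{VAF} \approx \frac{p}{2} \quad\Longrightarrow\quad p \approx 2 \times \mathrm{VAF} = 2 \times 0.25 = 0.5, $$

where p denotes the tumor cell fraction of the specimen.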
There has been debate as to whether mutations in PRKAR1A play a role in the development of isolated cardiac myxoma. Initially, genetic analyses conducted on small patient cohorts with isolated cardiac myxomas showed that mutations were not found in these cases. In two recent studies, PRKAR1A mutations were found in sporadic cardiac myxomas. In one of these studies, 33 out of 103 tumors lacked expression of the PRKAR1A protein and nine patients had mutations in PRKAR1A identified [7]. A subsequent study found that seven out of eight sporadic tumors had mutations in PRKAR1A on whole exome sequencing analysis [8]. Whether this mutation alters the clinical course of sporadic myxomas is not clear. On histopathologic examination, myxomatous tissues from hereditary and isolated myxomas appear very similar. Although a cardiac myxoma associated with Carney complex is more likely to recur, there are no data to the best of our knowledge that these tumors are more likely to metastasize. In addition, prior case reports on brain metastases have not included whole exome sequencing data from the tumor tissues.

Unfortunately, our patient's lesions grew quickly and caused significant neurological deficits, prompting the rapid initiation of treatment. As brain metastases are an infrequent complication of a rare tumor, there is no standard accepted treatment and therapeutic strategies derive from experience with small numbers of patients. There are as yet no available agents to target the PRKAR1A mutation. Therapy with doxorubicin has been attempted in a patient with an oncotic aneurysm but the aneurysm continued to enlarge despite treatment [28]. Altundag et al. treated a patient similar to ours with whole brain radiotherapy and achieved stability for at least 4 years (the patient was still alive and stable at time of publication) [16]. Another report described a decrease in size of multiple parenchymal lesions with whole brain radiotherapy [33]. A third patient was treated with whole brain radiotherapy followed by chemotherapy with ifosfamide and doxorubicin and achieved long-term control [34]. Our patient's lesions showed partial response to whole brain radiotherapy and she has now been stable by imaging and clinical criteria for 1 year.
Our case illustrates a rare occurrence of atrial myxoma with brain parenchymal metastases and highlights the ability of myxoma cells to migrate through cerebral vessel walls, form an oncotic aneurysm, and then invade the brain parenchyma. It also provides further evidence that PRKAR1A mutation can occur in sporadic myxomas. The precise molecular features and risk factors that allow myxoma cells to invade surrounding tissue have yet to be elucidated. Among genes recognized as having functions related to tissue invasion and metastasis, none were found to be mutated within the tissue of the heart or brain lesions. Whether the PRKAR1A mutation in sporadic myxomas alters the clinical course of disease or increases the likelihood of metastasis is not clear. Further information on the genotype-phenotype correlation of PRKAR1A mutations is needed to help predict the clinical behavior of tumors with this abnormality. In addition, the PKA pathway regulated by PRKAR1A offers a possible target for systemic therapy for tumors harboring PRKAR1A mutations, in particular those tumors associated with the most serious complication of cardiac myxomas: cerebral metastases. | 5,677 | 2019-12-01T00:00:00.000 | ["Medicine", "Biology"] |
Assessment of clay materials for suitability in drilling mud formulation from part of Ondo State, South-West Nigeria
Bentonite used for drilling operations in the oil and gas industry in Nigeria is mainly imported into the country. This project evaluated the efficiency of additives and their function in enhancing the rheological and flow properties of local bentonite clay obtained from Ibule-soro in Ondo State, Nigeria. X-ray diffraction analysis of clay samples from previous research around the study area indicated large amounts of silica, alumina, and iron, suggesting that the clays were kaolinitic in nature. The clay samples were analyzed for their rheological properties and subsequently compared with imported bentonite, using the American Petroleum Institute (API) specifications as the standard. The results obtained showed that the local bentonite exhibited low viscosity and high filtration loss. Therefore, to enhance the quality of the clay, it was beneficiated with sodium carbonate (Na2CO3) and carboxymethyl cellulose (CMC). Seven different formulations were made: (20 g of imported bentonite), (20 g of local bentonite clay), (20 g of local bentonite clay + 3.3 g of Na2CO3 + 10 g of CMC), (25 g of local bentonite clay + 4.2 g of Na2CO3 + 10 g of CMC), (30 g of local bentonite clay + 5.0 g of Na2CO3 + 10 g of CMC), (35 g of local bentonite clay + 5.8 g of Na2CO3 + 10 g of CMC), and (40 g of local bentonite clay + 6.7 g of Na2CO3 + 10 g of CMC). The addition of the additives (CMC and Na2CO3) reduced the calcium content via the cation exchange process and enhanced the rheological properties of the mud samples.
The research work revealed that the beneficiation of local bentonite with sodium carbonate, the addition of a polymer (CMC), and an increase in clay concentration influenced the rheological and flow properties of the mud samples. With proper beneficiation, local Nigerian bentonites can be made suitable for drilling operations in the oil and gas industry.
Introduction
The Nigerian economy is known to be hugely dependent on the oil and gas industry as far as foreign exchange is concerned. Over the years, researchers have confirmed that drilling activities performed by oil companies require the importation of either the materials needed for fluid formulation or customized drilling fluids designed to fit the requirements of the Niger Delta formations. The related expense of importing these materials can amount to millions of dollars per year, causing harm to the country's economy (Afolabi et al. 2017). The importation of bentonite for the drilling of wells in the oil and gas industry has continually diverted huge sums of foreign exchange that could be budgeted for the socioeconomic stability of Nigeria (Dewu et al. 2011). Nigerian bentonite clay has no notable utilization despite its substantial deposits at various locations in the country, because of difficulties such as excessive fluid loss and a low swelling index (Falode et al. 2007). Due to these and other difficulties, bentonite used in Nigeria for drilling activities is mainly imported into the country (Apogu-Nwosu et al. 2011). Appropriate measures have not been taken regarding the modification of this clay before it can be utilized in the preparation of drilling mud, so it is vital to enhance its properties. This particular need has increased research on the utilization of local clay for drilling fluid applications in the oil and gas industry. As the interest in bentonite clays rises, there is a need to improve the properties of Nigerian bentonite to correspond to the API standard (Afolabi et al. 2017). Bentonite is also known as montmorillonite clay. It is an absorbent aluminum phyllosilicate clay that contains mainly montmorillonite. It was named after the Cretaceous Benton Shale near Rock River, Wyoming, by Wilbur C. Knight in 1898 (Hosterman and Patterson 1992). Its various types are named after the dominant element, such as calcium (Ca), potassium (K), aluminum (Al), and sodium (Na). There are two major types of bentonite: sodium and calcium bentonite. Most local bentonites are calcium rich, while foreign bentonites are sodium bentonites. In order to be used in industrial applications, such as drilling mud, they must be turned sodic and have a high swelling capacity. In this process, the clay is treated with Na2CO3, causing a double exchange reaction in which the Ca2+ cations of the clay combine with the (CO3)2- coming from the sodium carbonate in an aqueous solution, forming calcium carbonate (Brito et al. 2018). Sodium carboxymethylcellulose (CMC) is primarily a fluid loss reducer but also produces viscosity in freshwater and saline muds whose salt content does not exceed 50,000 mg/L (Bleler et al. 1993). CMC is generally available in a high- or low-viscosity grade, and either grade provides effective fluid loss control (Hughes and Jones 1990). The temperature limit of CMC is 121 °C, and it is not subject to bacterial degradation (Lummus and Azar 1986). The improvement of bentonite using CMC raises the viscosity, reduces the loss of drilling fluid, and maintains proper flow properties under conditions of moderate temperature, salinity, and pressure, providing improvements in the required technological properties (Brito et al. 2018). In stratigraphy and tephrochronology, entirely devitrified (weathered volcanic glass) ash-fall beds are known as potassium bentonite, with illite as the dominant clay species.
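The double exchange described above is often summarized schematically as follows (a simplified representation, writing the calcium-exchanged clay as Ca-bentonite and its sodium-exchanged counterpart as Na-bentonite):

$$ \text{Ca-bentonite} + \mathrm{Na_2CO_3} \longrightarrow \text{Na-bentonite} + \mathrm{CaCO_3}\downarrow $$

Because the sparingly soluble calcium carbonate precipitates out of solution, it is removed from the exchange equilibrium, which drives the conversion toward the sodium form of the clay.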
Kaolinite can occasionally be the dominant clay species, with montmorillonite and illite as alternative clay species. Bentonite can be utilized as an additive for filtration control and viscosity. Bentonite is generally obtained from weathered volcanic ash, mainly in the presence of water (Magzoub et al. 2017). Sodium montmorillonites are viable bentonite ores which differ broadly in the quality and quantity of the swelling clay. Calcium montmorillonites are bentonite ores of less significant value and can be treated to meet API specifications by the addition of some main additives such as sodium carbonate, CMC, starch or polyphosphates, and long-chain synthetic polymers. Bentonite is, therefore, an essential constituent of drilling fluid, as it limits invasion of drilling fluid into the wellbore and controls mud cake formation (Akintunde 2012). Deposits of bentonite clay cut across different regions of Nigeria, and some regions of the country may have a larger number of deposits than others. Previous work carried out on Nigerian bentonites indicated that they are low-grade calcium montmorillonites, hence the need for beneficiation to improve their quality using sodium salt as part of the process. This project was designed to evaluate the efficiency of additives in enhancing the rheological and flow properties of local bentonite clay obtained from Ibule-soro in Ondo State, Nigeria. The specific objectives are to:
• Source, process, and characterize the local raw clay.
• Determine the physiochemical properties of the local bentonite material.
• Beneficiate the local bentonite clay with sodium carbonate (Na2CO3) and carboxymethyl cellulose (CMC) to upgrade it to API standard.
• Examine the impact of a stepwise increase in the concentration of the mud and additives on the drilling fluid rheological properties.
Location of study area
The local bentonite used in this study was obtained from Ibule-Soro town in Ifedore local government area of Ondo State, Southwestern Nigeria (Fig. 1). Its geographical coordinates are longitude 5°7′0″ E (5.1166667938232) and latitude 7°18′0″ N (7.3000001907349), at an elevation of 1,237 ft (377 m) above sea level. The area is accessible by roads and footpaths, and it occupies about 0.83 km2 in aerial extent. Topographically, the area is characterized by a relatively rugged, undulating landscape with outcrops of charnockites and migmatite gneiss, along with other gneissic rocks forming highlands which range between 600 ft and 1,500 ft above sea level. It is situated within the Precambrian Basement Complex, with outcrops (Fig. 2) that are predominantly gneiss and migmatite (Temitope and Opeyemi 2012).
Materials and method
The materials used for this work include: (i) raw, non-beneficiated local bentonite clay, obtained from Ibule-soro in Ondo State, Nigeria, whitish/greyish in appearance; (ii) imported/foreign bentonite clay used as the standard; and (iii) additives: sodium carbonate (Na2CO3) and carboxymethyl cellulose (CMC).
Quantitative analysis of the chemical compositions of clay materials was carried out by Olubayode et al. (2016) on both processed and unprocessed clays from Ondo State, Kano State, and some other southwestern states in Nigeria (Table 1 and Fig. 3). This was done with the aid of an EDX 3600B Energy Dispersive X-ray Fluorescence (EDXRF) spectrometer. From their results, they observed large amounts of silica, alumina, and iron, suggesting that the clays were kaolinitic in nature and could be used for a variety of purposes.
The clay material used for this study was dried under moderate conditions and crushed by pounding in a mortar (Fig. 4). Sieve analysis was carried out on the crushed clay with the use of a sieve shaker, setting the working time to 120-130 s and using successive mesh sizes of 500, 300, 150, and 75 microns to obtain fine particles (Fig. 5). The experimental procedure involved the addition of a calcium bentonite sample to an aqueous sodium carbonate solution to form a bentonite suspension, which was heated and stirred continuously to form sodium bentonite and calcium carbonate. Calcium bentonite is converted to sodium bentonite by combining chemical (addition of sodium carbonate), mechanical (agitation), and thermal (heating) treatment procedures.
The sodium carbonate solution was formulated by dissolving sodium carbonate powder (soda ash) in distilled water. This solution served as a source of carbonate and sodium ions for an ion exchange process with the Ca-bentonite, in which the calcium can be precipitated as calcium carbonate (Magzoub et al. 2017). The addition of the calcium bentonite sample to the already prepared sodium carbonate solution forms a bentonite suspension. The Ca-bentonite was treated with Na2CO3 using a sodium carbonate/bentonite weight ratio of 1:6 while varying the sodium carbonate content and bentonite concentration. For this experiment, five different samples were formulated with varying amounts of sodium carbonate and bentonite while maintaining a standard measurement of 350 ml of distilled water (standard laboratory barrel), as listed below:
(a) 3.3 g of Na2CO3 to 20 g of local calcium bentonite.
(b) 4.2 g of Na2CO3 to 25 g of local calcium bentonite.
(c) 5.0 g of Na2CO3 to 30 g of local calcium bentonite.
(d) 5.8 g of Na2CO3 to 35 g of local calcium bentonite.
(e) 6.7 g of Na2CO3 to 40 g of local calcium bentonite.
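The Na2CO3 masses in formulations (a)-(e) follow directly from the stated 1:6 sodium carbonate/bentonite weight ratio; the short check below (our own illustration, not part of the paper's workflow) reproduces them:

```python
# Reproduce the Na2CO3 masses for formulations (a)-(e) from the 1:6
# sodium carbonate / bentonite weight ratio (illustrative check only).
ratio = 1 / 6
for bentonite_g in (20, 25, 30, 35, 40):
    soda_ash_g = bentonite_g * ratio
    print(f"{bentonite_g} g bentonite -> {soda_ash_g:.1f} g Na2CO3 in 350 ml distilled water")
# Prints 3.3, 4.2, 5.0, 5.8 and 6.7 g, matching the formulations listed above.
```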
Following its preparation, the bentonite suspension was heated and stirred continuously with the aid of a magnetic stirrer and a magnetic stirrer hot plate (Fig. 6) for 3 h, maintaining a temperature of 70 °C and a speed of 45 rpm (revolutions per minute).
The continuous heating and stirring increased the bentonite particle or platelet size, sped up the swelling process, and caused the expansion of the bentonite platelets for increased swelling and an enhanced ion exchange process. It brought about the movement of sodium ions to the surface of the bentonite layer, allowing increased Na+ activation and an increase in the Na/Ca ratio, and further enhanced the rheological properties (Magzoub et al. 2017). After the continuous heating and stirring process, 10 g of the CMC additive was added to the heated bentonite suspensions and stirred for 10 min with a magnetic stirrer to form a bentonite-CMC suspension and to achieve a homogeneous dispersion in which the polymer chains are well confined by the clay particles. The rheological properties of drilling mud consist of plastic viscosity, apparent viscosity, yield point, mud density, specific gravity, and alkalinity. The experiment was carried out on the local bentonite, the foreign bentonite (as control), and the local bentonite with sodium carbonate and CMC (Fig. 7). The main equipment used for this experiment was a Baroid rotary viscometer, a coaxial viscometer with set speeds of 600 RPM, 300 RPM, 200 RPM, 100 RPM, and 3 RPM (GEL) that are switch-selectable with the RPM handle. The gel strength was determined at 10 s after the mud was mixed in the cup and again at 10 min after the mud was mixed in the cup; the 3 RPM speed of the rotary viscometer was utilized to determine gel strength. The readings recorded from this experiment include:
Gel strength, 10 s (lb/100 ft²) = maximum dial deflection after 10 s
Gel strength, 10 min (lb/100 ft²) = maximum dial deflection after 10 min
The experiment for the determination of the filtration properties was also carried out on the local bentonite, the foreign bentonite, and the local bentonite with sodium carbonate and CMC. The main apparatus utilized for this experiment was a standard filter press comprising a mud reservoir (top cap, cell, rubber gasket, and base cap) mounted in a frame, filter paper, a graduated glass cylinder, and a pressure source (compressed nitrogen cylinder), using a standard cell pressure of 100 psi for 30 min at room temperature (27 °C). pH determination indicates the acid or alkaline property of the drilling mud. In drilling fluid, the acidity and alkalinity can be estimated by determining the hydrogen ion concentration.
For an aqueous solution, the pH meter measures the electric potential created between a glass electrode and a reference electrode. This experiment was also carried out on the local bentonite, the foreign bentonite, and the local bentonite with sodium carbonate and CMC. The rheological properties of the raw local bentonite (Table 2) were also found to be extremely low when compared with the foreign bentonite. Also worthy of note was the very high fluid loss value obtained from the filtration test performed on the local bentonite clay. Tables 3 and 4 show the rheological parameters of the mud when varied quantities of the raw bentonite (20 g, 25 g, 30 g, 35 g, and 40 g) were beneficiated with 10 g of CMC (polymer) and different concentrations of Na2CO3 (3.3 g, 4.2 g, 5.0 g, 5.8 g, and 6.7 g, respectively).
Results and discussion
The quantity of local bentonite and the salt concentration were varied to examine their effect on the rheological properties of the mud, and also to investigate the effect of the polymer (CMC) on different concentrations of the local bentonite. From the results, the rheological properties of the beneficiated local bentonite were slightly improved. From Table 3, it was observed that, of the five samples, the mud showed the least improvement in its rheological properties when the local bentonite (20 g + 350 ml of distilled water) was beneficiated with 3.3 g of Na2CO3 and 10 g of CMC.
Gradual enhancement of the rheological properties of the samples was observed as the quantity of local bentonite and the salt concentration increased. It was also observed from Tables 3 and 4 that the filtration loss of the beneficiated samples reduced significantly when compared with the unbeneficiated local bentonite (Table 2). Table 4 showed the best improvement in terms of filtration loss (41.5 ml) when 40 g of mud was beneficiated using 350 ml of distilled water, 6.7 g of Na2CO3, and 10 g of CMC.
The presence of CMC in the samples assisted the reduction in filtrate loss. The plastic viscosity, yield point, and apparent viscosity values of the different mud samples depended on the 600 rpm and 300 rpm readings, and these viscosity values were influenced by the addition of CMC to the mud samples. An increase in the bentonite concentration leads to a greater influence of CMC on the viscosity of the mud samples. The beneficiation of the local bentonite with soda ash improved the swelling capacity of the clay and the ability of the clay particles to flocculate, bringing about enhanced mudflow properties. The improvement of the treated bentonite can be attributed to several reasons. First, differences in mineralogy influenced by the depositional environment can affect the mud properties (Omole et al. 2013). Secondly, the combined heating (thermal) and stirring (agitation) treatment procedure can improve the properties of the clay, as this procedure increases the bentonite particle or platelet size, enhances the conversion of calcium-based bentonite to sodium-based bentonite, increases the Na+/Ca2+ ratio, and allows Na+ activation and swelling. Thirdly, the conversion of calcium smectite to sodium smectite enlarges the spacing between the clay particles: Na+ is a monovalent cation that binds to a single charge-deficient site and allows the sheets to separate when dispersed in water, unlike Ca2+, which is a divalent cation that can bridge negative charges on two sheets and hence holds two sheets together. Fourthly, the improvement of mud properties can be a result of their free swell volume, as reflected in their physiochemical properties in terms of cation exchange ability and expandable and non-expandable minerals (Magzoub et al. 2017).
A high mud density helps manage the formation pressure and improves the stability of the wellbore. Figure 8a is a histogram showing the comparison between the densities of the standard bentonite, the local bentonite, and the five different test samples. The different density readings were compared with that of the imported bentonite, and it was observed that there was a progressive rise in the densities of the mud samples. The addition of a viscosifier to a mud sample can lead to an increase in the mud density, and the increase in mud density here was greatly influenced by adding carboxymethyl cellulose (CMC) to the different samples. An increase in the volume of bentonite resulted in a greater effect of CMC on the mud samples, further increasing their density.
Specific gravity defines the density or weight of fluid compared to the density of an equal volume of water at a specified temperature. Figure 8b is a plot showing the comparison between the specific gravity of the imported bentonite, local bentonite, and the five samples. The different dial readings were compared with the standard bentonite, and it was observed that there was also an increase in the specific gravity of the different samples. The increase in the specific gravity of samples was influenced by the addition of CMC to the five samples at different concentrations of bentonite and sodium carbonate.
pH is a measure of the concentration of hydrogen ions in aqueous solution. If the water used in the preparation of a drilling mud is too hard, or the pH value is not within the range of 8.5-9.5, then the mud will take a longer period to hydrate, or it might not hydrate fully. Figure 9a shows a histogram comparing the pH of the imported bentonite, the local bentonite, and the five different samples. A critical look at the plot indicated an increase in the pH of the mud samples. This increase occurred as a result of the beneficiation of the mud samples by the addition of sodium carbonate (Na2CO3). Sodium carbonate is alkaline in nature, giving a basic solution in water. The higher the bentonite and sodium carbonate concentrations, the higher the pH values of the mud samples. The ion exchange process, in which sodium carbonate is converted to calcium carbonate, can also influence the pH of the mud samples.
A mud's viscosity illustrates the amount of resistance of the fluid to shear stress. The viscosity of the drilling fluid can be improved by treatment with polymers such as CMC. Figure 9b shows the comparison between the viscosities at 600 rpm of the imported bentonite, the local bentonite, and the five different samples. From the chart, it was observed that the viscosity values were generally poor when compared with the standard mud sample. However, with beneficiation, there was an improvement in the viscosity of the treated mud samples when compared with the untreated local bentonite. CMC is suitable for increasing the viscosity of the clay suspension and stabilizing it. The higher the bentonite concentration, the greater the polymer (CMC) effect on the viscosity of the mud samples. From the results, the most improved viscosity was observed in sample five (40 g of bentonite + 6.7 g of Na2CO3 + 10 g of CMC), which had the highest concentration of bentonite. Figure 10a shows the comparison between the viscosities at 300 rpm of the imported bentonite, the local bentonite, and the five different samples. The viscosity of the beneficiated clay was also low relative to the standard bentonite. The addition of CMC to the different mud samples led to a slight increase in the viscosity of the treated local samples compared to the untreated local bentonite. From the chart, sample five (40 g of local bentonite + 6.7 g of Na2CO3 + 10 g of CMC) had the most improved viscosity at 300 rpm. The higher the bentonite concentration, the greater the polymer (CMC) effect on the viscosity of the mud samples. The results also revealed a higher mud viscosity at 600 rpm than at 300 rpm. Plastic viscosity is the resistance to fluid flow due to mechanical friction in the drilling mud, which depends on factors such as the shape, size, and concentration of solids and the viscosity of the continuous fluid phase. Figure 10b shows the comparison between the plastic viscosities of the standard bentonite, the local bentonite, and the five different samples. The values used for the plot were obtained from the difference between the measurements at 600 rpm and the measurements at 300 rpm. The plastic viscosity was relatively low when compared with the standard, and no significant improvement was observed with an increase in the concentration of Na2CO3 and CMC. The viscosity of a fluid at a given shear rate is known as the apparent viscosity; it is half of the 600 rpm dial value, so the apparent viscosity values depend on the 600 rpm dial values, and the highest value at 600 rpm yields the highest apparent viscosity. Figure 11a shows the comparison between the apparent viscosities of the imported bentonite, the local bentonite, and the five different samples; sample five (40 g of local bentonite clay + 6.7 g of Na2CO3 + 10 g of CMC) had the highest apparent viscosity value. An increased bentonite concentration led to increased apparent viscosity. Compared with the imported bentonite, the values were low, and there was no significant improvement in apparent viscosity when beneficiated with Na2CO3 and CMC.
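The viscometer-derived quantities discussed in this section follow the usual API-style relations between the 600 rpm and 300 rpm dial readings: plastic viscosity as their difference, apparent viscosity as half the 600 rpm reading, and yield point as the 300 rpm reading minus the plastic viscosity. The sketch below illustrates the arithmetic; the numeric readings are placeholders, not values from this study.

```python
# API-style rheology calculations from Fann/Baroid viscometer dial readings.
# theta_600 and theta_300 below are placeholder readings, not data from this study.

def rheology(theta_600, theta_300):
    pv = theta_600 - theta_300   # plastic viscosity, cP
    av = theta_600 / 2.0         # apparent viscosity, cP
    yp = theta_300 - pv          # yield point, lb/100 ft^2
    return pv, av, yp

pv, av, yp = rheology(theta_600=30.0, theta_300=20.0)
print(f"PV = {pv} cP, AV = {av} cP, YP = {yp} lb/100 ft^2")
```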
The resistance to initial fluid flow, or the stress needed to move the fluid, is known as the yield point. Figure 11b shows the comparison between the yield points of the standard bentonite, the local bentonite, and the five different samples. From the plot, the different yield point values were obtained subject to the values of viscosity at 300 rpm and the plastic viscosity. Since there was a slight improvement in the 300 rpm viscosity reading and the plastic viscosity, the yield points of the samples slightly improved when compared with the imported bentonite. The capability of the drilling mud to suspend drill cuttings when circulation is ceased is shown by its gel strength; a higher gel strength results in a higher tendency of the drilling mud to suspend drill cuttings, and vice versa. It is measured once the drilling mud has been at rest for a particular period (10 s). The gel strength at 10 s indicates the strength of the attractive forces (gelation) in a drilling fluid under static conditions. Figure 12a shows the comparison between the gel strengths at 10 s of the standard bentonite, the local bentonite, and the five different samples (Fig. 12: histograms showing the gel strength at 10 min (a) and the fluid loss (b) of the samples). The result here indicates that the gel strength of the different samples was constant with beneficiation but declined when treated with 25 g and 30 g of local bentonite. This shows that, at the initial gel strength, little stress is required for the movement of the mud. The values were obtained at a fixed speed of 3 rpm in the viscometer. Generally, the gel strength was low when compared with the standard. Figure 12b is a histogram showing the comparison between the fluid loss of the imported bentonite, the local bentonite, and the five different samples. The lower the fluid loss, the more suitable the drilling mud, and vice versa. It was observed from the plot that the local bentonite exhibited a very high level of fluid loss when compared with the standard bentonite. However, after beneficiation with increasing volumes of the local bentonite, there was a relative improvement, as the filtration loss of the treated samples reduced drastically compared to the untreated bentonite. Sample five (40 g of local bentonite clay + 6.7 g of Na2CO3 + 10 g of CMC) showed the greatest improvement in filtration loss. The presence of CMC in the samples helped to reduce filtrate loss, as CMC is a fluid loss reducer.
Conclusion
As the interest in bentonite clays rises, there is a need to improve the rheological properties of Nigerian bentonite to bring it to par with the API international standard. This is necessary to save the country the huge sums of money lost to international oil companies operating in Nigeria through the importation of drilling mud of superior quality. To maximize the utilization of local bentonite clay for drilling applications, beneficiation of calcium bentonite with the use of sodium carbonate, polymer (CMC), and other suitable additives has become important. The results from this research revealed that the beneficiation of local clay with CMC and Na2CO3, through a combination of thermal and mechanical treatment procedures, enhanced some of the rheological and flow properties of the mud samples. A gradual increase in the concentration of bentonite and Na2CO3 also influenced the viscosities and properties of the mud samples, even though the viscosity values were relatively low. The higher the sodium carbonate concentration, the higher the alkalinity (pH) of the mud sample. The mud sample with the highest bentonite and sodium carbonate concentration (40 g of local bentonite clay + 6.7 g of Na2CO3 + 10 g of CMC) showed the most improved flow and rheological properties after treatment when compared with the API standard. This implies that, with a higher concentration of the additives, the quality of the local bentonite can be upgraded to the desired standard for drilling operations. In order to enhance the viscosity of the local bentonite, further investigation into the use of other chemical additives for the modification of the rheological properties of the mud is encouraged. Further research could also focus on the economic analysis of beneficiating clay using local additives against imported bentonite. | 6,389.6 | 2020-07-03T00:00:00.000 | ["Materials Science", "Agricultural And Food Sciences"] |
A Brief Analysis of the Current Situation and Development Trend on Green Transportation Standard System
It is of great significance to improve the green transportation standards system in order to promote the construction of ecological civilization and the circular, low-carbon, sustainable development of transportation. This paper summarizes the status of the green transportation standards system and the progress of standard publication and revision. According to the key areas and links of green transportation development, the development trends of green transportation and the requirements for green transportation standards are analysed, and policy recommendations for improving the construction of the green transportation standards system are proposed. The paper aims to provide decision-making support for the industry in promoting the development of green transportation standardization.
Introduction
Green transportation is an important strategy for the transportation industry to strengthen the construction of ecological civilization and achieve circular, low-carbon, sustainable development. In 2013, the Ministry of Transport (MOT) published the Guiding Opinions on Accelerating the Development of Green Cycle and Low-Carbon Transportation, which proposed the development goal of basically establishing a green cycle and low-carbon transport system by 2020 [1]. To promote green transportation construction, it is necessary to adhere to resource conservation and environmental protection, pay attention to the organic unity of development speed, quality, and efficiency, strengthen the top-level design of the standard system, and give full play to the role of standards [2][3]. In order to coordinate the development of green transportation standardization, MOT issued the Green Transportation Standard System (2016) in 2016, which proposed 221 important energy conservation and environmental protection standards. These standards are of great significance for promoting the application of advanced energy conservation and environmental protection technologies and products, reducing impacts on the ecological environment, improving energy efficiency, and optimizing the transportation energy structure [4].
In the past few years, MOT has taken the green transportation standard system as a guide to strengthen the revision and implementation of green transportation standards, and has made great progress in structural optimization, resource conservation, technological innovation, and management improvement [5]. However, compared with the requirements for high-quality development of transportation in the new era, China's current green transportation standardization work cannot fully support the industry's needs in ecological protection, resource conservation, pollution prevention, and energy conservation and emission reduction. Therefore, it is of great significance for promoting the green development of transportation to systematically review the development basis of the green transportation standard system, deeply analyse the development trends of green transportation, and put forward strategies and suggestions for standardization development.
Green Transportation Standard System (2016)
The Green Transportation Standard System (2016) includes product and service standards and engineering construction standards formulated by MOT that relate to energy conservation and environmental protection for highways and waterways. The design of the standard architecture comprehensively considers the connotation of green transportation, the key energy conservation and environmental protection tasks during China's Thirteenth Five-Year Plan, and the standardization requirements for energy conservation and environmental protection technology and management in highway and water transportation. The system is divided into 7 categories: basic standards; energy conservation and carbon reduction standards; ecological protection standards; pollution prevention standards; resources recycling standards; monitoring, assessment, and regulatory standards; and related standards (see Figure 1). The standard system includes 221 standards, divided into 3 basic standards, 48 energy conservation and carbon reduction standards, 14 ecological protection standards, 52 pollution prevention standards, 14 resources recycling standards, 54 monitoring, assessment, and regulatory standards, and 36 related national standards for energy conservation and environmental protection. The number of items already published in the standard system is 136, including 48 national standards and 78 industry standards, and the number of items still to be formulated is 95, including 15 national standards and 80 industry standards.
Progress of Green Transportation Standards
In accordance with the standard demand planning of the Green Transportation Standard System (2016), MOT has strengthened the revision of energy-saving and environmental protection standards. Since the issue of the standard system, 23 new standards have been issued, and another 23 standard plans are being implemented; the statistics are shown in Table 1. For energy conservation standards, newly issued standards include limits of fuel consumption for commercial vehicles, energy consumption evaluation methods for port equipment, energy efficiency grade evaluation of expressway mechanical and electrical facilities, technical conditions for shore-to-ship power supply systems, etc. For ecological protection standards, they include drawings for highway revegetation design and highway vegetation restoration materials. For pollution prevention standards, they include sewage treatment facilities in highway service areas, highway noise barriers, construction of terminal vapor recovery facilities, oil fingerprint identification of waterborne oil spills based on stable isotope analysis, oil booms, belt skimmers, etc. For statistical assessment standards, they include the main pollutant statistical indexes and accounting methods of transportation, and technical requirements for the assessment of green transportation facilities.
Standards under development include limits of fuel consumption for natural gas commercial vehicles, energy efficiency of commercial vehicles and intensity levels of carbon dioxide emission, oil-water separation systems for oil spill control, chemical adsorbents, bio-bitumen for pavements, technical requirements for pollutant emissions from ships, methods of checking and measuring energy utilization efficiency for port cranes, investment statistical indexes and accounting methods for transportation environmental protection, requirements for compiling ship air pollutant emission inventories, technical specifications for environmental impact assessment of highway network planning, inland waterway and port layout planning, etc.
Trend Analysis of Green Transportation Standards Development
Compared with the new ideas and strategic requirements of national ecological civilization construction, problems still exist: transportation development modes remain relatively extensive, the transportation structure is unreasonable, the green transportation governance system is incomplete, and governance capacity needs improvement [7]. At the end of 2017, combining the relevant requirements from the central authorities with the characteristics and development goals of the industry, MOT issued the Opinions on Comprehensively and Deeply Promoting the Development of Green Transportation. The opinions focus on key areas, key links, and key issues, and aim to make up for shortcomings and strengthen weak points through the optimization of the transportation structure, organizational innovation, green travel, intensive resource use, equipment upgrades, pollution prevention, ecological protection, and other aspects, thereby promoting the formation of green development methods and lifestyles. Moreover, the opinions clearly propose the task of improving the green transportation standard system [8][9]. The future development of green transportation should shift from passive adaptation to active advancement, from pilot-driven progress to all-round promotion, and from government-led promotion to governance shared by all.
Moreover, future development should rely on green transportation infrastructure and clean, efficient transportation equipment to improve the quality of transportation services, thereby forming a spatial pattern, industrial structure, production mode, and lifestyle oriented towards resource conservation and environmental protection.
In the future, the development of green transportation should adhere to the goals of resource conservation and environmental friendliness, promote the formation of green transportation development methods and green travel modes, and promote the harmonious unity of transportation and the ecological environment. In terms of strengthening ecological environmental protection, it is necessary to strictly observe the ecological red line, promote ecologically sound route and site selection, strengthen ecological environmental protection design, and strengthen the protection and restoration of ecosystems. In terms of promoting the intensive and economical use of resources, it is necessary to coordinate spatial planning and layout and vigorously promote the recycling and comprehensive utilization of construction and scrap materials. In terms of energy conservation, emission reduction, and pollution prevention, it is necessary to promote the application of clean energy and effectively prevent highway noise pollution and the water and air pollutant emissions of ships and ports. In terms of improving the green transportation development model, it is necessary to vigorously implement public transportation priority strategies, actively promote green transportation technologies and products, and accelerate the innovation and promotion of advanced, energy-saving, low-carbon applicable technologies and products.
Strengthen the dynamic management of green transportation standard system
It is necessary to benchmark against international and foreign transportation energy conservation and environmental protection standards, and to track changes in important technical indicators and the research and development of advanced technology products. In addition, it is necessary to set up a dynamic adjustment mechanism for the green transportation standard system, and to promote the rapid transformation of China's scientific and technological innovation achievements in green transportation, energy saving, and environmental protection into standards.
Strengthen the pre-research and formulation of key green transportation standards
Combined with the key points of the construction and development of the green transportation system, it is necessary to strengthen the formulation of environmental protection standards for transportation infrastructure, and to focus on promoting the formulation and revision of design standards for green highway, green port, and green channel engineering. In order to reduce and prevent the emission of traffic pollutants, it is necessary to formulate technical standards such as the control of air pollutant emissions from operating vehicles, ships, and construction equipment; noise pollution prevention and sewage discharge in waterway areas; the resource utilization of solid waste; and the control of noise and exhaust pollution in extra-long tunnels. It is also necessary to improve standards for transportation energy-saving technologies and products, expand port application standards for liquefied natural gas (LNG) clean energy and shore power energy-saving technologies, promote the development of energy-saving retrofit standards for boom lifting equipment, improve the construction of standards relevant to new energy vehicle applications, and carry out the development of standards for monitoring systems for public transportation vehicle charging facilities and long-distance single-phase electric power supply systems.
Emphasize the demonstration and leading role of green transportation standards
It is necessary to actively assist the construction of the Xiong'an New Area in Hebei province and, on the basis of sustainable transportation development and solving traffic congestion, to establish and implement forward-looking, high-quality green transportation standards, increase the proportion of trips made by public transportation, establish a new type of public transport system with high-quality services and diverse forms, and create a demonstration model of green and intelligent transportation system standardization [10]. Relying on demonstration projects such as the scientific and technological demonstration projects of MOT, transportation quality projects, national public transport metropolises, and transportation energy conservation and emission reduction, it is necessary to strengthen the full implementation of green transportation standards and to drive the utilization and extension of new technologies, products, and methods for transportation energy conservation and environmental protection throughout the country. | 2,416.4 | 2020-01-01T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Business"
] |